BACKUPMON v1.5.10 - Mar 1, 2024 - Backup/Restore your Router: JFFS + NVRAM + External USB Drive! (**Thread closed due to age**)


I am a novice, but when I read the cru script, it appears to me that cru a performs a cru d automatically. Therefore there is no harm in calling cru a multiple times with the same id.

Here is the snippet:
Bash:
# $ID, $F, and $N are set earlier in the cru script; $F is read and $N is the file being rebuilt.
# First, strip any existing crontab line tagged with this job ID...
grep -v "#$ID#\$" $F >$N 2>/dev/null
# ...then, if the action is "a" (add), append the new job tagged with the same ID.
if [ "$1" = "a" ]; then
        shift
        shift
        echo "$* #$ID#" >>$N
fi
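For illustration, that replace-on-add behavior means the following two identical calls leave exactly one entry in the crontab. The job ID, schedule, and script path here are made-up placeholders, assuming the usual cru a <id> "<schedule> <command>" syntax:
Code:
# Hypothetical job, for illustration only
cru a MyBackup "0 3 * * 0 sh /jffs/scripts/backupmon.sh"
# Calling "cru a" again with the same ID first strips the old "#MyBackup#"
# line and then appends the new one, so no duplicate entry is created
cru a MyBackup "0 3 * * 0 sh /jffs/scripts/backupmon.sh"
# List the crontab to confirm there is a single MyBackup entry
cru l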
 
True - checking first is not absolutely required, provided that needlessly re-writing the crontab file is acceptable, and that logging of the anomaly (jobs missing when they should be present) is not desired.
 
Agree. For someone who is not looking to debug but just wants to stop the issue until others can fix it, your idea may work: just add a copy of all their custom cru a commands to the firewall-start script.
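As a minimal sketch of that workaround (the job IDs, schedules, and script paths below are placeholders, not anyone's actual entries), a firewall-start along these lines simply re-asserts the jobs every time the firewall is restarted, relying on the replace-on-add behavior shown in the cru snippet above:
Bash:
#!/bin/sh
# /jffs/scripts/firewall-start -- sketch only; job IDs and schedules are placeholders.
# Re-assert custom cron jobs every time the firewall is (re)started.
# "cru a" removes any existing entry with the same ID before appending,
# so repeating these calls does not create duplicate crontab lines.
cru a MyBackup "0 3 * * 0 sh /jffs/scripts/backupmon.sh"
cru a MyOtherJob "*/30 * * * * sh /jffs/scripts/some-other-script.sh"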
 
Finally, making a backup via cron is important only if you are changing your configuration on a daily/weekly basis.
Otherwise it is worth running a manual backup only when needed, to avoid increasing the used space on the backup disk.
 
I'm very tempted to add these to the firewall-start script to bypass the issue... but services-start typically is the "preferred" file to drop these into. I'd really like to see closure on this cron job deletion issue before changing things over to firewall-start.
 
I want to say @dave14305 was right about the lock file issue. Imagine crontab being invoked numerous times from many different script calls at the same time; let's say all script developers are doing this from "services-start", for example. If the lock file protection in cru is not strong enough to stop concurrent manipulation of the crontab file, then any number of jobs could be missing whenever the new crontab file is generated, especially if several scripts are performing cru commands in the background at the same time. This would also explain why it is more stable to put your crontab additions inside firewall-start (i.e. because no one else is doing that). While your idea of using firewall-start is a creative way of resolving your own disappearing cron rules, it will only do so temporarily; if all the other developers start doing the same, you will be back to square one (a.k.a. nuking the crontab).
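If weak locking really is the culprit, one blunt self-defense a script could take is to wrap its own cru calls in a crude mutex. This is only an illustrative sketch under that assumption (the lock path, logger tag, and job are made up), not a statement about how cru itself locks:
Bash:
#!/bin/sh
# Illustrative sketch: serialize this script's cru calls behind a lock directory.
# LOCKDIR, the logger tag, and the job below are placeholders.
LOCKDIR="/tmp/cru_addons.lock"
tries=0
until mkdir "$LOCKDIR" 2>/dev/null; do      # mkdir is atomic, so it doubles as a mutex
    tries=$((tries + 1))
    if [ "$tries" -ge 30 ]; then
        logger -t myscript "Gave up waiting for the cru lock"
        exit 1
    fi
    sleep 1                                 # another script may be rewriting the crontab
done
cru a MyJob "*/15 * * * * sh /jffs/scripts/myscript.sh"
rmdir "$LOCKDIR"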
 

You seem to be saying we "know for a fact" that the missing jobs are being caused by multiple scripts trying to use cru at/near the same time, and some failure of the lock file logic. Has that been proven? (I know there are some other threads on this, but I haven't been following them all so may very well have missed something.)

If that has been proven, then I agree -- moving cru won't help if multiple scripts do it.
 

I am not saying anything is "true"; I am merely saying that if it is, then your logic might become misplaced at some point, especially if every developer and their mother decides to place cru logic in firewall-start without heeding the possibility that @dave14305 is right.

I am using this post as a reference, where @dave14305 mentions that one of his improvements would be to the locking mechanism, which would prevent concurrent runs of cru.
 
Ok, so it seems the concern is that this isn't guaranteed to solve the problem in aggregate -- and we might end up "back where we started." That is a valid concern. Since we don't know the root cause, nobody can be sure either way without actually trying (testing) it. What is the ECD for knowing with 100% certainty what the root cause is?

I can say -- in my environment -- the missing cron jobs were definitely NOT related to multiple cru calls in services-start. My crontab was always perfect, with all jobs intact, after boot (which is the only time services-start runs). For me, the "missing jobs" always happened after several days or weeks of router uptime. Are some folks seeing the jobs missing immediately after boot? That was never the case for me, which is why I ruled out "cru collisions in services-start" as the cause.

I scoured the logs for months, and didn't find anything definitive, but noticed that each instance had been preceded by a WAN outage/restoration. Since restoration kicks off firewall-start, I added the "check and add if not found" script there. That was in December 2022, and I have not had a single missing cron job since. I would certainly classify this as a "mitigation measure" rather than a "fix", but I couldn't engineer a "fix" since I couldn't find the "root cause."
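A minimal sketch of that kind of "check and add if not found" block inside /jffs/scripts/firewall-start might look like this (the job ID, schedule, and script path are placeholders):
Bash:
# Sketch only -- job ID, schedule, and script path are placeholders.
if ! cru l | grep -q "#MyBackup#"; then
    # Leave a trail in syslog so the anomaly is visible later
    logger -t firewall-start "cron job MyBackup was missing - re-adding it"
    cru a MyBackup "0 3 * * 0 sh /jffs/scripts/backupmon.sh"
fi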

I appreciate the caution. It is never ideal to implement any fix (or mitigation) when the "root cause" isn't known. Maybe you're right, and it is best to just do nothing unless/until that root cause is found. I am not advocating for a change to backupmon or any other scripts in this regard; I was just sharing how I made sure the jobs were added back if/when they went missing. And I would be the first to insist that nothing ever be implemented without adequate testing.
 
Yea, I do not like to rely on the crontab myself because I have seen this instability other users are reporting, but I haven't seen it happen in a long time. Well, I don't necessarily think the problem is related specifically to services-start. It more so could be linked to service-event. Like you said, though, it definitely needs to be investigated before final decisions are made.
 
It more so could be linked to service-event.

I actually came to the opposite conclusion -- that using service-event (or firewall-start) is the way to mitigate the problem, rather than anything causing it.

When I saw jobs missing, they were always the jobs that had been added from services-start or post-mount. I had been using services-start for mine, while scribe's logrotate job is added from post-mount, and those were the jobs "missing" after certain WAN down/WAN restored events occurred. Conversely, all the jobs added by scripts hooked in service-event, like diversion and skynet, were still present.

My best guess is that some (unknown) event, or combination of events, causes the entirety of crontab to be lost. Scripts hooked via service-event or firewall-start then have another chance to check and add jobs again, whereas scripts hooked via services-start and post-mount generally get only one shot, at boot time.
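For completeness, the same kind of check could live in /jffs/scripts/service-event, which (as I understand the convention) is called with the action as $1 and the service name as $2. Everything specific below -- the event filter, job ID, and schedule -- is an assumption for illustration only:
Bash:
#!/bin/sh
# /jffs/scripts/service-event -- sketch only; assumes $1 = action, $2 = service name.
if [ "$2" = "wan" ]; then
    # A WAN-related event is one of the suspects for wiping the crontab,
    # so take the chance to re-assert the job if it has gone missing.
    cru l | grep -q "#MyBackup#" || cru a MyBackup "0 3 * * 0 sh /jffs/scripts/backupmon.sh"
fi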
 
New BACKUPMON Beta is available... featuring a new integration with AMTM Email to notify you of backup success and failures. Wanted to let you all take this for a spin for a little while before unleashing it! Huge thanks to @Martinski for his wonderful contributions! :)

v1.5.3b3
- MINOR:
Added new functionality to BACKUPMON to give you the ability to receive backup SUCCESS and FAILURE email notifications. This capability is provided through the AMTM Email functionality (AMTM->em) and made possible through a wonderful common library graciously made available by @Martinski! When you enable the AMTM email notification functionality in the config menu (option #14), the script will download a library file into /jffs/addons/shared-libs. Libraries and functions like these can be shared between many other scripts from one common location. In BACKUPMON, you can enable notifications for either SUCCESS or FAILURE events, or both. If using primary and secondary backups, you will get notifications for both, whether they succeed or fail. PLEASE NOTE: AMTM Email (AMTM->em) must be set up, configured and working before enabling this functionality in BACKUPMON.
- PATCH: Changed the versioning logic to align with the generally accepted way of versioning, using the notation: major.minor.patch ... finally, right? After seeing @thelonelycoder changing his ways, I figured it was probably time for me as well. All my scripts moving forward will go this route. Change log wording has also been changed to conform to the major/minor/patch standards: what was previously FIXED now conforms to PATCH, ADDED conforms to MINOR, and MAJOR stays the same!
- PATCH: Due to an overlap of common variable names when integrating with AMTM, some mitigations had to be made that will change the username and password variable names within backupmon.cfg upon start. This migration runs only once, after which the variable names will have been corrected.
- PATCH: Added AMTM Email testing as a menu item off the main setup menu. This allows you to give the AMTM email capabilities a quick test, providing verbose on-screen feedback during the process.

Download link:
Code:
curl --retry 3 "https://raw.githubusercontent.com/ViktorJp/BACKUPMON/master/backupmon-1.5.3b3.sh" -o "/jffs/scripts/backupmon.sh" && chmod 755 "/jffs/scripts/backupmon.sh"

Significant Screenshots:

New item #14 lets you enable email notifications, for either success, failure or both!


More detail on what to expect with this option:


During a backup, whether a failure occurs or it ends successfully, an email will be sent at the end of the job.
 
Brilliant work. Test email successfully sent and received.
 
Works great!

Enhancement request: Please consider adding the destination of the backup to the e-mail subject or message. As I mentioned before, I back up to a local NAS, a remote NAS, and local USB — yes, because I can. ;) "Primary backup" or "secondary backup" is not descriptive enough for my use case.
 
Anything is possible! If you want to "mock up" a sample email of how you'd like to see it, I can work on that! :)
 
Simply change the line that reads "SUCCESS: BACKUPMON completed a successful primary backup" to "SUCCESS: BACKUPMON completed a successful primary (or secondary) backup to 'destination'", where 'destination' is the pertinent BACKUPMEDIA value from the .cfg file. If you remember, I rotate between two saved .cfg files in order to perform a backup to a third destination. Thanks!
 
Well, BACKUPMEDIA can either be "Network" or "USB".... so you want this to say:

SUCCESS: BACKUPMON completed a successful primary backup to: Network

-or-

SUCCESS: BACKUPMON completed a successful primary backup to: USB

?
 
That was easy enough... ;) Please download and reinstall this same version, and you'll see that BACKUPMEDIA entry.

Code:
curl --retry 3 "https://raw.githubusercontent.com/ViktorJp/BACKUPMON/master/backupmon-1.5.2b1.sh" -o "/jffs/scripts/backupmon.sh" && chmod 755 "/jffs/scripts/backupmon.sh"

Code:
Date/Time: Feb 09 2024 16:05:57
Asus Router Model: GT-AX6000
Firmware/Build Number: 3004.388.6_0
EXT USB Drive Label Name: ASUS-SSD

SUCCESS: BACKUPMON completed a successful primary backup to destination: Network.

Sent by the "backupmon.sh" Tool.
From the "GT-AX6000" router.

2024-Feb-09, 04:05:57 PM EST (Fri)
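Presumably this boils down to interpolating the configured media type into the message; the line below is only a sketch with assumed variable names, not BACKUPMON's actual code:
Code:
# Sketch only -- variable names are assumptions, not taken from backupmon.sh.
# BACKUPMEDIA comes from backupmon.cfg and is either "Network" or "USB".
emailBody="SUCCESS: BACKUPMON completed a successful primary backup to destination: ${BACKUPMEDIA}."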
 
New BACKUPMON Beta is available... featuring a new integration with AMTM Email to notify you of backup success and failures. Wanted to let you all take this for a spin for a little while before unleashing it! Huge thanks to @Martinski for his wonderful contributions! :)

v1.5.2b1

Works fine. Test email received.
 
PERFECT!!!
 
