BACKUPMON v1.5.10 - Mar 1, 2024 - Backup/Restore your Router: JFFS + NVRAM + External USB Drive! (**Thread closed due to age**)

Thanks @Viktor Jaep, my very own Mode! :p

Possibly a bit more nested than I had in mind - I think I was envisaging not having the “day of week” or “day number” folders at all but rather that each backup would have a folder named for the day/time such as “20230909-143412” and within that folder would be just the 2 .tars for that backup cycle with your standard “basic” naming. These folders would then naturally sort from oldest to newest, and then every once in a while you could just manually delete a whole bunch of the oldest ones (or rename the odd one you wanted to keep as a rollback point)? And you could also see instantly which one you wanted to restore from a list? So a much “flatter” structure I guess is what I was thinking? You could then even optionally auto-prune dated backups older than 7,14,30,60,90 days or whatever?

Anyway, just throwing it out there - what you have looks very functional, but I’m a bit worried that multiple backups in each of the day or day-number folders will get tedious to clean up - it may become a bit of a disk-space-eating monster!

😁
 
Gotcha... yeah, I was going by the current methodology of the week, month, year cycles, and dropping backups into there. I see what you mean... I'll sleep on it, and will see how this can be realized. :)
 
I'll sleep on it, and will see how this can be realized. :)
Sweet dreams ... about to go out on a small boat for a sunset cruise and cocktails myself... 😁
 
Sweet dreams ... about to go out on a small boat for a sunset cruise and cocktails myself... 😁
Well, I got some rest... and dreamed up a solution! Introducing v0.9RC, this time with the real @Stephen Harrington (tm) mode... aka Perpetual Frequency! Please note, I have not addressed any trimming options yet... which is why it's called perpetual. :)

What's new?
v0.9RC - (September 9, 2023)
- ADDED:
Major new functionality was added to give you an additional backup frequency choice... So in addition to weekly, monthly and yearly backup frequencies, you will now have a 'perpetual' option. (Thanks to @Stephen Harrington for the suggestion!) In this mode, instead of a Mon-Fri, 01-31 or 001-365 folder being created for the weekly, monthly or yearly backup frequencies, a unique folder based on date/time is created when the backup runs. For example: 20230909-092021. In this mode, only BASIC mode backups are allowed, which provide a simpler naming convention and an easier restoration process. When using the 'perpetual frequency' option under BASIC mode, backups are not pruned, nor are they overwritten if multiple backups are taken on the same day. In this case, a new, uniquely named backup folder is created each time.
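For anyone curious, a unique folder name like that can be generated with plain BusyBox date. A minimal sketch, not the script's actual code (the target path is hypothetical):
Bash:
# Illustrative sketch only, not BACKUPMON's actual code.
BKDIR="/tmp/mnt/backups/asusrouter"   # hypothetical backup target
STAMP="$(date +%Y%m%d-%H%M%S)"        # e.g. 20230909-092021
mkdir -p "$BKDIR/$STAMP"              # one uniquely named folder per run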

Download Links:
Code:
curl --retry 3 "https://raw.githubusercontent.com/ViktorJp/BACKUPMON/master/backupmon-0.9.sh" -o "/jffs/scripts/backupmon.sh" && chmod 755 "/jffs/scripts/backupmon.sh"

Significant Screenshots:
Here's option #8 under the config menu, with explanations, allowing you to choose 'perpetual' mode:
[screenshot]


Example of folders it creates under your network share:
[screenshot]
 
@Viktor Jaep congrats - Perpetual was exactly what I had in mind, you nailed it!

If/when you ever get around to contemplating “pruned perpetual” can I ask that you have an option to NOT prune any backup folder that has been renamed from your standard YYYYMMDD-HHMMSS format?

For example, I may have a backup named “20230824-171819” that was the last backup I did on firmware 388.2_2 …

I may have manually renamed that folder “20230824-171819 Last 388.2_2” and want to keep it in case I need to roll back to that firmware 3 weeks down the track for some reason, but don’t want it to be deleted even though I have “pruned perpetual” set to “keep last 14 days”.
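(Something like a plain shell pattern match might even be enough to tell the two apart. Just a sketch of the idea, with a hypothetical path, not anything BACKUPMON actually does:)
Bash:
# Sketch: only folders still matching the stock YYYYMMDD-HHMMSS name are
# prune candidates; anything renamed is left alone.
for d in /tmp/mnt/backups/asusrouter/*/ ; do
    name="$(basename "$d")"
    case "$name" in
        [0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]-[0-9][0-9][0-9][0-9][0-9][0-9])
            echo "prune candidate: $name" ;;
        *)
            echo "keeping (renamed): $name" ;;  # e.g. "20230824-171819 Last 388.2_2"
    esac
done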

I’m sure your OCD would approve …

😁
 
Wish I could say I have figured out the pruning... but no... seems to be much more difficult than anticipated. In fact, the command that you'd normally use to perform this function is severely hobbled on our routers, and missing a bunch of different functionality. :( I'll continue looking into this... but as of right now, it looks like a manual pruning job... just like the yard. :p
 
Wish I could say I have figured out the pruning... but no... seems to be much more difficult than anticipated.
Ah well, thanks for looking into it. No Entware version with extra goodness?
Manual pruning will be just fine. Add it to the list when I need to free up space on my NAS!
 
Just to feed your OCD, spot the typo …


Code:
BACKUPMON v0.9

Normal Backup starting in 10 seconds. Press [S]etup or [X] to override and enter RESTORE mode

Backing up to \\192.168.1.200\NetBackup mounted to /tmp/mnt/backups
Backup directory location: /asusrouter
Frequency: Perptual
Mode: Basic

[Normal Backup Commencing]...

Messages:
STATUS: External Drive (\\192.168.1.200\NetBackup) mounted successfully under: /tmp/mnt/backups
STATUS: Daily Backup Directory successfully created.
STATUS: Finished backing up JFFS to /tmp/mnt/backups/asusrouter/20230910-144139/jffs.tar.gz.
tar: ./entware/var/syslog-ng.ctl: socket ignored
tar: ./entware/var/run/syslog-ng/syslog-ng.ctl: socket ignored
STATUS: Finished backing up EXT Drive to /tmp/mnt/backups/asusrouter/20230910-144139/AMTM-USB.tar.gz.
STATUS: Finished copying backupmon.sh script to /tmp/mnt/backups/asusrouter.
STATUS: Finished copying backupmon.cfg script to /tmp/mnt/backups/asusrouter.
STATUS: Finished copying exclusions script to /tmp/mnt/backups/asusrouter.
STATUS: Finished copying restore instructions.txt to /tmp/mnt/backups/asusrouter.
STATUS: Settling for 10 seconds...
STATUS: External Drive (\\192.168.1.200\NetBackup) unmounted successfully.

And just to feed MY OCD, is it possible to re-direct the socket errors somewhere so I don’t see them, or does that muck up something else?

😁
 
ARRRRRGHGHGHGHGH... thanks! :)

No, I just wanted people to see any possible TAR errors, just in case... but if it's just benign sockets that are the only thing appearing, I can probably turn that off...
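If it comes to that, one way to hide just the benign noise without losing real errors would be filtering tar's stderr. A sketch only, not what the script currently does ($BKDIR is a hypothetical target folder):
Bash:
# Sketch: drop the benign "socket ignored" lines but keep any other tar
# messages visible. Note the pipe masks tar's exit status, so a real
# script would need to capture that separately.
tar -czf "$BKDIR/jffs.tar.gz" -C /jffs . 2>&1 | grep -v "socket ignored"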
 
but if it's just benign sockets that are the only thing appearing, I can probably turn that off...
Only error I’ve ever seen was when the share failed to unmount, but now that that's fixed it's only the socket errors nowadays … always a good thing to have error messages though, I guess - might run out of disk space on my NAS or something similar at some point, and I WOULD want to know that!

😁
 
Come to think of it, if it was a scheduled backup from the cron job we wouldn’t get any warning anyway, would we? You could send a “job failure” email using the AMTM mail interface perhaps if you wanted to go that far …
 
Wish I could say I have figured out the pruning... but no... seems to be much more difficult than anticipated. In fact, the command that you'd normally use to perform this function is severely hobbled on our routers, and missing a bunch of different functionality. :( I'll continue looking into this... but as of right now, it looks like a manual pruning job... just like the yard. :p

@Viktor Jaep

Just thinking out loud here... I have done this before on the router to get the age of a file:

1. Age in seconds of a file since the beginning of time (well, Linux's time): date +%s -r $filename
2. Age right now since the beginning of time: date +%s
3. Subtract the two to get the age difference, then divide by (60*60*24) to get the number of days.
4. Test the file to see if it is older than x days; if so, delete the little bugger.
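Something like this minimal sketch (hypothetical path, and step 4 just echoes instead of deleting):
Bash:
# Sketch of the four steps above on a hypothetical backup folder.
f="/tmp/mnt/backups/asusrouter/20230909-143412"
fileTime="$(date +%s -r "$f")"           # 1. file mtime in epoch seconds
now="$(date +%s)"                        # 2. current time in epoch seconds
days="$(( (now - fileTime) / 86400 ))"   # 3. difference in whole days
[ "$days" -gt 14 ] && echo "older than 14 days: would delete $f"   # 4.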
 
I think this has all got a bit too complicated for my liking.
The latest version is about 60K and has pages & pages of stuff I can't really follow.

I've gone back to the original script by @Jeffrey Young - it fits on one screen, has about six lines that actually do anything and does the job fine once you add the 'umount -l' that stops my router from crashing (would like to know more about that really!)
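(For reference, the line I added is just this; lazy unmount detaches the mount point immediately and lets the kernel finish the cleanup once the share is no longer busy. The mount point here is an example:)
Bash:
# Lazy unmount: detach now, clean up when no longer busy.
umount -l /tmp/mnt/backups   # example mount point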
 
would like to know more about that really!

Amen on that. The 388.4 release seems plagued with random reboots. I've seen reports involving spdMerlin, FlexQoS, QoS, and many other circumstances. It would be nice if we could nail down the cause. My AX88U is still running 386.7_2. I have resisted 388: 388.1 had asd pounding the crap out of hard drives, 388.2 had the ARP issue that would have affected me, 388.3 did not apply to me, and now 388.4 seems to have reboot issues that even a simple umount can trigger. I will be skipping 388.4 as well. 388 will eventually mature enough to be reliable.

I think this has all got a bit too complicated for my liking.

Yes, I get it. Sometimes simple is better. But hey, @Viktor Jaep has had a blast of a time so far, so good for him.
 
@Viktor Jaep

Just thinking out loud here... I have done this before on the router to get the age of a file:

1. Age in seconds of a file since the beginning of time (well, Linux's time): date +%s -r $filename
2. Age right now since the beginning of time: date +%s
3. Subtract the two to get the age difference, then divide by (60*60*24) to get the number of days.
4. Test the file to see if it is older than x days; if so, delete the little bugger.
You're spot-on... that's exactly what I'm playing with: throwing the folder into a for loop and determining the age using the %s method... but the results coming back from this are giving me issues. It will take some playing with.

Secondly, I don't feel exactly comfortable running an 'rm -fr' on a bunch of backup folders in some random folder. There's a lot that could go wrong there if something wasn't configured right. I would feel terrible if someone wiped out their router because of this.

Anyway... keeping perpetual very manual at the moment. ;)
 
I was thinking of leaving the deleting to the user via a separate cron job.
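(On Asuswrt-Merlin that could be as simple as a cru entry pointing at a user-supplied prune script; the script name here is hypothetical:)
Bash:
# Example: run a hypothetical prune script nightly at 3:00 AM.
cru a PruneBackups "0 3 * * * /jffs/scripts/prunebackups.sh"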
 
I agree with @Jeffrey Young; the date command is very useful when "filtering" files or directories for deletion based on a specific number of days. For example, below is the function I use in my own shell scripts:
Bash:
_DeleteFileDirAfterNumberOfDays_()
{
   local retCode=1  minNumOfDays=7
   if [ $# -eq 0 ] || [ -z "$1" ] || [ -z "$2" ] || \
      { [ ! -f "$1" ] && [ ! -d "$1" ] ; }
   then
      printf "\nFile or Directory [$1] is *NOT* FOUND.\n"
      return 1
   fi
   if ! echo "$2" | grep -qE "^[1-9][0-9]*$" || [ "$2" -lt "$minNumOfDays" ]
   then
      printf "\nNumber of days [$2] is *NOT* VALID.\n"
      return 1
   fi
   if [ "$(($(date +%s) - $(date +%s -r "$1")))" -gt "$(($2 * 86400))" ]
   then
       if [ -f "$1" ]
       then rmOpts="-f"
       else rmOpts="-fr"
       fi
       printf "\nDeleting \"$1\" with more than $2 days...\n"
       rm $rmOpts "$1" ; retCode="$?"
   fi
   return "$retCode"
}

Example call:
Bash:
_DeleteFileDirAfterNumberOfDays_ "/FULL/PATH/TO/fileToCheckForRemoval.txt" 30

Just my 2 cents.
 
As always, you share great snippets!

Another one for the library
 
I think this has all got a bit too complicated for my liking.
The latest version is about 60K and has pages & pages of stuff I can't really follow.
Really, it's a bunch of fluff to make it easier for some of the more non-technical users to run backups/restores, give them more feedback during the process, and offer some options if you want to back things up differently. There are definitely some ways to optimize some of the repetitive code, but I haven't gotten to that point yet... But I totally understand... @Jeffrey Young's script is the main engine behind this thing, and you can't go wrong there. ;)
 
Thank you VERY much for this, @Martinski! This example gave me a great boost forward... and I have a working PoC using your function to wipe out backup folders! I really appreciate it! :) Like @Jeffrey Young said, you post some incredible snippets... will be keeping this one in my collection as well! LOL

BTW... virtually every example out there that shows how to filter and delete folders revolves around the "find <path> -type d" command (the -type d meaning directories) -- it's just that our version of find on our routers doesn't have the "-type d" option. ARGH. ;) Sometimes it can be so difficult coming up with a workaround using other tools to accomplish the same mission.
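One workaround along those lines is a plain glob loop in place of find. Just a sketch with a hypothetical path, reusing @Martinski's function from above:
Bash:
# A trailing-slash glob matches only directories, so no "find -type d"
# is needed; each folder then goes through the age check.
for d in /tmp/mnt/backups/asusrouter/*/ ; do
    [ -d "$d" ] && _DeleteFileDirAfterNumberOfDays_ "${d%/}" 14
done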
 