You're right; the only problem with this is that it relies on a second device, which limits its use cases.

Can we try to deal with memory somehow? I don't think it's storage, because I can extract the .zip successfully, which means 145 MB + 162 MB = 307 MB used; that's much more than my manual attempt, which should have used 160 MB at most.
It's likely a memory (RAM) issue.
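A quick way to check both directly on the router (a rough sketch using BusyBox applets; adjust the paths to wherever the .zip actually lands):

Code:
# RAM: BusyBox free reports kilobytes
free
# Storage: free space on the filesystems involved in the download/extract
df /tmp /home/root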
 
Oh wait, the problem is that the router IS trying to deal with memory, by shutting down our curl process, the httpd service, etc.

I'm assuming that, as a safety measure, it unloads anything not needed for the flash, including the system tools we're using to upload the firmware.
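If we want to confirm exactly what gets killed when the flash starts, a crude before/after snapshot should show it (a sketch; take the second snapshot from a separate session while the flash is in progress):

Code:
# Before triggering the upgrade
ps > /tmp/ps_before.txt
# ...start the flash, then from another session:
ps > /tmp/ps_after.txt
diff /tmp/ps_before.txt /tmp/ps_after.txt  # if diff is unavailable, compare by eye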
 
Yes. Well, it's going to be a cat and mouse game. :(

But our script has root access; is there anything we can do to prevent this?
 
You can’t fit 10 lbs of sh!t in a 5 lb bag…no matter who you are.
 
I'm wondering if the first attempt seemed to work because I had this enabled?

[Attachment: screenshot of the setting in question (web UI redirection)]


Maybe this setting somehow delays the shutdown of httpd until the flash is complete?

Not currently at my desk, but I will play with ideas more later this evening.

Edit: I had disabled it during the manual test, since I was testing using the LAN IP in the script.
 
Maybe I'm missing the point... but isn't there a way to just upload the .w file using curl, instead of having to download the .zip to local storage and extract all the files it contains just to select the .w file to upload? It almost seems like you'd need your own non-zipped repository to pick .w files from.
 
I'm afraid you're right. It seems that only a desktop app can serve as a solution. So is there anything we can do about cross-platform compatibility? A shell script on macOS and Linux, and a PowerShell script on Windows?



EDIT:

Wait, @ExtremeFiretop, does the hnd-write solution you used before work on the router alone? If it does, that means not much RAM is needed during the firmware flash.
 
It’s not a big deal to unzip and delete the zip immediately. And introducing an untrusted firmware repository defeats the purpose of wanting to automate the upgrade. More thinking needed, that’s all.
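Something along these lines is all it takes (a minimal sketch; the download URL and file names are placeholders, and it assumes the router's unzip accepts a file pattern):

Code:
# Fetch the release zip, pull out only the .w image, drop the zip right away
cd /home/root/GT-AXE11000_firmware || exit 1
curl -LO 'https://example.com/GT-AXE11000_3004_388.4_beta3.zip'  # placeholder URL
unzip -o GT-AXE11000_3004_388.4_beta3.zip '*.w'                  # extract only the firmware image
rm -f GT-AXE11000_3004_388.4_beta3.zip                           # reclaim the space immediately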
 
Correct, it worked directly on the router before; 100% on the router alone. That's why I'm leaning towards the services being shut down simply by ASUS design, rather than due to limited storage or memory.

More testing needed
 
I think there may actually be multiple protection measures between the GUI and hnd-write to ensure the firmware upgrade succeeds, one of which is freeing RAM, and that is what caused our current failure.

I'll try to test the script on my RT-AC68U over the weekend. As I said before, this is an extreme model, with a huge firmware size and the least RAM (256 MB) among RMerlin-supported devices; if a breakthrough can be made on this model, then I think we still have hope.
 
I tried re-enabling webui redirection, and it made no difference.

However, one thing I noticed is that the behavior is almost identical between the router and the desktop.
On my desktop, the only difference is that I actually see it reach 100% and give me an output before rebooting.

On my router, however, I only see it reach 4% before it disconnects me and reboots, and I'm now wondering whether, because it's "uploading" from local storage rather than over the network, it's simply completing too fast for me to catch that output.
That's the only reason I assume it isn't working, so to test the theory I will try manually again, but this time doing a downgrade to see whether it actually completes or not.

Code:
+ curl.exe 'http://www.asusrouter.com/upgrade.cgi' `
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (  % Total    % ...  Time  Current:String) [], RemoteException
    + FullyQualifiedErrorId : NativeCommandError
 
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0 80.0M    0     0    0     0      0      0 --:--:--  0:00:01 --:--:--     0
  0 80.0M    0     0    0  384k      0   181k  0:07:32  0:00:02  0:07:30  181k
  0 80.0M    0     0    0  384k      0   122k  0:11:08  0:00:03  0:11:05  122k
  0 80.0M    0     0    0  384k      0  94899  0:14:43  0:00:04  0:14:39 94933
  0 80.0M    0     0    0  384k      0  76282  0:18:19  0:00:05  0:18:14 76308
  0 80.0M    0     0    0  384k      0  63771  0:21:55  0:00:06  0:21:49 76382
  0 80.0M    0     0    0  448k      0  66020  0:21:10  0:00:06  0:21:04 13574
 31 80.0M    0     0   31 25.3M      0  3414k  0:00:23  0:00:07  0:00:16 5716k
 96 80.0M    0     0   96 77.1M      0  9172k  0:00:08  0:00:08 --:--:-- 17.1M
100 80.0M    0   952  100 80.0M    102  8777k  0:00:09  0:00:09 --:--:-- 19.0M
<html>
<head>
<title>ASUS Wireless Router Web Manager</title>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<meta HTTP-EQUIV="Pragma" CONTENT="no-cache">
<meta HTTP-EQUIV="Expires" CONTENT="-1">
<link rel="shortcut icon" href="images/favicon.png">
<link rel="icon" href="images/favicon.png">
</head>
<body>
<script>
var knv = "4.1.52";
var reboot_needed_time = eval("90");
parent.document.getElementById("hiddenMask").style.visibility = "hidden";
if(parent.Bcmwifi_support && knv_threshold >= 4){
reboot_needed_time += 40;
parent.showLoadingBar(reboot_needed_time);
setTimeout("parent.detect_httpd();", (reboot_needed_time+2)*1000);
}
else if(parent.based_modelid == "RT-N11P"){
parent.showLoadingBar(160);
setTimeout("parent.detect_httpd();", 162000);
}
else{
parent.showLoadingBar(270);
setTimeout("parent.detect_httpd();", 272000);
}
</script>
</body>
</html>
 
Can confirm: trying manually again with the last beta3 did not successfully flash on the router. It once again stopped at 4%, rebooted, and came back on the old version a minute later.
 
I even watched the asd.log to see if asd was blocking the local upload as suspicious, but no changes.
 
GOT IT TO WORK!

Successfully downgraded myself to beta3 from the router.

The hack? Use nohup on the curl.

nohup curl 'http://www.asusrouter.com/upgrade.cgi' \
--referer http://www.asusrouter.com/Advanced_FirmwareUpgrade_Content.asp \
--user-agent 'Mozilla/5.0 (X11; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/115.0' \
-H 'Accept-Language: en-US,en;q=0.5' \
-H 'Origin: http://www.asusrouter.com/' \
-F 'current_page=Advanced_FirmwareUpgrade_Content.asp' \
-F 'next_page=' \
-F 'action_mode=' \
-F 'action_script=' \
-F 'action_wait=' \
-F 'preferred_lang=EN' \
-F 'firmver=3.0.0.4' \
-F "file=@/home/root/GT-AXE11000_firmware/GT-AXE11000_3004_388.4_beta3_pureubi.w" \
--cookie /tmp/cookie.txt > /tmp/upload_response.txt 2>&1 &
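Since the upload is detached, its output goes to the response file named above; it can still be followed from a second session:

Code:
# Watch the detached upload's output from another SSH session
tail -f /tmp/upload_response.txt
# Confirm the background curl is still alive
ps | grep '[c]url'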
 
Just successfully re-upgraded myself using the script, this time modified with nohup, unlike my last manual attempt without it.

So, at least for my model, that rules out any memory or storage issues. It really was just curl being shut down during the upgrade: as soon as the terminal/shell session closed, the curl was terminated. Now, with nohup and &, it runs as a background job and ignores the hang-up when the session closes, so it runs to completion.

I think that's some huge improvement today, guys! I need some sleep, but thank you so much for all the help :D

Feel free to keep poking at this, I'll be uploading the updated script in a moment.
 
Great, you found a breakthrough.

You don't need to add nohup in the final user-facing script, because the SSH session being killed only happens during our debugging. When the script is invoked by a custom script or a cron job, its parent process is not dropbear, so it will not be killed.

Need to try it from cron next…
cron will not kill itself because it is a separate process.
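For what it's worth, on Asuswrt-Merlin such a job is normally registered with the cru helper; a sketch (the job name, schedule, and script path are made up):

Code:
# Add a job named FWUpdate; crond, not dropbear, becomes its parent
cru a FWUpdate "0 4 * * 0 /jffs/scripts/fw_update.sh run_now"
cru l            # list current jobs
cru d FWUpdate   # remove the job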

Reminds me of another script I developed before, which involved restarting the LAN, so the SSH session would be killed. The trick I used: run the script up to the point where the LAN is about to restart, add a cron job, then exit and let the cron job run the rest of the script.
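Roughly that pattern, sketched with cru (the job name, script path, and resume flag are illustrative):

Code:
# Right before triggering the LAN restart, schedule the script to resume itself
cru a ResumeMe "* * * * * /jffs/scripts/myscript.sh resume"
exit 0  # the LAN restart kills this session; cron picks up within a minute

# ...and near the top of myscript.sh:
if [ "$1" = "resume" ]; then
    cru d ResumeMe  # one-shot: deregister so it doesn't fire again
    # continue with the post-restart steps here
fi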

I remember that in my case nohup wasn't working; there was even a thread on SNB asking why.
Maybe they finally fixed nohup? I have no idea.
 
I also just wanted to add that this is really the type of thread I was hoping to have. You don't understand how thankful I am to have found open-minded individuals willing to poke at this project and troubleshoot our way through it.
This has actually been fun! :)
 
It's great to be a part of this.
 
Is there harm in keeping nohup? The reason I ask is that if I call/run the script from an SSH shell like the below and select option 2:

[Attachment: screenshot of the script's menu in the SSH session]


it won't work unless I have nohup: right now it does the download, then the session quits and it fails. However, if I have nohup and select 2, it does the download and completes the flash successfully.
It makes me wonder about any future implementations if we remove it.

Edit: I understand that when cron calls the run_now function directly it may not be needed, but what about when a user first configures it, etc.?
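One option that would keep both paths happy is to detach only when the script is attached to a terminal. A sketch, where upload_fw is a hypothetical wrapper around the curl command posted earlier:

Code:
# upload_fw: hypothetical function wrapping the upgrade.cgi curl command above
if [ -t 0 ]; then
    # Interactive SSH session: detach so closing the shell can't kill the upload
    nohup upload_fw > /tmp/upload_response.txt 2>&1 &
else
    # cron / service context: no tty attached, no need to detach
    upload_fw > /tmp/upload_response.txt 2>&1
fi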
 