Buffalo Linkstation internals?


tigerdog

Occasional Visitor
My Linkstation 420 backups have stopped working; trying to access the backup configuration throws the error "Error: get backup task job. A system error has occurred. Reboot linkstation and try again." Rebooting does nothing. This happened after a power failure, so I suspect some internal file is corrupted. Does anyone have knowledge of the internals of the Linkstation software? Any idea how to get things back?
 
Having had a couple of Buffalos in the past, they're not the best when it comes to longevity. I ended up shucking the drives and using them elsewhere. I would take the drive out, hook it up to a PC with an adapter, and run a disk check on it at the very least. I don't recall a way of updating the firmware or software on the unit itself other than through the GUI. If there's a USB port on it, there might be a recovery method through that. Otherwise the safest option might be to remove the disks and do a hard reset on it, and it might fix the SW issue.
 
Thanks, Tech Junky. The ironic thing is that the data on the RAID array appears to be intact, and all other functions in the system (GUI, DLNA server, FTP server, SMB shares, etc.) seem to be working properly. It's only the NAS-to-NAS backup that's b0rked. The unit is updated to the latest Buffalo firmware (the 2022-01 fix for a 2022 Samba security bug).

I think I've about had it with Buffalo's utter lack of support and I've seen enough threads in this and other forums to realize I'm not the only one. Sad, really, because the units' small, quiet, power-efficient nature lends itself nicely to home network use.
 
They have some appeal but lack performance when you put them through their paces. The ones I had were self-contained single-disk units. I've since just rolled a NAS into my single system that acts as router/switch/AP/firewall/Plex/etc. I get 400 MB/s+ out of it with a 5GbE NIC. RAID 10 gives a good boost and redundancy. It's not a true backup, but it's good enough if a disk or two fails, as long as they're not in the same mirror set.
 
Code:
Sep  7 05:49:27 Goldenbear local0.err nasapi - buffalo_jsonrpc2 - process - ERROR- method: backup.listjobs, params: {} Traceback (most recent call last):   File "/usr/local/lib/nasapi/buffalo_jsonrpc2.py", line 99, in process     result = method(**kwargs)   File "/usr/local/lib/nasapi
Sep  7 05:49:28 Goldenbear local0.info nasapi - system - restore_db - INFO- success

When I try to do anything with backup, lines like this are written to /var/log/linkstation.log
 
Well, if you can scp the .py file referenced and look at line 99 to see what it says, you can probably fix it and upload it back to the Buffalo.

Have you tried deleting the job and adding it back?
 
Yes. Any attempt to manipulate backup jobs results in the same kind of message, just with a different prefix depending on the trigger, for example:
Error: unable to get backup job.
Error: add backup job

It's as if wherever the backup jobs are stored is inaccessible. I can't figure out where that is, though.

Code:
        if isinstance(method, basestring):
            method = self.load_method(method)

        try:
            params = data.get('params', [])
            if isinstance(params, list):
                result = method(*params, **extra_vars)
            elif isinstance(params, dict):
                kwargs = dict([(str(k), v) for k, v in params.iteritems()])
                kwargs.update(extra_vars)
                result = method(**kwargs)
            else:
                raise JsonRpcException(data.get('id'), INVALID_PARAMS)
            resdata = None

line 99 is
result = method(**kwargs)
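For anyone reading along: that quoted block is just the generic JSON-RPC dispatcher, so line 99 only tells us that the handler behind backup.listjobs blew up, not why. A rough sketch of the same pattern (simplified; the handler name and error are made up) shows why every backup call produces a traceback ending at that line:

Code:
# Sketch only -- not the real nasapi code.  A generic dispatcher calls the
# handler registered for the JSON-RPC method; if that handler can't read its
# job store, the exception surfaces at the "method(**kwargs)" call, which is
# exactly what the linkstation.log traceback shows before it's cut off.
import traceback

def list_jobs(**kwargs):
    # hypothetical backup.listjobs handler; whatever it reads is corrupt
    raise IOError("backup job store unreadable")

def process(data, extra_vars=None):
    extra_vars = extra_vars or {}
    method = {'backup.listjobs': list_jobs}[data['method']]
    params = data.get('params', [])
    if isinstance(params, dict):
        kwargs = dict((str(k), v) for k, v in params.items())
        kwargs.update(extra_vars)
        return method(**kwargs)   # <- the "line 99" of this sketch
    return method(*params, **extra_vars)

try:
    process({'method': 'backup.listjobs', 'params': {}})
except Exception:
    traceback.print_exc()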
 
Do you know where they're stored? If it's a permission issue, you can try opening up the permissions through scp so they're all readable and executable again.
 
That's the problem in a nutshell: I have no idea where the profiles are stored, or how. There seems to be zero documentation and nearly zero forum discussion, and I haven't been able to find any mention of reverse-engineering work on the internal structure.

Anyone? Bueller?
 
Well, if you can scp you should be able to ssh and run some commands to figure it out. Standard Linux commands should work, like ls -l for listing files, and chmod 777 * will open up the permissions on everything. The error logs kind of give a hint as to where things are stored.

Alternatively, you could try pulling the backup from the other side, if that end still works. Or use rsync on a cron timer to run the backup yourself, as sketched below.
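Something along these lines would do it. Just a minimal sketch, with made-up mount points, assuming rsync and Python are available on whatever box can reach both shares:

Code:
# Hypothetical stand-in for the broken NAS-to-NAS job: mirror one mounted
# share to the other and note the result.  Paths are examples only.
# Example crontab line (daily at 2 AM):
#   0 2 * * * /usr/bin/python /path/to/nas_backup.py
import datetime
import subprocess

SRC = "/mnt/goldenbear/share/"   # source share (made up)
DST = "/mnt/backupnas/share/"    # destination share (made up)

rc = subprocess.call(["rsync", "-a", "--delete", SRC, DST])
with open("/var/log/nas_backup.log", "a") as log:
    log.write("%s rsync exit=%d\n" % (datetime.datetime.now().isoformat(), rc))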
 
Buffalo locks out ssh. Unlock tools exist, but they don't work with the current firmware. I'm limited to one command at a time, with limits on how much output comes back. I can export files and run "ls -l" until my fingers fall off, but I still haven't cracked it. It would help if I understood Python instead of just old-dog languages. I'm about ready to chuck it and buy a Synology.
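In the meantime I've at least tried to make the one-command-at-a-time poking systematic. Roughly this kind of probe list, where run_remote() is a placeholder for however a single command gets pushed to the box (ACP commander in my case), and the grep targets are just guesses:

Code:
# Sketch of a one-command-at-a-time probe for where the backup jobs live.
# run_remote() is hypothetical -- wire it to whatever single-command channel
# you have into the Linkstation.
def run_remote(cmd):
    print("would run on the Linkstation: %s" % cmd)  # placeholder only

probes = [
    "find /etc -iname '*backup*' 2>/dev/null",   # config files named after backup
    "grep -rl backup /etc 2>/dev/null",          # config files mentioning backup
    "ls -l /usr/local/lib/nasapi",               # the module from the traceback
    "tail -n 20 /var/log/linkstation.log",       # most recent nasapi errors
]
for cmd in probes:
    run_remote(cmd)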
 
It can be a PITA for sure. This is why I built my own on a PC instead. I can manage things better and secure it without worrying about the latest script kiddies writing remote-wipe scripts. It's relatively easy to do with Linux, and the shares are readable/writable from any OS using EXT4 / Samba. You also get the ability to tune the RAID layout, NIC speeds, etc. to fit your network, instead of being stuck with the generic 1GbE/10GbE options and slow disks compared to better hardware. While most NAS options are decent enough, some are bottom of the barrel for sure.

For instance, on my setup I have 5 disks in play with RAID 10, which isn't a typical layout but gives me an immediate hot-standby disk if something fails. With 4 disks hitting 400 MB/s+, I put in a 5GbE NIC to cover the transfer speed without a bottleneck, and the same NIC also handles my router/switch functions. There's just more time saved when you DIY a better solution. The devices for the masses are lackluster and basic; they get the job done but not much more without pain. This shows up most when you try to use them to host media and need to transcode files for playback. The CPU/RAM in most NASes doesn't handle that very well, and even the high-priced options that claim they can do it without stuttering don't look convincing once you check the CPU specs. The devil is in the details.
 
SOLVED
If anyone finds this thread in the future, know that the issue was solved by replacing the files /etc/melco/backup1, backup2... backup8 with unmolested copies from a firmware install image. Apparently the internal processes cannot recover if any of these files is corrupted or missing. Each file's default contents are the following lines:

Code:
status=
type=now
start_time=0:00
week=Sun
overwrite=off
crypt=off
compress=off
folder=<>
trashbox=off
mirror=off
logfile=on
force=off
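If anyone wants to script the repair instead of copying files by hand, this is roughly what the fix amounts to. A sketch only, assuming you can get a Python script onto the box (or you can translate the same idea into single shell commands); the .bad suffix is just my own choice:

Code:
# Rough sketch: rewrite each /etc/melco/backupN file with the stock contents
# shown above, keeping any existing (possibly corrupt) copy as backupN.bad.
# Use at your own risk; re-create the backup jobs in the GUI afterwards.
import os
import shutil

DEFAULT = """status=
type=now
start_time=0:00
week=Sun
overwrite=off
crypt=off
compress=off
folder=<>
trashbox=off
mirror=off
logfile=on
force=off
"""

for n in range(1, 9):                       # backup1 .. backup8
    path = "/etc/melco/backup%d" % n
    if os.path.exists(path):
        shutil.copy(path, path + ".bad")    # keep the suspect copy around
    with open(path, "w") as f:
        f.write(DEFAULT)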
 
So, did you copy one to replace the other and rename it? I've done this sort of thing with Linux service files to duplicate them for redundant processes I wanted to start automatically.

Looks like it might be easier to edit the files directly rather than going through the GUI, once you have an idea of what's required to make them work.
 
I have not been able to get ssh access to the device, so I was limited to single commands via the ACP commander GUI. One of the files was clean, so I deleted the others and copied the good one into each of the other required files. If that hadn't worked, I had other workarounds in mind... but it worked.
 
Well, you have it working now, but who knows what caused the corruption in the first place. It's always a good idea to have a path forward in mind if you don't want to deal with it again in the future. It's funny how some simple little text files can muck everything up so easily. It's this sort of thing that has me steering people toward DIY solutions, where there's less risk of running into oddities like this. There's always something you give up for the ease of use of prepackaged systems; the coders either forget something or intentionally do something that makes it less optimal.

When it comes to NAS devices, there have also been a lot of issues with backdoors being left open and data being lost to the resulting exposure and breaches. Sometimes it's even self-inflicted by the company, when an error in a pushed firmware update triggers problems on the reboot meant to activate it. I've seen so many different scenarios over the years that it's amazing some of these companies are still in business. In theory it should be really simple to make these devices and have them be secure, but they still manage to F them up. Beyond a certain price point it just makes more sense to use a PC + drives instead of paying for an overpriced metal case with a CPU, RAM, a few SATA ports, proprietary software, and a single purpose.
 
