Low-Power SSD for amtm, Diversion, Entware and Tailscale on the USB 2.0 Port (USB 3.0 configured as USB 2.0)

Thanks, I am under the impression that info from amtm, Diversion (logs) and Entware resides on the router side and would not have been located on the USB drive if no swap was used.
Some information (e.g. amtm files) is stored on the router under /jffs. Other information (e.g. Entware, Diversion logs) is stored on the USB drive. None of that has anything to do with whether swap is being used or not (other than that the swapfile itself would be on the USB drive).
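
(Purely as an illustrative aside: if you want to confirm what lives where on your own setup, a sketch along these lines will list it. The paths and names used here, /jffs/addons plus entware, diversion and myswap.swp under /tmp/mnt, are assumptions based on a typical amtm install, not something stated in this thread, so adjust them to match yours.)

```python
# Quick check of where add-on data lives on an Asuswrt-Merlin router.
# NOTE: the paths below (/jffs/addons, entware, diversion, myswap.swp)
# are assumptions for illustration only; adjust them to your own install.
import os

def usb_mounts():
    """Return mount points that look like USB volumes (usually under /tmp/mnt)."""
    mounts = []
    with open("/proc/mounts") as f:
        for line in f:
            mountpoint = line.split()[1]
            if mountpoint.startswith("/tmp/mnt/"):
                mounts.append(mountpoint)
    return mounts

print("On router flash: /jffs/addons exists:", os.path.isdir("/jffs/addons"))
for mnt in usb_mounts():
    print("On USB volume", mnt)
    for sub in ("entware", "diversion", "myswap.swp"):   # assumed names
        print("  %-12s present: %s" % (sub, os.path.exists(os.path.join(mnt, sub))))
```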

What was copied? And why would this method of moving info be preferred over starting fresh (not including the Tailscale info, which I get)?
Everything was copied to a new bigger drive. It's really just that simple.
 
Thank you!
 
NAND doesn't 'move bad blocks'. The controller simply maps them out of future use (if possible).

NAND isn't managed as individual cells; it is used in blocks. A physical chip can be (totally) damaged, or a few blocks can be beyond the capability of the controller/NAND to ensure reliability. Either way, small-capacity SSDs have much less error correction because less can be done with them.
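
(As a toy illustration of block-level management and bad-block mapping, the sketch below retires failing blocks and remaps them to a spare pool. Every number in it is made up; it is not how any real controller firmware works.)

```python
# Toy model of how a controller might retire ("map out") bad blocks:
# data is written in whole blocks, a block that fails is never used again,
# and a spare is mapped in to take its place. Purely conceptual; the
# counts and failure rate are invented, real firmware is far more complex.
import random

BLOCKS = 64                               # total physical blocks
SPARES = 8                                # blocks held back as spares (cheap/small drives have fewer)
spare_pool = list(range(BLOCKS - SPARES, BLOCKS))
mapping = {i: i for i in range(BLOCKS - SPARES)}   # logical -> physical block
retired = set()

def write_block(logical):
    physical = mapping[logical]
    if random.random() < 0.005:           # pretend this write/verify failed
        retired.add(physical)             # map the bad block out of future use
        if not spare_pool:
            raise IOError("no spare blocks left: drive is dying")
        mapping[logical] = spare_pool.pop()   # remap the logical block to a spare

try:
    for _ in range(1000):
        write_block(random.randrange(BLOCKS - SPARES))
except IOError as err:
    print("stopped:", err)

print("blocks retired:", len(retired), "| spares remaining:", len(spare_pool))
```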

Hoping for better than USB-stick endurance is not a very high bar to clear, and the small SSDs don't strive for dependability either.

If you value the reliability and dependability of your network with the settings/scripts you've chosen, getting an SSD of equivalent reliability should be paramount.

As for the no-swap-file issue: with a large SSD you may as well add one (10GB, or the max size allowed), even if your current scripts seem to be stable without one.

Smaller NAND = cheap NAND (not all NAND is created equal).
Cheap NAND = cheap controller (not all controllers are equal, either).
Cheap NAND plus a cheap controller means little thought has been put into wear levelling, if any, or into quality firmware (see SanDisk).

Hmmm, sounds just like a USB stick. 🙂


BTW, that caching drive for the Lenovo isn't a good model for the use it will get in the router (operating under Windows is different).

Also, those max power levels are never seen with router workloads (regardless of the interface).
 
I ask because I started using the 5GB size on the assumption that there is more surface area to write to and to take care of bad/used areas.

This is not needed. Wear levelling works on the entire drive; it is not confined to a file. The controller doesn't care how big the file is.
 
A swap file is located on a partition. The larger the partition, the more surface area there is for the controller to do its job.
 
The controller works with physical cells. A partition is a logical structure as well.
 
If you look closely at Post #8, the Toshiba BiCS3 is good-quality NAND, just an older generation.
 
The controller works with physical cells. A partition is a logical structure as well.

The partition needs enough healthy cells to operate safely; the bigger, the better.
 
It doesn't matter what your logical structure is. The controller's wear levelling works on the entire drive, rotating physical cells. Your partition, in physical form, is actually scattered all over the chip; the controller just knows how many cells it occupies. It's not like an HDD, where data sits on the inner or outer part of the platter. This is why SSDs need no defragmentation.
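
(To make that concrete, here is a deliberately crude wear-levelling sketch: only five logical blocks are ever written, standing in for a small partition or swap file, yet the writes end up rotated across every physical block of the pretend device. The sizes and the least-worn-block policy are invented for the illustration; real controllers are far more sophisticated.)

```python
# Deliberately simplified wear-levelling sketch: only 5 logical blocks are
# ever written (think "a small partition or swap file"), yet the controller
# rotates those writes over ALL physical blocks of the device.
import random
from collections import Counter

PHYSICAL_BLOCKS = 64     # pretend 64GB drive, one "block" per GB
LOGICAL_BLOCKS = 5       # pretend 5GB partition / swap file

erase_counts = Counter()
mapping = {}             # logical block -> physical block currently holding it

def write(logical):
    in_use = set(mapping.values())
    free = [p for p in range(PHYSICAL_BLOCKS) if p not in in_use]
    target = min(free, key=lambda p: erase_counts[p])   # least-worn free block
    erase_counts[target] += 1
    mapping[logical] = target

for _ in range(10000):
    write(random.randrange(LOGICAL_BLOCKS))

touched = sum(1 for p in range(PHYSICAL_BLOCKS) if erase_counts[p] > 0)
print("physical blocks that received writes: %d of %d" % (touched, PHYSICAL_BLOCKS))
print("most erases on any single block:", max(erase_counts.values()))
```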
 
@Tech9 is correct. Wear levelling is something that happens across the entirety of the physical storage device. It is not constrained by partitioning, which is an entirely logical construct used by the operating system.
It doesn't matter what your logical structure is. The controller's wear levelling works on the entire drive, rotating physical cells. Your partition, in physical form, is actually scattered all over the chip; the controller just knows how many cells it occupies. It's not like an HDD, where data sits on the inner or outer part of the platter. This is why SSDs need no defragmentation.

When, in my case, I take a 64GB drive and short-stroke the file size down to 5GB, it's not going to look outside that area for new good cells. You have constrained the total area of the writes.
 
Short-stroking is an HDD strategy; it doesn't apply to SSDs. SSDs don't have inner, outer, faster or slower areas. I think you are confusing the way hard disks work with the way SSDs work.

P.S. We're talking specifically about SSDs with wear levelling here, not old-style flash drives without wear levelling.
 
@John Fitzgerald, a visual representation of your files on an SSD would be something like this:

[Attached image: 1706568391267.png]


The white cells are all your partitions and files. The blue cells may be available, or not even part of any logical structure.
 
When, in my case, I take a 64GB drive and short-stroke the file size down to 5GB, it's not going to look outside that area for new good cells. You have constrained the total area of the writes.
Short-stroke... a USB Samsung 64GB FIT Plus flash drive? Uh, I don't think that's how things work. Short-stroking is normally only for mechanical hard drives (i.e. spinning platters).
 
It will, even if the rest of the drive is not formatted. The controller will use all 64GB available for wear levelling. There is no "area" on the chip.

OK, so why do I need to designate a swap file size? Shouldn't it just see the whole drive and use its full capacity without having to pick a size, and just write to the drive as-is when it's needed (as long as it's formatted to the proper kind, i.e. ext4 in this case)? What benefit is there to constraining the size, or picking larger or smaller?
 
I'm not being rude (honestly), but this, combined with your earlier comments about swap, leads me to think you may have a fundamental misunderstanding of what a swapfile is or how it's used.
 
Don't confuse a swap file with a swap partition.
 
OK, so why do I need to designate a swap file size?

Because you need a logical structure where the OS can write data. The 2GB swap is for compatibility only. A larger swap is not needed and may actually have a negative effect. If your router starts actively swapping due to a low-RAM condition, it's going down soon after; this USB swap is far too slow to keep it running normally.
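
(A minimal way to check whether a router is actually dipping into swap is to read /proc/meminfo, as sketched below, assuming a Linux-based firmware with a readable /proc; on the router itself you would more likely just run `free` or `cat /proc/swaps`. The 16 MB low-memory threshold is an arbitrary example.)

```python
# Minimal sketch: check whether a Linux-based router is actually dipping into
# swap, by reading /proc/meminfo (values are reported in kB).
# The 16 MB "low memory" threshold below is an arbitrary example value.

def meminfo():
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, rest = line.split(":", 1)
            info[key] = int(rest.strip().split()[0])
    return info

m = meminfo()
swap_used = m.get("SwapTotal", 0) - m.get("SwapFree", 0)
print("free RAM: %d kB, swap in use: %d kB" % (m.get("MemFree", 0), swap_used))
if swap_used > 0 and m.get("MemFree", 0) < 16 * 1024:
    print("Low on RAM and actively swapping; time to find out what is eating memory.")
```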
 
I'm not being rude (honestly), but this, combined with your earlier comments about swap, leads me to think you may have a fundamental misunderstanding of what a swapfile is or how it's used.

Maybe. It's an external area that the router can use to handle overflow (in my case, Diversion & Skynet).
 