Maximum # of hard disks a motherboard can handle with 2 SAS expanders


Denywinarto

New Around Here
I'm new to this JBOD thing, so bear with me..
Planning to build a gigantic media server running Ubuntu.
I just bought 1 HP SAS expander and an IBM M1015 SAS card, which has 2 SFF8087 ports, meaning it can hold up to 2 SAS expanders...
I've measured the mobo and it looks like 2 SAS expanders could fit in it.. and because of certain scripts I want to run, I'm limiting my library to 1 computer.

Here are the specs:

Motherboard: ASRock Z87
Memory: 16GB 1600MHz DDR3 G.Skill
Processor: Intel® Core™ i3-4130 (3M cache, 3.40 GHz)
PSU: Plan to use 2-3 PSUs connected with Add2PSU..


So technically that's 64 HDDs from the SAS expanders plus 10 from the motherboard's internal SATA ports.
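Rough sanity check on that count (just a sketch; the assumption that each HP SAS expander ends up with 32 usable drive lanes after its uplink to the M1015 is mine, not a verified spec):

    # Rough drive-count estimate; port counts are assumptions, not verified specs.
    expanders = 2
    drives_per_expander = 32   # assumed: HP expander with 4 lanes reserved as uplink to the HBA
    onboard_sata = 10          # the Z87 board's internal SATA ports

    total_drives = expanders * drives_per_expander + onboard_sata
    print(total_drives)        # 74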

Questions:

1. Is there any known limitation from the motherboard, or other factors I'm not aware of? Is there anything that could prevent 74 HDDs from working properly with this setup? (It will be filled with 8TB and 6TB HDDs, btw.)
2. Is it OK to run drives from the SAS expanders and from the internal SATA ports together?
3. Is non-ECC memory viable enough?

I don't mind a little performance drop as long as I can stream 1080p content.
I'm asking because I want to build a custom acrylic case, and of course I need to know the max capacity to determine the size of the case.

thanks
 
Check the specs on the SAS cards; if I recall correctly, they are not SATA compatible...
 
Beyond about 8 HDDs you'll need enterprise-level (SAS) drives to work and keep working properly. I'm not too fond of ASRock motherboards myself (I wish you luck with them); too flaky for my taste.

The RAM is too low for this build, imo (just guessing from the storage that will be available). Getting back to the motherboard issue, I would also recommend running a board with ECC RAM support (I'm pretty sure the ASRock won't).

You need to make these 74 HDDs as stable and immovable as possible. Vibration is what will kill them and their performance (I remember a video of someone shouting at a SAN setup and affecting its performance).

Trying to do this with consumer/gaming-level equipment will not end well in the long run.
 
Check the specs on the SAS cards; if I recall correctly, they are not SATA compatible...

The SAS card is paired with SAS expanders and SFF8087-to-SATA cables.. I tested it earlier with 1 SATA HDD and it works..

Beyond about 8 HDDs you'll need enterprise-level (SAS) drives to work and keep working properly. I'm not too fond of ASRock motherboards myself (I wish you luck with them); too flaky for my taste.

The RAM is too low for this build, imo (just guessing from the storage that will be available). Getting back to the motherboard issue, I would also recommend running a board with ECC RAM support (I'm pretty sure the ASRock won't).

You need to make these 74 HDDs as stable and immovable as possible. Vibration is what will kill them and their performance (I remember a video of someone shouting at a SAN setup and affecting its performance).

Trying to do this with consumer/gaming-level equipment will not end well in the long run.

Hmm, I think I can bump the memory up to 32GB.. that's the max... Will that be enough?
I wanted to consider ECC RAM..
but that requires changing almost the entire setup.. and server mobos aren't cheap in my country..
 
If one really wants to go that large... then one needs to consider the storage pool, and this is a case where ZFS might come into play, and there I wouldn't rush into Ubuntu - rather FreeBSD, perhaps...

Even then, if one has that many disks, consider breaking them out into separate groups, and then some level of logical volume management on top of that... To be honest, the largest single storage pool I had to manage was 24 disks, and that was a handful...
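For what it's worth, a minimal sketch of that grouping idea under ZFS - one pool built from several raidz2 vdevs rather than one giant group. The pool name, group size and device paths are placeholders I made up, not a tested recipe:

    # Sketch: one ZFS pool built from several 8-disk raidz2 groups (vdevs).
    # Device paths are placeholders; point this at real /dev/disk/by-id paths
    # only for disks you intend to wipe.
    import subprocess

    groups = [
        ["/dev/disk/by-id/disk-a%d" % i for i in range(8)],
        ["/dev/disk/by-id/disk-b%d" % i for i in range(8)],
        ["/dev/disk/by-id/disk-c%d" % i for i in range(8)],
    ]

    cmd = ["zpool", "create", "mediapool"]
    for group in groups:
        cmd += ["raidz2"] + group    # each group becomes its own raidz2 vdev
    subprocess.run(cmd, check=True)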

I don't have a problem with ASRock - but for this build, consider their server boards, which are branded under ASRock Rack, rather than their desktop boards.

Rather than trying to roll one's own, I would consider FreeNAS, or if you're not inclined toward BSD, perhaps the Debian-based OpenMediaVault (which is quite interesting these days)...

sharing my thoughts...
 
And FWIW - there's some insight here - but one might also consider the forums over on servethehome.com - there are quite a few folks there who are reusing enterprise gear for very high-scale deployments...

Not discouraging discussion here - and always feel free to discuss...
 
The server motherboard I have happens to be for a storage server. Some SAS controllers can use SATA drives, so check on that first. As for the PSU, you don't need 2-3 except for redundancy. If you have a total of 74 drives and each drive consumes 10W (old drives consume up to 15W, new drives can go down to 5W), your PSU should be able to supply 740W for the drives alone. If the drives draw on 5V then you need that amount of 5V power; if they draw on 12V then you need that amount of 12V power. Add the board, CPU and such and you may need a 1kW PSU - dual 1kW server PSUs for redundancy. I would suggest going above 1kW per PSU if possible.
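To put rough numbers on that (a back-of-the-envelope sketch; the per-drive wattage is the estimate above and the system overhead figure is my own guess):

    # Back-of-the-envelope PSU sizing for the drive array plus the rest of the system.
    drives = 74
    watts_per_drive = 10      # ~15W for old drives, ~5W for newer ones
    system_overhead = 150     # assumed: board + CPU + RAM + fans + HBA/expanders

    drive_power = drives * watts_per_drive     # 740 W steady state
    total = drive_power + system_overhead      # ~890 W
    # Note: 12V draw at spin-up is roughly double, so stagger spin-up or add headroom.
    print(drive_power, total)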

You also need to consider how you will use it. I would really suggest a multi-level RAID for both redundancy and performance. JBOD just concatenates one drive after the next from start to end; you get the capacity but not the redundancy or performance.

Seagate has 10TB drives, so you could use those to get 740TB of raw space.
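As a rough illustration of the capacity-vs-redundancy trade-off (the 8-disk raidz2 grouping below is just an example layout I picked, not a recommendation):

    # Raw (JBOD-style) capacity vs. usable capacity once redundancy is added.
    drives = 74
    tb_per_drive = 10

    raw_tb = drives * tb_per_drive    # 740 TB, no redundancy at all
    # Example layout: 9 groups of 8 drives in raidz2 (2 parity disks per group),
    # leaving 2 drives as spares - one possible split of 74 drives, nothing more.
    groups, group_size, parity = 9, 8, 2
    usable_tb = groups * (group_size - parity) * tb_per_drive    # 540 TB
    print(raw_tb, usable_tb)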

There's no limit to how many drives a board can handle; the OS and file system, though, are a different matter. Consider using BTRFS or ZFS over mdadm (ZFS RAID is complicated, but if you are OK with the limitations you can use that). ZFS RAID, however, is faster than mdadm when it comes to rebuilding.
 
Power is the key with that many drives - not a big problem, but make sure one has sufficient power available...

Also, have a plan for drive failures - the more drives within the array, the higher the probability that one will have to do some level of rebuild/recovery - and this is a strength of ZFS...
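One way to see why a plan matters - a quick estimate of how likely at least one drive failure is in a given year, assuming (purely for illustration) a few percent annualized failure rate per drive and independent failures:

    # Probability of at least one drive failure per year across the whole array.
    drives = 74
    afr = 0.03    # assumed 3% annualized failure rate per drive - illustrative only

    p_any_failure = 1 - (1 - afr) ** drives
    print(round(p_any_failure, 2))    # ~0.9, so expect to be replacing drives regularly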

I would urge some level of caution with BTRFS as a RAID - it's OK over mdadm (let mdadm build the array, and layer BTRFS over it).

LVM is also something to consider with very large arrays when building out the shares...

(Just thinking out loud - CentOS with XFS and LVM might be an approach if one is averse to ZFS.)
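A minimal sketch of what that layering could look like - pre-built mdadm arrays pooled into one LVM volume group, with XFS volumes carved out per share. The array names, volume group name and sizes are all placeholders of mine:

    # Sketch: pool existing mdadm arrays into LVM and carve out XFS shares.
    # Device, VG and LV names and sizes are placeholders, not a tested recipe.
    import subprocess

    def run(*args):
        subprocess.run(list(args), check=True)

    arrays = ["/dev/md0", "/dev/md1", "/dev/md2"]    # assumed pre-built mdadm arrays

    for dev in arrays:
        run("pvcreate", dev)                         # mark each array as an LVM physical volume
    run("vgcreate", "media_vg", *arrays)             # pool them into one volume group

    run("lvcreate", "-L", "50T", "-n", "movies", "media_vg")    # one LV per share
    run("mkfs.xfs", "/dev/media_vg/movies")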
 
ZFS is one of the best file systems, but it has its limitations when it comes to RAID in terms of adding more storage. ZFS over mdadm removes those limitations, but it also removes some of the benefits of ZFS, though not all.
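Concretely, the usual way a raidz pool grows is by adding a whole new vdev, not by adding single disks to an existing raidz group - roughly like this (pool name and device paths assumed):

    # Sketch: expanding a ZFS pool by attaching another whole raidz2 vdev.
    # You generally can't add one disk at a time to an existing raidz vdev.
    import subprocess

    new_disks = ["/dev/disk/by-id/disk-d%d" % i for i in range(8)]    # placeholder paths
    subprocess.run(["zpool", "add", "mediapool", "raidz2"] + new_disks, check=True)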

BTRFS doesn't handle the RAID directly; it just works in a way that is optimised with the storage at a hardware level (blocks, etc). So if you don't use ZFS RAID or ZFS over mdadm, the next best thing is actually BTRFS. @sfx2000, being older, was around during the days when BTRFS was new and buggy, but I use it now on all my openSUSE machines and it works fine. You use mdadm to make the RAID and just format it with BTRFS.
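A minimal sketch of that last step, assuming an 8-disk group and placeholder device names - mdadm builds the array, BTRFS just formats it:

    # Sketch: mdadm RAID6 array formatted with Btrfs, per the advice above.
    # Device names are placeholders; this wipes the listed disks.
    import subprocess

    disks = ["/dev/sd%s" % c for c in "bcdefghi"]    # assumed 8-disk group

    subprocess.run(
        ["mdadm", "--create", "/dev/md0", "--level=6",
         "--raid-devices=%d" % len(disks)] + disks,
        check=True,
    )
    subprocess.run(["mkfs.btrfs", "-L", "media0", "/dev/md0"], check=True)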
 
