DIYVideoEditServe
So I dove off the deep end on this... But I needed to.
We are a small video production company that is experiencing growing pains for our editing storage. We need to work in Adobe Premiere Productions to share our work and must use shared storage to make this feasible. Anyhow, to that end I bought a used Supermicro Superserver chassis from UNIXSurplus on eBay and have been setting it up now for a week or so, and learning a crap-ton as could be expected. The specs for that box are listed at the bottom of this post.
The workstations are:
- Windows 10 Home gaming-style PC (Gigabyte Aorus Gaming i7 with GTek 10G NIC)
- i7 MacBook Pro (OWC Thunderbolt NIC)
- i9 MacBook Pro (OWC Thunderbolt NIC)
So it's a happy mix of OSes... no Linux or FreeNAS stuff. I want the server to run Windows 10 Pro, since there are applications we use that will take advantage of the hardware, Vienna Ensemble being the first one to try. Plus, I have zero experience with Linux and would rather not go down that rabbit hole if possible.
PROGRESS THUS FAR:
Since we have 36 total bays, I wanted to leave room to grow. Starting with
- 12 HGST 12TB SAS drives to build the first RAID
The first issue was figuring out how best to do this, since there are no real options for high-speed software RAID in Windows. I looked at Storage Spaces, but that only offers parity, not striping, so that's out. I eventually flashed the HBA back to IR mode to use the hardware RAID it was designed for. After much thinking about RAID 6, I went with RAID 10 and a lower capacity, since the card only does 0, 1, or 10. Plus, after hearing about RAID 6 rebuild nightmares, I think this is the best speed/availability trade-off for us. The card limits you to 10 disks per group, so I have a 54TB RAID 10 and an 11TB RAID 1. I had no idea that the "background initialization" didn't stop me from using the array and thought it was going to take a week before I could test it!! LOL... anyhow, many things are being learned on the job, so to speak.
ON TO THE NETWORK:
I went with a QNAP QSW-M1208-8C managed switch, since it mixes SFP+/DAC ports with regular RJ45. OK so far... but wow, the info on how to set that switch up is minimal, and I had to reset it to get the default IP to work before I found the QFinder app, which will search it out on the network in case you connected it to a DHCP network first. That took me an afternoon.
It seems to be working just fine now and has the server, PC workstation, and a 1G link to the rest of the network and router.
https://www.amazon.com/gp/product/B08JTZ79KT/?tag=snbforums-20
OK, first the PC workstation: I originally had an ASUS 10G NIC that arrived DOA and would not even power up in the PCIe slot. Not sure what was up with that. I ordered a GTek-rebranded Intel NIC with a single SFP+ port and have been messing with that since. The PC was full up on PCIe cards, and the GTek really did not like the last slot: I lost video and couldn't even see the BIOS post. After moving cards around, it's now in an x16 slot and everything seems to work, but not at good speed. And that is what brings me to you fine folks.
https://www.amazon.com/gp/product/B01LZRSQM9/?tag=snbforums-20
TESTING:
I am using LAN Speed Test Lite and am getting horrible results: Write - 0.19Gbps and Read - 1.06Gbps. Just barely filling gigabit. OK, so I've been reading a bunch of the suggestions here and have tried a couple of things (including moving the NIC to an x16 slot), but no improvement. I also tried moving a 100+GB video file over: it would run at around 200MB/sec for about 30 seconds, then slow to 0 for another 30 seconds, then climb back up to 200, repeating this cycle until the transfer completed.
Jumbo frames are set to 9014 on both machines.
The SuperServer has only one of its four NICs connected right now (RJ45, 15' of CAT6).
The file is coming from a normal 7200rpm SATA drive, so the speed makes sense but the dropouts don't.
The PC is connected by a 1m SFP+ DAC cable.
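One thing I figure is worth verifying before anything else: that jumbo frames actually survive end-to-end through the switch. If I understand the Intel driver right, the 9014 "Jumbo Packet" value includes the 14-byte Ethernet header, so the IP MTU is 9000, and a ping that exactly fills it carries 8972 bytes (the 192.168.1.50 below is just a placeholder for the server's IP):

```shell
# The 9014 setting includes the 14-byte Ethernet header, so the IP MTU is 9000.
# A ping that exactly fills that MTU carries 9000 - 20 (IP) - 8 (ICMP) bytes:
MTU=9000
PAYLOAD=$((MTU - 20 - 8))
echo "max unfragmented ping payload: $PAYLOAD"   # 8972

# On the Windows PC, ping the server with don't-fragment set:
#   ping -f -l 8972 192.168.1.50
# If that fails while "ping -f -l 1472" succeeds, something in the path
# (NIC driver, switch port, or the server's X540 port) is not passing jumbo frames.
```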
I have not done anything more radical than setting the jumbo frame size, and I thought now is a good time to get some advice. I hope it's something stupid that my inexperience has overlooked, but who knows...
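One test I haven't run yet but probably should: iperf3 between the PC and the server, which takes the disks and SMB out of the picture entirely (the server IP is again a placeholder):

```shell
# 10GbE line-rate ceiling in MB/s, for comparison with observed copy speeds:
CEILING=$((10000 / 8))
echo "10GbE ceiling: ~${CEILING} MB/s"   # ~1250 MB/s; one 7200rpm SATA drive manages ~200

# iperf3 measures the raw network path, no storage involved:
#   on the server:      iperf3 -s
#   on the workstation: iperf3 -c 192.168.1.50 -P 4
# ~9 Gbit/s would mean the network is fine and the bottleneck is storage or SMB;
# stuck near 1 Gbit/s points at link negotiation (check LinkSpeed in
# Get-NetAdapter on Windows) or the switch port config.
```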
Thanks in advance for any help on this and hopefully I can share good real world experiences with this type of setup as we use it more.
-Ashley
VIDEO SERVER/NAS SPECS:
SKU: 4U-X10DRI-T4+-36BLS3
Performance Specs:
Processor: 2x Intel Xeon E5-2620 v3 six-core 2.4GHz (12 cores total)
Memory: 128GB DDR4 (16 x 8GB - DDR4 - REG 2133)
Hard Drives: None
Controller: 1x AOC-S3008L-L8e 12Gb/s HBA (great for UNRAID/FreeNAS)
NIC: * Integrated Intel X540 Quad Port 10GBase-T
Supermicro Model: SSG-6048R-E1CR36N
Secondary Chassis/ Motherboard specs:
Supermicro 4U 36x 3.5" Drive Bays
Server Chassis/ Case: CSE-847BE1C-R1K28LPB
Motherboard: X10DRi-T4+
* Integrated IPMI 2.0 Management
Backplanes: 2x
*BPN-SAS3-846EL1 24-port 4U SAS3 12Gbps single-expander backplane, support up to 24x 3.5-inch SAS3/SATA3 HDD/SSD
*BPN-SAS3-826EL1 12-port 2U SAS3 12Gbps single-expander backplane, support up to 12x 3.5-inch SAS3/SATA3 HDD/SSD
PCIe expansion slots (low profile): 2x PCIe 3.0 x16, 3x PCIe 3.0 x8, 1x PCIe 2.0 x4 (in an x8 slot)
HD Caddies: 36x 3.5" Supermicro caddy
Power: 2x 1280W power supplies (PWS-1K28P-SQ)
Rail Kit: Generic Supermicro 3rd party