Build Your Own Fibre Channel SAN


Thanks a lot for this impressive How To!

Probably a stupid question, but I was wondering if it is possible to access Old Shuck over FC from your Windows station, and over Ethernet from other devices? Just so BlackDog could be powered off.

You can access Old Shuck through the management channel, HTTPS over TCP/IP, even with the DAS server (BlackDog) down. You can only access the actual FC shared storage with the DAS server up. Ol'Shuck doesn't know about the filesystems on the disks; only your DAS server does.

This isn't easy to solve: for the storage to be available without the DAS server, you have to multipath and use a shared filesystem (NTFS is not a shared FS). Take a look at some of the other posts about approaches for that.
 

Thank you, I did not catch the shared FS subtlety at first. I'm looking forward to reading the third part! :)
 
How to use Win 2008 instead of OpenFiler for target?

I have pretty much the exact same setup (except I scored 4-port Qlogic cards instead) and am trying to set up my Win 2008 server to export its RAID over the Qlogic cards, and I haven't yet figured out how to get this exact same point-to-point setup using two Windows boxes.

Any pointers on how to replace the OpenFiler box with a Win 2008 box instead?

Also, since I have a few 4-port cards and can set up 4 point-to-point connections (1 server, 3 workstations including a Mac), what is the best way of sharing the disk then? I realize this gets into clustered-filesystem territory, but I can't find an open-source FS that works between Windows machines (and of course I'm adding a Mac to this equation). Commercial filesystems are out of my budget for the time being.

Any way I could just do file sharing over IP on these point-to-point fibre connections?

Any pointers greatly appreciated!
 
Not a Windows expert, but to do sharing or multipathing, I'm pretty sure you have to be running Hyper-V and Server 2008 R2. I think this is primarily for failover between VMs.

There is a TechNet article about using CSV (Cluster Shared Volumes) and how to set them up. It is not at all clear. The advantage appears to be that the overlying filesystem is NTFS, so nothing needs to be done on the client nodes.
 
I've skimmed over Windows CSV in the Hyper-V stuff and it's not applicable to what I'm doing. It seems to be just for failover, not for shared access.

I want multiple Windows boxes (and a Mac) to simultaneously mount/read/write a drive from a Win 2008 server via the point-to-point setup done in this tutorial.

I'll settle for just 1 workstation and 1 server in Windows right now and worry about the shared filesystem later, once I figure out how to get a fibre point-to-point link between Windows boxen.

But for now, how do I get a Win 2008 server to do what the author is doing with the OpenFiler setup? I have the same hardware (just more ports).
 

I think you would need to start out exactly the same as Greg did in his article. Download the SANSurfer software from Qlogic's website and set up your client in the same fashion described in the article.

For the server side you might take a look at Microsoft's TechNet Library. I did a quick check on my server (Win2008 Server R2) and all you need to install is the Storage Manager for SANs feature. I think from there you just need to install the Qlogic card in your server and maybe download drivers/software from Qlogic. Once that is done you should be able to open up the Storage Manager for SANs program, and it will automatically find your Fibre Channel setup and allow you to assign disks to it. But this is just a guess based on the little bit of research I did.

Hope that helps.

00Roush
 
Roush pretty much nailed the approach; low risk all the way.


There appear to be two approaches to sharing LUNs across multiple machines (caveat: I have not done this and do not claim to be an expert, but this would be my approach):

Hyper-V with Windows 2008 running CSV:

As with all Microsoft kitchen-sink tech, this appears, to quote Amadeus, to have "too many notes": all the pieces have to be in place, and you have to identify the machines that are going to be participating.

From the link I provided, you'll see under number four:

CSV allows every cluster node to access the disk concurrently. This is accomplished by creating a common namespace under %SystemDrive%\ClusterStorage. For this reason, it is necessary to have the OS on the same drive letter on every node in the cluster (such as C:\, which will be used in this blog). You will see the same directory from every node in the cluster and this is the way to access CSV disks.

To run CSV you have to be running Server 2008 R2 with Hyper-V and have clustering set up.

VMware running ESXi under Server 2008, using VMFS:


The other approach is the one mentioned earlier in this thread: run ESXi under Windows Server. This will give you VMFS, and NTFS can run on top of that.


In either case you need a hypervisor to coordinate block access to the disks.
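As a toy illustration of why (nothing to do with the actual Windows/VMware plumbing, just a Python sketch): a single-owner filesystem like NTFS caches its metadata in each host that mounts it, so two hosts writing to the same LUN without a coordinator silently lose each other's updates.

```python
# Toy sketch only: two "hosts" share one LUN with no coordinating layer.
# Each host caches the "superblock" free-block count, allocates, and writes it
# back independently, so one allocation is silently lost and the on-disk
# metadata no longer matches reality. A cluster FS or hypervisor arbitrates this.
import os, struct, tempfile

fd, DEV = tempfile.mkstemp()                 # stand-in for the shared LUN
os.write(fd, struct.pack("<I", 1000))        # "superblock": 1000 free blocks
os.close(fd)

def read_free():
    with open(DEV, "rb") as f:
        return struct.unpack("<I", f.read(4))[0]

def write_free(value):
    with open(DEV, "r+b") as f:
        f.write(struct.pack("<I", value))

cache_a = read_free()        # host A caches the superblock: 1000
cache_b = read_free()        # host B caches the same superblock: 1000
write_free(cache_a - 100)    # host A allocates 100 blocks and flushes: disk says 900
write_free(cache_b - 100)    # host B allocates 100 blocks and flushes: disk says 900

print(read_free())           # 900, not 800: host A's allocation was lost
```

In a real filesystem the "counter" is allocation bitmaps and directory metadata, which is why uncoordinated dual-mounting corrupts NTFS rather than just losing a number.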

I am VERY interested in your attempts and what you learn; please keep us apprised. I'll help where I can.
 
Great series, Greg. Any idea when Part 3 will be ready? I'm anxiously awaiting your update.

Do you think you might consider adding a used fiber channel switch and doing some multipathing? From the very little I've read, it seems SCST is fairly straightforward.

My goal would be to have the SAN with two client nodes, each with two paths to the SAN. I could probably skip the fiber switch and just get an extra card or two in the SAN.
 
Could you tell me what the difference is between a storage appliance like this (DDN S2A6620) and building out this fibre channel SAN? It looks like these products are closer to a very large NAS than a SAN. I don't see a motherboard, CPU(s), or memory.
 
The definitional difference is that a SAN provides storage at the block level and a NAS at the file level. According to the spec sheet for the S2A6620, it provides multiple 8 Gig fibre channel interface points.

It also seems to have a managed cache of 12 GB, and is highly optimized for disk I/O.

Ol'Shuck is a Volkswagen next to that Mercedes, but they are more alike than different.
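If it helps to picture the block-vs-file distinction, here is a minimal Python sketch; the paths are made up for illustration, not taken from Greg's build or the DDN box.

```python
# Hypothetical paths, purely to contrast block-level (SAN) and file-level (NAS) access.

# SAN / block level: the FC LUN appears as a raw block device; the client's own
# filesystem (NTFS, VMFS, ...) decides what the bytes mean.
with open("/dev/sdb", "rb") as lun:                # made-up device node for the LUN
    lun.seek(1024 * 1024)                          # byte offset 1 MiB
    raw_blocks = lun.read(4096)                    # 4 KiB of anonymous blocks

# NAS / file level: the appliance's filesystem maps names to blocks for you;
# the client only ever asks for files over NFS/SMB.
with open("/mnt/nas_share/video.mkv", "rb") as f:  # made-up network mount
    data = f.read(4096)
```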
 
Status on the 3rd part?

I really like this series and plan to build this out for use in a home lab as storage for some half-size servers as well as some home media storage.

Thank you for everything you have done so far.

Fred
 
You're welcome. There won't be a Part III, however.
 
Say it ain't so!

No part 3?

Can you elaborate? Are there further issues that can be addressed here by the community?

Fred
 
Part 3 would have been pretty esoteric and of very limited interest.
 
How to get multiple LUNs

How can you get multiple LUNs from this build? Do you need to have multiple RAID cards? Or can you divvy up the drives on one RAID card and make multiple LUNs from that?

I only ask because I would like to do this at home, get a fiber switch, hook up all my servers to a SAN, and consolidate all the internal HDs (D:\ drives) into the SAN.

Thanks

Love the How to's
 
Clifton,

Ol' Shuck has three LUNs, one of which spans two different RAID cards. So yes, yes, and yes. There is no limit to the number of LUNs.

I run multiple RAID cards because no one RAID card provided 20-drive capability.

If you are willing to build a DAS Node (a networked PC with a fiber card), a fiber switch is unnecessary, but if you are widely distributing different LUNs, a switch could be handy.

Lessons learned from Ol'Shuck: go with a nice case that is both quiet and cool. And provided you can deal with a 2TB limit on the size of LUNs, the only way you can share a LUN across multiple clients is by using VMFS (which has the 2TB limit), or by using a DAS Node.
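As an aside, a hard limit sitting right around 2TB typically traces back to 32-bit sector counts over 512-byte sectors (the scheme MBR partition tables use); a quick check of the arithmetic:

```python
# Where a limit right around 2TB typically comes from: a 32-bit sector count
# over 512-byte sectors (as in MBR partition tables) tops out at 2 TiB.
sector_size = 512                # bytes per sector
max_sectors = 2 ** 32            # 32-bit sector addressing
limit_bytes = sector_size * max_sectors
print(limit_bytes / 2 ** 40)     # 2.0 (TiB)
```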

The other lesson is that RAID cards are not required; unless you want the redundancy and reliability they offer, the performance gain is not significant. Using a non-RAID SATA or SAS HBA is a real option.

I can strongly recommend a PCI-X based server. I recently built a 2U server with dual quad-core 64-bit 3.16GHz Xeons, 16GB of memory, and a 3Ware RAID controller for less than $650, all in. It could support 18 drives.

I can report that Ol'Shuck is now, with the advent of 4TB drives, capable of exporting 70-some terabytes in a RAID 5 configuration.
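For anyone checking that figure, it lines up with a 20-drive set of 4TB disks in RAID 5, where one drive's worth of capacity goes to parity; the drive count is an assumption taken from the 20-drive capability mentioned above.

```python
# Sanity check of the "70-some terabytes" claim (drive count assumed to be 20).
drives = 20                          # assumed from the 20-drive capability noted above
drive_tb = 4                         # 4TB drives
usable_tb = (drives - 1) * drive_tb  # RAID 5 usable capacity = (N - 1) * drive size
print(usable_tb)                     # 76 TB, in the "70-some" range before formatting overhead
```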

How big are you thinking? What speed FC HBAs? Have you started searching for hardware?
 
great article

I'm seeing mysterious performance figures in the NASPT benchmark for File Copy to NAS: all the other figures are where you would expect them to be, but this one is anomalously low. Stripe size isn't enough to explain the figure, and (a bit of a preview) it doesn't get better under SAN performance. Any ideas?

Also, anyone have hands-on experience tuning Openfiler for 3Ware RAID controllers? Lessons learned?



Hello Greg,
Great informative article there.
Just a few questions. I see the NAS file copy benchmark results; do you think it may be a network interface bandwidth issue?
You said your motherboard has two NIC ports. Have you tried link aggregation? Can that be done in Openfiler? Will that improve the read and write throughput?
 

Great article there Greg, very informative.

One thing which I don't understand though is why there is so much difference between the HD playback results and the file copy from NAS results.
One more thing: you stated that the motherboard has two NICs.
Have you tried link aggregation? Does Openfiler support this?
Will that effectively increase read throughput?
 

HD playback is the continuous "playback" of a single large file, whereas file copy is based on a large number of small reads across multiple files, the most demanding form of I/O. That is impacted by TCP/IP tuning, stripe size, and disk speeds (caching doesn't help, either on the RAID card or in the machine, because the reads are too small). HD playback benefits from read-ahead, caching, and the large stripe size.

Openfiler supports link aggregation and it works fine, but it does not impact Old Shuck's throughput, largely because all file access (reads and writes) goes through the fibre channel interface as a SAN. When I ran it as a NAS, before converting it to a SAN, I did not test aggregated throughput, but I see no reason there would not be improved performance in a multi-node environment.

If you look at the gallery for Ol'Shuck, you will see the final NASPT performance chart.
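If you want to feel the difference between those two access patterns yourself, a rough sketch like the following works; the paths are placeholders, not anything from the build, and it only mimics the shape of the I/O, not the NASPT benchmark itself.

```python
# Rough sketch of the two access patterns discussed above (placeholder paths).
# "HD playback": one big file read in large sequential chunks, where read-ahead
# and RAID-card caching help a lot. "File copy": many files read in small chunks,
# where per-file overhead dominates and the reads are too small for caching to help.
import os, time

def stream_one_large_file(path, chunk=1024 * 1024):
    """Playback-like: large sequential reads from a single file."""
    with open(path, "rb") as f:
        while f.read(chunk):
            pass

def walk_many_small_files(root, chunk=16 * 1024):
    """Copy-like: small reads scattered across many files."""
    for dirpath, _dirs, names in os.walk(root):
        for name in names:
            with open(os.path.join(dirpath, name), "rb") as f:
                while f.read(chunk):
                    pass

t0 = time.time(); stream_one_large_file("/mnt/san/movie.mkv")
print("playback-like:", round(time.time() - t0, 2), "s")

t0 = time.time(); walk_many_small_files("/mnt/san/photos")
print("copy-like:    ", round(time.time() - t0, 2), "s")
```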
 

I have never used these benchmarks myself; I thought file copy meant copying a single large file.
So the results indicate that if I copy a large file (~1 GB), I will get the same speed as HD playback?

I currently don't have a machine with two NICs, but I will surely test link aggregation and see how effective it is.
 
