Using your NAS to host virtual machines!

Dennis Wood

Senior Member
With most NAS units online 24/7, there is a very compelling argument to be able to use them to host virtual machines.

For many outside the IT world, the term "virtual machine" may not mean much, but the concept is fairly simple. On most small servers, the CPU sits idle the majority of the time, yet in the past most of us would still build multiple physical computers for different tasks. Most recent processors have a virtualization feature set (http://ark.intel.com/Products/VirtualizationTechnology) that allows one CPU to host multiple operating systems simultaneously. This means you can run Windows, Ubuntu Server, other Linux distributions, etc., all on the same computer, all fully functioning at the same time.
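If you want to confirm a given CPU actually has this feature set, any Linux box will tell you from its CPU flags. A minimal sketch (generic Linux, nothing QNAP-specific):

```python
# Minimal check (any Linux machine) for the hardware virtualization flag:
# "vmx" is Intel VT-x, "svm" is AMD-V.
def cpu_supports_virtualization(cpuinfo="/proc/cpuinfo"):
    with open(cpuinfo) as f:
        for line in f:
            if line.startswith("flags"):
                flags = line.split(":", 1)[1].split()
                return "vmx" in flags or "svm" in flags
    return False

print("virtualization supported:", cpu_supports_virtualization())
```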

Some of the QNAP NAS units use processors that support virtualization. QNAP has made me a happy guy by offering a very slick solution, the Virtualization Station (QTS 4.1), that makes it very simple to set up and maintain virtual machines. You can see below that we're running a Windows Server 2012 R2 VM (backup domain controller) as well as a Windows 8.1 workstation. This configuration removes the need for us to build up a separate server, and allows us to power down a remote access machine that used to run 24/7. Essentially we're running a NAS, a Windows server, and a workstation in the same power footprint as the single NAS.

[Screenshot vm.jpg: Virtualization Station console showing the Server 2012 R2 and Windows 8.1 VMs]


What do you need to set up your own?

1. The Virtualization Station will not let you add a VM until you add RAM to the NAS. On the QNAP TS-470 Pro or TS-870 Pro, these are DDR3 1333 SODIMM modules, about $35 for 4GB. These NAS units (which use an i3 processor) ship with 2 x 1GB SODIMMs, so you'll need to remove/replace one (or both) SODIMMs to add more RAM. On these particular NAS units the RAM looks to be impossible to get at without major disassembly, however it's actually easy. Remove drives 1, 2 and 3 as well as the NAS case cover, then remove the ATX power plug. You'll see that you can sneak the RAM in with one hand coming in from the top of the NAS, and the other reaching into drive bay 1. I ended up removing the 2 x 1GB SODIMMs and replacing them with 2 x 4GB SODIMMs for a total of 8GB of RAM.

2. You'll need a .iso image of your OS installation disk somewhere on the NAS, which you will point the wizard to during creation of your virtual machine. Once the VM is created (about 20 seconds), the OS installation will start once you "power on" the virtual machine. Alternatively you can create a VM image from an existing physical workstation and import it. (A quick pre-flight sketch for steps 1 and 2 follows this list.)

3. At least one of the NAS LAN ports will be dedicated to the virtual machine when you create it. If you create additional VMs, you can either use that one LAN connection for all of them, or add dedicated ports if you have enough to spare :) We are using the 1GbE ports for the two VMs, and the added 10GbE ports for the NAS file traffic.

4. You can access your VM either from the built-in console, or via Windows Remote Desktop etc. as appropriate. Either way the virtual machine behaves exactly like a discrete computer, so once installed, it is administered exactly as you might expect.
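Tying steps 1 and 2 together, here's a hypothetical pre-flight check you could run on any Linux host before creating a VM; the share path, VM size, and reserve figure are all assumptions for the example:

```python
# Hypothetical pre-flight check: enough host RAM for the VM you plan to
# create, and the installer .iso sitting where you expect it.
import os

ISO_PATH = "/share/ISOs/win8.1.iso"  # hypothetical path on a NAS share
VM_RAM_MB = 4096                     # RAM you plan to assign to the VM
NAS_RESERVE_MB = 2048                # leave this much for the NAS itself

def total_ram_mb(meminfo="/proc/meminfo"):
    """Total installed RAM in MB, read the same way `free` does."""
    with open(meminfo) as f:
        for line in f:
            if line.startswith("MemTotal:"):
                return int(line.split()[1]) // 1024  # value is in kB
    raise RuntimeError("MemTotal not found in " + meminfo)

ram = total_ram_mb()
if not os.path.isfile(ISO_PATH):
    print("ISO not found:", ISO_PATH)
elif ram - VM_RAM_MB < NAS_RESERVE_MB:
    print(f"Only {ram} MB installed; add RAM before creating this VM")
else:
    print(f"OK: {ram} MB installed, {ram - VM_RAM_MB} MB left for the NAS")
```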

So there you go. Easy Peasy.

QNAP's Virtualization Station tutorial: http://www.qnap.com/useng/index.php?lang=en-us&sn=9595

Cheers,
Dennis.
 
Thank you for the write-up, Dennis.

It's amazing how far consumer gear has come in hosting VMs and such. Enterprise gear such as NetApp has had this ability for a while, but it's obviously more expensive.
 
I actually use something similar, but it's a home-built server running ESXi that hosts my NAS and a handful of other VMs. Works perfectly and I have absolutely no issues with it.
 
You're welcome, Kris :) T, I did try out booting a workstation from a 1GB flash card running ESXi, and was impressed with the efficiency. It did take some reading, registration, downloads, boot prep and then client config from a remote admin workstation. The impressive thing about the NAS implementation is that it comes with support, and it took all of 5 minutes to get the VM created and the OS installed. My issue with ZFS in general was write performance: we were looking for 500MB/s from a six-disk parity array. Otherwise ZFS in a virtual environment is very compelling.

My take on virtualization in general (particularly in a net zero building) is that a lot of power can be saved. Making it a few clicks from a web GUI gets my thumbs up :)
 
I'm actually running a ZFS pool on one of my VMs with some pretty good write speed. But I have a dedicated SATA controller and a direct path passthru to the VM.

It was probably way more expensive to do it this way, but it works well!
 
I'm waiting patiently for a fast ZFS solution, as the file system is clearly superior. Microsoft is at least attempting a solution with ReFS, but for now a $180 RocketRAID 2720 is providing sustained 500MB/s writes and 1000MB/s reads (from a 6-disk RAID 5 array) over 10GbE and NTFS. From my research on ZFS, I wasn't able to find anything close to this using a 6 to 8 disk array. I also tested Microsoft Storage Spaces with parity and could get to 350MB/s writes (reads at 900MB/s were no problem), but at the cost of 4 x SATA3 SSD drives dedicated to journaling. A hardware solution to accelerate ZFS writes would be awesome. Our needs are a bit exceptional in terms of satisfying multiple workstation loads, 4 of them 10GbE enabled for RAW photo/video processing. At the 750MB/s range there is definitely a wall (with large file transfers) over 10GbE, with Samba's SMB3 not having multichannel support. Microsoft has a clear advantage for now with 10GbE SMB3 multichannel, but sadly no ZFS support, and ReFS is not ready for prime time yet. We've settled on a hybrid solution: 2012 R2 Server combined with the TS-870, both 10GbE equipped.
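For anyone trying to compare numbers like these on their own array, a crude sequential-write test along these lines gives a first approximation. This is a sketch, not how we measured; the mount path is hypothetical, and real figures depend heavily on caching, network, and protocol (SMB, iSCSI, etc.):

```python
# Crude sequential-write benchmark, nothing vendor-specific: write a few
# GB of incompressible data, fsync, and report MB/s.
import os, time

TEST_FILE = "/mnt/array/bench.tmp"  # hypothetical mount point
BLOCK = 1024 * 1024                 # 1 MiB per write
TOTAL_MB = 4096                     # 4 GiB total, to get past RAM caches

buf = os.urandom(BLOCK)             # random data so compression can't cheat
start = time.time()
with open(TEST_FILE, "wb") as f:
    for _ in range(TOTAL_MB):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())            # make sure it actually hit the disks
elapsed = time.time() - start
os.remove(TEST_FILE)
print(f"{TOTAL_MB / elapsed:.0f} MB/s sustained write")
```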

Between virtualization and ZFS, I'm seeing excellent "all in one" solutions out there (where bleeding-edge speeds are not required), with pfSense, NAS, and server roles all running on one low-power Avoton board. Very impressive stuff. I'd love to build up something around the Silverstone DS380 ( http://www.silverstonetek.com/product.php?pid=452 ) ... just need a 10GbE embedded mini-ITX board.

At some point I'll test out running the VMs over one of the 10GbE ports on the QNAP box instead of the 1GbE. For now, the VMs I've set up have no need for 10GbE.
 
Excellent write-up.

I don't know if you have had a chance to play around with the Virtualization Station much, but on the off chance you have, I thought I would send my questions to you first since you seem very knowledgeable.

#1. Is there any recommendation on whether to stick with the base drivers Windows detects during a basic VM install in this environment, or should we use the VirtIO storage and network controller drivers for better performance? I looked over the QNAP site and various tutorials and couldn't find a recommendation.

#2. Do you need to use a dedicated network port for your VMs, or can you use the primary NAS connection? I want to trunk my QNAP NIC ports, so I would prefer not to leave one out just to let VMs use it.

#3. Are the memory and CPU resources statically allocated to the VMs, or are they dynamically shared?

Thanks again!
 
I have had several images live since the first post, with zero issues. I have also restored them once from a backup of each VM, just to test disaster recovery, again with zero issues. I'm just using the default drivers in the VMs, as performance is not an issue on the BDC and remote access VMs in use.

You can have all VMs on one connection; however, one NAS port is used up. We have four ports, so the two 1GbE ports are dedicated to the VMs, and the 10GbE ports to network and server backup.

It looks like memory is statically assigned (you set it as you want for each VM), but CPU cores are shared with the NAS. I upgraded only to 8GB, so I'm using 2GB for the BDC VM and 4GB for the Windows 8.1 workstation VM, leaving 2GB RAM for the NAS, which is all it will use.

Hope that helps.
 
Thanks for the response!

So there is no way to share a NIC team between the main NAS and the VMs on it? I.e., you have to dedicate a NIC to the VMs? I ask because I was planning on providing a 2 x 1GbE LACP team for the standard NAS communications and VMs, and then a 2 x 1GbE team (I think LACP) for iSCSI access. If I have to dedicate a NIC to VM access instead of sharing it with the NAS, then my plans have to change.

Thanks for the other information as well!
 
Correct, the NAS will not share a NIC with the VMs. This is one reason 10GbE is so attractive in the closet: it cuts the port count down.

I do have two iSCSI drives mounted on the NAS, but they are targeted via 10GbE. One is a dedicated Server 2012 R2 backup volume (2TB); the other is the BDC VM volume, which stores the rsync secondary backup (6TB). Both are set up as file based with thin provisioning. So far, so good.
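For anyone wondering what "file based with thin provisioning" means in practice: the LUN's backing file claims a large apparent size, but blocks are only consumed as data is written. A minimal illustration of the idea (names and sizes are made up; this is not QNAP's actual implementation):

```python
# Sparse backing file: large apparent size, near-zero actual usage until
# data is written. Works on filesystems that support sparse files.
import os

BACKING = "/share/iscsi/target0.img"  # hypothetical backing file
APPARENT = 2 * 1024**4                # 2 TiB apparent size

with open(BACKING, "wb") as f:
    f.truncate(APPARENT)              # sparse: allocates no data blocks yet

st = os.stat(BACKING)
print(f"apparent size: {st.st_size / 1024**4:.1f} TiB")
print(f"actual usage : {st.st_blocks * 512 / 1024**2:.1f} MiB")
```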

If you spring for the NAS 10GbE card, then you can also install a 10GbE NIC in your server for direct-connected iSCSI access.
 
Thanks again for your follow up.

That sucks that we can't share the NICs between the NAS and the VM environment. I really want two NIC teams to mirror my customer environment, so I am not sure what I will do now.
 
Hi

Thank you for your excellent write-up. I need some clarity because I'm a newbie and don't know much about IT.

Does this mean I'd be able to have the NAS run a virtual OS such as Windows 7/8 and still have the ability to access my data at the same time?
For example, if there are 2 x HDs in the NAS, could one be used for the virtual OS and the other for my data, which I'd want to access across my LAN as well as from the outside world?

Would I also be able to install antivirus software on the VM? And how would I access it? For example, would it be a case of plugging the IP address of the NAS into a browser and following it with a port number, or does it need to be done using the mstsc command?
 
NAS = network attached storage.

You are asking for something different.

See my post above.

FYI, I have an NL36 MicroServer ... the first one.
As mentioned they aren't very powerful, but I have mine running a bunch of VMs and it goes OK.

The only time it's slow is when updating the guests' patches, and I have that set to run overnight.
 
I urge you to do virtual machine work on a computer/PC, not a NAS, for many technical and prudence reasons.
 
Bubble, QNAP's NAS OS is Linux based, so the Virtualization Station just makes it a lot easier to set up VMs. The guide I linked to goes through how to "install" the OS. You feed the Virtualization Station the path to an .iso file (sitting on a NAS share) corresponding to the OS you want to install. The console (part of Virtualization Station) allows you to install the OS exactly as you would on a physical machine. The VM files themselves sit on the NAS; we created a separate shared folder on the NAS for them with no client access. Backup is done by taking VM snapshots, and it's a good idea to copy the VM image files to external backup. I was able to import them right back in after a bare-metal NAS rebuild.
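A sketch of that "copy the VM image files to external backup" step, assuming a dated copy of the VM share to another volume (the folder names are hypothetical, and you'd want the VMs powered down or snapshotted first so the images are consistent):

```python
# Dated copy of the VM image folder to an external backup volume.
# Hypothetical paths; shut the VMs down (or snapshot) before copying.
import datetime, os, shutil

VM_DIR = "/share/VMs"                       # NAS share holding VM images
BACKUP_ROOT = "/share/external/vm-backups"  # e.g. an external volume

dest = os.path.join(BACKUP_ROOT, datetime.date.today().isoformat())
shutil.copytree(VM_DIR, dest)               # fails if dest already exists
print("VM images copied to", dest)
```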

The NAS data/services/shares etc. are completely accessible as normal with virtual machines running. You access the VMs either via the console provided with the Virtualization Station, or you just remote into the VM using RDP for Windows; you may want to set a static IP on the network interface in the VM to make this easier. As for antivirus etc., you would install it exactly as you would on a normal workstation. The i3 processor in the TS-870 Pro is mostly idle, and is therefore an excellent host for VMs in my opinion.
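If RDP isn't connecting, a quick reachability check like this can tell you whether the VM's Remote Desktop service is even answering before you fight with mstsc. The IP is whatever static address you gave the VM (hypothetical here); 3389 is the standard RDP port:

```python
# Probe the VM's RDP port with a short TCP connect timeout.
import socket

VM_IP = "192.168.1.50"   # hypothetical static IP set inside the VM
RDP_PORT = 3389          # standard Windows Remote Desktop port

with socket.socket() as s:
    s.settimeout(3)
    try:
        s.connect((VM_IP, RDP_PORT))
        print(f"RDP reachable at {VM_IP}:{RDP_PORT}")
    except OSError as exc:
        print(f"No RDP at {VM_IP}:{RDP_PORT}: {exc}")
```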

After 6 weeks or so of 24/7 operation, zero issues. Just make sure you are running the latest version of QTS 4.1, posted June 6. Remember that the Virtualization Station will only run on select NAS units where the hardware is up to the task.
 
