I spent some time this weekend upgrading my home lab. I needed to improve my shared storage and hoped I could reuse old hardware instead of buying something new and expensive. I’ve been using a QNAP TS-459 Pro II Turbo NAS for the last couple of years. Its iSCSI performance is acceptable for 1-5 VMs, but whenever I needed to build a complete View environment or a vCloud Director lab I usually reverted to local storage, which rather defeated the purpose.
I started looking around for storage solutions that could give me loads of IOPS from 4+ SATA disks and three SSDs, with options like auto-tiering and VAAI, plus 2TB+ of usable storage. In my professional work I deal daily with this kind of enterprise storage box, and nowadays it is all about software. Would it be possible to build something like that myself?
I sent some tweets to storage people around the globe, and everybody answered: “try #Nexenta, it has caching and VAAI”. Now this gets interesting!
My QNAP needed some more terabytes to store my music and movies, so I bought 4x Seagate Barracuda 2TB 7200rpm 64MB SATA3 drives to replace the old 4x 1TB drives, which freed those up for reuse.
I’ve got 2x HP XW9400 workstations, each equipped with 16GB of memory and two AMD dual-core CPUs, that I used to run ESXi 5 on.
The workstation has 8 SATA ports on board but only room for 4 SATA drives inside the system, so I bought:
1x Chieftec SNT-3141 SATA 4-HDD hot-swap bay. I removed the CD-ROM and floppy drives and placed the hot-swap bay in the workstation, which upgraded me to 8 SATA bays.
On the network side I added a 4-port Intel gigabit server network card. My management traffic arrives on the onboard network card, and my iSCSI stack runs on the Intel card. The iSCSI ports are set to an MTU of 9000 (jumbo frames).
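For reference, on the ESXi 5 side jumbo frames can be enabled from the command line. This is a sketch with assumed names (vSwitch1, vmk1/vmk2 and the target IP are examples, adjust to your own setup):

```shell
# Raise the MTU on the vSwitch that carries the iSCSI vmkernel ports
esxcli network vswitch standard set -v vSwitch1 -m 9000

# Raise the MTU on each iSCSI vmkernel interface
esxcli network ip interface set -i vmk1 -m 9000
esxcli network ip interface set -i vmk2 -m 9000

# Verify end-to-end jumbo frames from the ESXi shell
# (8972 bytes payload = 9000 minus IP/ICMP headers, -d sets don't-fragment)
vmkping -d -s 8972 192.168.10.10
```

If the vmkping fails while a normal ping works, a switch or NIC in the path is not passing jumbo frames.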
Installing NexentaStor CE on HP XW9400
It took more time than I expected; the installer is very slow. Just wait and it will finish eventually. I also needed to change some options in the BIOS to get rid of some PCI errors.
Once up and running, I found that all onboard interfaces are supported: all network ports were working, and the onboard 8-port SATA controller was visible with all 8 disks attached! This made me more than happy!
As you can see, my volume volume1 is composed of 5 disks in RAIDZ1, with one 120GB SSD as the L2ARC read cache and one 60GB SSD as the ZIL (SLOG) write log device.
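Under the hood this layout is a plain ZFS pool. Nexenta’s web interface builds it for you, but the command-line equivalent would look roughly like this (the c0tXd0 device names are hypothetical, check yours under Data Management or with `format`):

```shell
# 5-disk RAIDZ1 pool with an SSD read cache (L2ARC)
# and a separate SSD log device (ZIL/SLOG)
zpool create volume1 raidz1 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 \
  cache c0t6d0 \
  log c0t7d0

# Verify the layout: the cache and log vdevs show up
# as separate sections in the status output
zpool status volume1
```

RAIDZ1 over 5 disks gives the capacity of 4 disks with single-disk redundancy; the L2ARC soaks up random reads while the SLOG absorbs synchronous writes such as those generated by ESXi over NFS/iSCSI.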
Because I don’t have more than 16GB of RAM in the server, I decided not to use the deduplication feature of NexentaStor; the dedup table needs to fit in RAM to perform well.
vStorage API for Array Integration (VAAI) provides several benefits. In effect, the ESX hypervisor instructs the storage controller to off-load certain tasks and perform them at the storage controller level, leaving I/O and CPU cycles available to the VMs:
- SCSI write same. Accelerates zero block writes when creating new virtual disks.
- SCSI ATS. Enables a specific LUN region to be locked instead of the entire LUN when cloning a VM.
- SCSI block copy. Avoids reading and writing of block data through the ESX host during a block copy operation.
- SCSI unmap. Enables freed blocks to be returned to the pool for new allocation when no longer used for VM storage.
VAAI support applies only to block-based protocols such as iSCSI and FC. The offloaded SCSI commands above are all performance related.
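You can check which VAAI primitives ESXi has detected per device from the command line (the naa. identifier below is an example):

```shell
# Show VAAI primitive support (ATS, Clone, Zero, Delete) for all devices
esxcli storage core device vaai status get

# Or for a single LUN
esxcli storage core device vaai status get -d naa.600144f0aabbccdd0000000000000001
```

A device that reports the primitives as "supported" will show Hardware Acceleration as Supported in the vSphere Client datastore view.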
This got me thinking… block level…. FC… let’s see if I can get this to work… I jumped up, dug through my old gear, and found 2x 4Gb QLogic Fibre Channel cards. I know this is not something everybody has lying around (it’s still OK if you use iSCSI).
I found a good article that explained how to enable FC target mode on Oracle Solaris. I shut down the Nexenta box, plugged in a 4Gb FC adapter, and configured the ports into target mode. It worked like a charm. (Watch out when updating the Nexenta software: an update will disable FC target mode again.)
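The usual Solaris/COMSTAR procedure is to rebind the QLogic HBA from the initiator-mode driver (qlc) to the target-mode driver (qlt). A sketch, assuming a 4Gb QLE246x-class card; the PCI ID is an example, check yours first:

```shell
# Find the PCI ID of the QLogic HBA
prtconf -v | grep pciex1077

# Unbind the initiator-mode driver and bind the COMSTAR target-mode driver
update_drv -d -i 'pciex1077,2432' qlc
update_drv -a -i 'pciex1077,2432' qlt

# After a reboot the HBA ports should show up as FC targets
stmfadm list-target -v
```

This is exactly the setting a software update can revert, which is why target mode disappears after upgrading Nexenta.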
Once the adapter was in target mode, the mappings can be configured in the web interface. I needed to map some LUNs to FC and other LUNs to iSCSI, and this can all be done there.
To split up my LUNs for testing, I created two initiator groups, each containing one ESX host: one with the WWN of the FC initiator and one with the IQN of the iSCSI initiator.
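On the Solaris/COMSTAR layer the same grouping can be done with stmfadm. A hedged sketch, where the WWN, IQN and LU GUIDs are all placeholders for your own values (list real GUIDs with `stmfadm list-lu`):

```shell
# One host group per initiator type
stmfadm create-hg fc-hosts
stmfadm add-hg-member -g fc-hosts wwn.2100001B32012345

stmfadm create-hg iscsi-hosts
stmfadm add-hg-member -g iscsi-hosts iqn.1998-01.com.vmware:esx01-12345678

# Map a logical unit to each group; -n sets the LUN ID the host will see
stmfadm add-view -h fc-hosts -n 0 600144F0AABBCCDD0000000000000001
stmfadm add-view -h iscsi-hosts -n 1 600144F0AABBCCDD0000000000000002
```

The Nexenta web interface drives these same mappings, so this is only useful if you prefer the shell or want to script it.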
I did the same on the iSCSI side with one 750GB LUN plus one RDM LUN, both added to the iSCSI initiator group.
On your ESXi 5 host, iSCSI must be configured as defined here so we can use Round Robin, make use of all your network interface cards, and get some real performance!
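The multipathing part boils down to binding both iSCSI vmkernel ports to the software iSCSI adapter and switching the path selection policy to Round Robin. A sketch with example names (vmhba33, vmk1/vmk2 and the naa. ID are assumptions):

```shell
# Bind both iSCSI vmkernel interfaces to the software iSCSI adapter
esxcli iscsi networkportal add -A vmhba33 -n vmk1
esxcli iscsi networkportal add -A vmhba33 -n vmk2

# Set the path selection policy to Round Robin for the LUN
esxcli storage nmp device set -d naa.600144f0aabbccdd0000000000000001 -P VMW_PSP_RR

# Optionally switch paths every I/O instead of every 1000 I/Os,
# which tends to spread load across both links more evenly
esxcli storage nmp psp roundrobin deviceconfig set \
  -d naa.600144f0aabbccdd0000000000000001 --type=iops --iops=1
```

Without the port binding, the software iSCSI initiator will only ever use one uplink no matter what the policy says.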
I presented three 750GB LUNs to my ESXi 5 server. In the following screenshot you can see those LUNs with LUN IDs 0, 1, and 2, while the small 4GB LUN with ID 3 is the RDM LUN. We can also see that VAAI Hardware Acceleration is Supported!
The Tests
The second test gave what I expected: a maximum of 120MB/s on the iSCSI LUN. It would have made sense to see something closer to 250MB/s, since Round Robin should make use of both 1Gb links.
This is a rather good result too, but I still get 3x the speed on the FC RDM LUN.