Building superfast whitebox storage with Nexenta CE

I spent some time this weekend upgrading my home lab. I needed to improve my shared storage and hoped I could reuse old hardware instead of buying something new and expensive. I've been using a QNAP TS-459 Pro II Turbo NAS for the last couple of years. Its iSCSI performance is acceptable for 1-5 VMs, but when I needed to build a complete View environment or a vCloud Director lab I usually reverted to local storage, which rather defeated the purpose.

I started looking around for storage solutions that could give me loads of IOPS from 4+ SATA disks and three SSDs, with options like auto-tiering and VAAI, plus 2TB+ of usable storage. In my professional work I deal with this kind of enterprise storage box daily, and nowadays it is all about software. Would it be possible to build something like that myself?

I sent some tweets to storage people all around the globe and everybody answered: "try #Nexenta, it has cache and VAAI". Now this was getting interesting!

Hardware

My QNAP needed some more terabytes to store my music and movies, so I bought 4x Seagate Barracuda 7200 2TB 7200rpm 64MB SATA3 drives to replace the old 4x 1TB drives, freeing those up for reuse.

I've got 2x HP XW9400 workstations equipped with 16GB of memory and two AMD dual-core CPUs, which I used to run ESXi 5 on.

The workstation has 8 SATA ports onboard but only room for 4 SATA drives inside the case, so I bought a Chieftec SNT-3141 4-bay hot-swap SATA enclosure, removed the CD-ROM and floppy drives, and placed the hot-swap bay in the workstation. This brought me up to 8 usable SATA bays.

I had one OCZ Vertex2 120GB SATA2 SSD lying around (used for the ZFS L2ARC cache) and purchased one OCZ Vertex4 64GB 2.5″ SATA3 SSD (for the ZFS intent log (ZIL)).

On the network side I added a 4-port Intel gigabit server network card. My management traffic arrives on the mainboard network port, and my iSCSI stack runs on the Intel card. The iSCSI ports are set to an MTU of 9000 (jumbo frames).
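
For reference, jumbo frames have to be set on both ends: on the Nexenta/Solaris side this can usually be done per link with dladm, and on the ESXi 5 side on the iSCSI vSwitch and its VMkernel ports. A minimal sketch, where the interface and switch names (e1000g1, vSwitch1, vmk1) are just examples:

    # Nexenta/Solaris side: raise the MTU on the iSCSI-facing link (example name)
    dladm set-linkprop -p mtu=9000 e1000g1

    # ESXi 5 side: raise the MTU on the iSCSI vSwitch and its VMkernel port (example names)
    esxcfg-vswitch -m 9000 vSwitch1
    esxcli network ip interface set -i vmk1 -m 9000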

Installing NexentaStor CE on HP XW9400

Read the Nexenta Release Notes and download the ISO for NexentaStor Community Edition 3.1.2.

It took more time than I expected; the installer is very slow. Just wait and it will finish eventually.
I also needed to change some options in the BIOS to get rid of some PCI errors.

Once it was up and running I found that all onboard interfaces are supported: all network ports were working, and the onboard 8-port SATA controller was visible with all 8 disks attached! This made me more than happy!
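
If you want to double-check from the console that the controller and all disks are really seen, the standard Solaris tools work on Nexenta too; just a quick sanity check, nothing Nexenta-specific:

    # List the disks the OS sees (format exits straight away when fed no input)
    format < /dev/null

    # Show the SATA controller attachment points
    cfgadm -al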

Storage Layout

As you can see, my volume volume1 is composed of 5 disks in RAIDZ1; I'm using the 120GB SSD as the L2ARC read cache and the 60GB SSD as the dedicated ZIL (SLOG) device for writes.
Because I don't have more than 16GB of RAM in the server, I decided not to use the de-dupe functionality of NexentaStor.
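
NexentaStor builds all of this through the web GUI, but the underlying ZFS layout corresponds to roughly the following. This is just a sketch; the cXtYdZ device names are placeholders for whatever your box reports:

    # RAIDZ1 pool over 5 SATA disks, 120GB SSD as L2ARC, 60GB SSD as ZIL/SLOG
    # (device names are examples; check yours with `format` first)
    zpool create volume1 raidz1 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 \
        cache c0t5d0 \
        log c0t6d0

    # With only 16GB of RAM, leave dedup off (it is off by default anyway)
    zfs set dedup=off volume1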

VAAI support!

vStorage API for Array Integration (VAAI) provides several benefits. In effect, the ESX hypervisor instructs the storage controller to off-load certain tasks and perform them at the storage controller level, leaving I/O and CPU cycles available to the VMs.

  • SCSI write same. Accelerates zero block writes when creating new virtual disks.
  • SCSI ATS. Enables a specific LUN region to be locked instead of the entire LUN when cloning a VM.
  • SCSI block copy. Avoids reading and writing of block data through the ESX host during a block copy operation.
  • SCSI unmap. Enables freed blocks to be returned to the pool for new allocation when no longer used for VM storage.

VAAI support applies only to block-based protocols such as iSCSI (and FC). Apart from ATS, which is about locking, these offloaded SCSI commands are all performance related.
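
You can check per device on the ESXi 5 host which of these offloads are claimed; the naa identifier below is obviously a placeholder:

    # Show VAAI (hardware acceleration) status for all devices on the host
    esxcli storage core device vaai status get

    # Or for one LUN only (replace the naa id with your own)
    esxcli storage core device vaai status get -d naa.600144f0xxxxxxxxxxxxxxxxxxxxxxxx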

FC

This got me thinking… block level… FC… let's see if I can get this to work. I dug through my old gear and found 2x 4Gb QLogic Fibre Channel cards. I know this is not something everybody has lying around (it's still OK if you use iSCSI :) ).

I found a good article describing how to enable FC target mode on Oracle Solaris. I shut down the Nexenta box, plugged in a 4Gb FC adapter and configured the ports into target mode. It worked like a charm. (Watch out when updating the Nexenta software: an update will disable FC target mode again.)
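
For reference, the usual Solaris/COMSTAR approach is to unbind the QLogic ports from the initiator driver (qlc) and bind them to the target driver (qlt). A sketch of what I mean; the PCI id below is the common one for 4Gb QLE246x cards, but verify yours before touching anything, and remember this is exactly the setting a Nexenta software update can undo:

    # Move the QLogic HBA from the initiator driver (qlc) to the COMSTAR
    # target driver (qlt). Verify the PCI id of your own card first!
    update_drv -d -i '"pciex1077,2432"' qlc
    update_drv -a -i '"pciex1077,2432"' qlt

    # After a reboot the ports should show up as targets
    stmfadm list-target -v
    fcinfo hba-port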

Once the adapter was in target mode, the mappings can be configured in the web interface.
I needed to map some LUNs to FC and some other LUNs to iSCSI; this can all be done there as well.


I created 2 initiator groups, each containing one ESX host, to split up my LUNs for testing purposes.
One contained the WWN of the FC initiator and the other the IQN of the iSCSI initiator.
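
Under the covers these are plain COMSTAR host groups. If you prefer the shell over the web UI, the equivalent is roughly the following, where the group names, WWN and IQN are made-up examples:

    # One host group per protocol, with the ESXi initiators as members
    # (the WWN and IQN below are placeholders for your own initiators)
    stmfadm create-hg esx-fc
    stmfadm add-hg-member -g esx-fc wwn.2100001b32xxxxxx

    stmfadm create-hg esx-iscsi
    stmfadm add-hg-member -g esx-iscsi iqn.1998-01.com.vmware:esx01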

I created 3x 750GB LUNs for all my VMs and one 4GB LUN to add as an RDM for testing purposes.
I added all 4 to the Fibre Channel group.

I did the same on iSCSI with one 750GB LUN and one RDM LUN, and added them to the iSCSI initiator group.
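
Again, the web interface does all of this for you, but for completeness the shell equivalent is roughly: create a zvol, register it as a logical unit, and map it to the right host group. The names, size and GUIDs below just mirror my setup and are examples:

    # Create a 750GB zvol and register it as a SCSI logical unit
    zfs create -V 750g volume1/vmfs01
    sbdadm create-lu /dev/zvol/rdsk/volume1/vmfs01

    # Map the LU to the FC host group (use the GUID that `stmfadm list-lu` reports)
    stmfadm add-view -h esx-fc -n 0 600144f0xxxxxxxxxxxxxxxxxxxxxxxx

    # Same idea for the iSCSI host group, with its own LUN number
    stmfadm add-view -h esx-iscsi -n 0 600144f0yyyyyyyyyyyyyyyyyyyyyyyy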

On your ESXi 5 host, iSCSI must be configured as defined here, so that we can use Round Robin, make use of all your network interface cards and get some real performance!
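
If you would rather script it than click through the GUI, port binding on ESXi 5 looks roughly like this; the adapter and VMkernel names (vmhba33, vmk1, vmk2) and the portal address are examples, so check your own first with 'esxcli iscsi adapter list':

    # Bind each iSCSI VMkernel port to the software iSCSI adapter (example names)
    esxcli iscsi networkportal add -A vmhba33 -n vmk1
    esxcli iscsi networkportal add -A vmhba33 -n vmk2

    # Point the software initiator at the Nexenta portal (example address) and rescan
    esxcli iscsi adapter discovery sendtarget add -A vmhba33 -a 10.0.10.10:3260
    esxcli storage core adapter rescan -A vmhba33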

I presented the three 750GB LUNs plus the small 4GB LUN to my ESXi 5 server. In the following screenshot you can see those four LUNs: the 750GB LUNs have LUN IDs 0, 1 and 2, while the small 4GB LUN with ID 3 is the RDM LUN. We can also see that VAAI Hardware Acceleration is Supported!

On the iSCSI side, make sure all your LUNs are configured as RR (Round Robin), so that all 4 paths to your Nexenta box are used and you get the maximum bandwidth.
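
Setting the path policy can also be done from the command line, which is handy when you have a lot of LUNs; a quick sketch, where the naa id is a placeholder:

    # Set Round Robin on a single device (replace the naa id with your own)
    esxcli storage nmp device set -d naa.600144f0xxxxxxxxxxxxxxxxxxxxxxxx -P VMW_PSP_RR

    # Or make Round Robin the default policy for everything claimed by the default AA SATP
    esxcli storage nmp satp set -s VMW_SATP_DEFAULT_AA -P VMW_PSP_RR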

As shown below, all LUNs are mounted with hardware acceleration enabled! This is going to be fun!

In the Test

I mounted the two 4GB LUNs (one FC, one iSCSI) to a Windows 2008 R2 server to do the first tests.
[E] is FC and [F] is iSCSI (I also bound the RDM disks to Round Robin).

The first test is awesome: almost 400MB/sec on the 4GB FC LUN.

The second test gave more or less what I expected: a maximum of 120MB/sec on the iSCSI LUN. I had hoped for something closer to 250MB/sec, since Round Robin should spread the load across 2x 1Gb network links.
This is still a rather good result, but I get 3x that speed on the FC RDM LUN.
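
A likely explanation (I haven't verified it on this setup) is that the default Round Robin policy only switches paths after 1000 I/Os, so a single sequential stream effectively sits on one 1Gb path at a time. Lowering the switch threshold is a common tweak to spread a single stream over both links; the naa id below is a placeholder:

    # Default Round Robin switches paths every 1000 IOs; switching every IO
    # can spread one sequential stream over multiple 1Gb paths
    # (replace the naa id with your own LUN)
    esxcli storage nmp psp roundrobin deviceconfig set -d naa.600144f0xxxxxxxxxxxxxxxxxxxxxxxx -t iops -I 1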

Page 2:  IO stats with the I/O Analyzer fling from VMware.

About Marco Broeken

Marco Broeken is the author of this blog, owner of vSpecialist Consulting and co-owner of XtraDesktop, where he currently works as a Senior Virtualization Consultant. Marco has over 15 years of experience in IT.

Comments

  1. Is Nexenta CE ready for production?
    I need to install it for VMware with 90 VMs (web, mail, etc.).

  2. Super great post! I found the same QLogic cards on eBay for around $30 a pop and upgraded my lab to 4Gbps for under $100, with 2 ESXi hosts and 1 Nexenta box.

  3. Norman says:

    Hi,
    I'm interested in your network setup on the Nexenta side. Did you team adapters on Nexenta? Following the VMware recommendations, I use the software initiator with port binding on the ESXi side.

    For example: there are 2 ESXi servers, each with 4 interfaces for iSCSI. Two of them always go to a dedicated iSCSI switch1, the other 2 go to switch2. The Nexenta box is connected to both switches. What is the right network setup for Nexenta? Aggregated interfaces?

    • I recommend using 2 IP addresses on different (isolated) subnets for iSCSI and placing both IP addresses on the ESX host as iSCSI targets,
      like in this image.

      You could easily have 8 paths to your targets and get maximum performance.

    • Norman says:

      Hi,
      Thanks, that's exactly the way I use it.
      1. But following your image, each ESX host has 2 paths per target, right?
      2. What kind of teaming do you use between the switches and the Nexenta box (LACP or trunk mode)? Are there any recommendations?
      I get 4 paths per target with this setup (2 iSCSI connections per iSCSI target (iSCSI network) x 2 targets = 4). Sometimes I get high datastore latency spikes under higher load. I'm searching for the reason…

  4. neofuxx says:

    I think there are 8 SAS and 6 SATA ports inside; are they all usable?

  5. Neat article.

    A few things I'm curious about.

    I've run NexentaStor Enterprise since the 2.x days. I know VAAI is supported with 3.x, though I was under the impression NOT in the Community Edition? Same goes for the FC management via the GUI. Am I missing something?

    • Ryan, VAAI is enabled within the Community Edition; FC is NOT. But the Community Edition is not a supported product anyway, and you CAN just enable target mode on the FC adapters yourself.

      The mapping function then just works, also for FC :) which is handy, right?

  6. Nice one, good results!

    Cheers!

  7. Hi Marco,

    Great post. It's always nice to have some FC cards just lying around!

    Looking out for the next post ;-)

    Arjan

Leave a Reply