Building my Nexenta VM using (NFS) best practices


I am currently using NexentaStor in my home lab. In my previous post I explained how I built my low-power 64GB home lab. I installed NexentaStor Community Edition on a virtual machine with the following specs:

  • 2 vCPUs
  • 8GB RAM
  • 16GB system disk on a local SSD
  • VMXNET3 network adapter for better performance
  • 7 disks attached as RDMs (4 SATA + 3 SSD); see the sketch below this list
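
As a rough sketch, local disks can be attached as physical-compatibility RDMs from the ESXi shell (the device identifier and datastore path below are hypothetical examples):

    # Create an RDM pointer file for a local SATA disk
    vmkfstools -z /vmfs/devices/disks/t10.ATA_____EXAMPLE_DISK /vmfs/volumes/datastore1/nexenta/sata1-rdm.vmdk
    # Then add sata1-rdm.vmdk to the Nexenta VM as an existing disk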

If you have a VT-d capable CPU, you might want to pass your SATA controller directly through to your Nexenta VM instead.

Download NexentaStor Community Edition from here:
http://www.nexentastor.org/projects/site/wiki/CommunityEdition

My NexentaStor VM uses SSDs as cache (read) and log (write) devices to improve performance.

  • NexentaStor uses physical HDDs as volumes for capacity. A volume can consist of a single disk or multiple disks.
  • Nexenta volumes are made up of one or more virtual devices (VDEVs) to handle physical disk failures. A VDEV can be a mirror or a RAID-Z configuration (RAID-Z, RAID-Z2, or RAID-Z3).
  • Data is striped across VDEVs, so using more VDEVs creates a wider stripe and improves performance. NexentaStor automatically distributes the load across all devices and reads from wherever the data resides. A command-line sketch of such a layout follows this list.
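
For illustration, a two-VDEV mirrored layout looks roughly like this with the underlying ZFS tools (the pool and disk names are made-up examples; NexentaStor normally handles this through its GUI):

    # Two mirrored VDEVs; ZFS stripes data across both mirrors
    zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0
    # Verify the layout
    zpool status tank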

Network Recommendations

The following are best practices for your network:

  • Separate the backend storage network from any client traffic (by using VLANs).
  • Separate the internal network from your externally-accessible networks.
  • Use Jumbo Frames (MTU 9000) end to end on your network (ESXi vSwitch and VMkernel port, physical switch, and Nexenta network interface); a sketch of the ESXi side follows this list.
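
On the ESXi side, the MTU can be set from the shell roughly like this (the vSwitch, VMkernel, and IP names are hypothetical examples for ESXi 5.x):

    # Set MTU 9000 on the vSwitch and the storage VMkernel interface
    esxcli network vswitch standard set -v vSwitch1 -m 9000
    esxcli network ip interface set -i vmk1 -m 9000
    # Verify end to end with a non-fragmenting jumbo ping to the Nexenta box
    vmkping -d -s 8972 192.168.10.10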

Create a Dataset using the GUI

First, create your ZFS volume:

  • Click Data Management > Data Sets.
  • In the Volumes pane, click Create.
  • Select the disks to assign to the volume.
  • Select an appropriate Redundancy Type.
  • In the example screenshot below I used a stripe of 4 SATA disks (for testing only: NO REDUNDANCY). I recommend mirroring all your SATA disks to get both better performance and redundancy.
    [Screenshot: volume created as a stripe of 4 SATA disks]
  • In my setup I mirrored my log disks (write cache) and used a single cache disk (read cache). The cache disk only serves reads and does not need to be mirrored.
  • The picture below shows the minimal, good, better, and best options to go for; a command-line sketch of the log and cache layout follows it:

[Image: minimal, good, better, and best pool layouts]
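
As a sketch, adding the mirrored log and the single cache device to an existing pool looks roughly like this (pool and disk names are hypothetical examples):

    # Mirror the SSD log devices (ZIL); synchronous writes depend on them
    zpool add tank log mirror c2t0d0 c2t1d0
    # A single SSD read cache (L2ARC); losing it only costs cache hits
    zpool add tank cache c2t2d0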

NexentaStor NFS Folder Options

When you create the NFS folder on the Nexenta Appliance, set the following options:

  • Click Data Management > Shares
  • Click Create and create a new share
  • Record size: 8K – 16K  (default is 128K)
  • Deduplication: OFF
  • Set compression to ON; it has minimal CPU impact. Compression ON uses LZJB compression.
    Don’t use gzip: it’s not a great fit due to threading issues in the implementation, although there are some interesting new integrations coming in more recent ZFS releases.

[Screenshot: NFS folder creation options]

  • You can disable Sync to speed up your NFS server, but this is not recommended: after a power failure you risk corrupting the data on your volumes and shares. The sketch below shows how these options map to ZFS properties.
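
For reference, these GUI options map to ZFS dataset properties; a rough shell equivalent (the dataset name tank/nfs01 is a hypothetical example):

    zfs set recordsize=8K tank/nfs01     # or 16K; the default is 128K
    zfs set dedup=off tank/nfs01
    zfs set compression=on tank/nfs01    # LZJB
    # Not recommended, but this is the Sync switch:
    # zfs set sync=disabled tank/nfs01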

 

Set NFS version

  • Click Settings > Misc. Services.
  • In the NFS Server pane, click Configure.
  • Select the Service State option to enable NFS.
  • Type 3 to set the Server version (the default is 4); vSphere still uses NFS v3. A command-line sketch follows this list.
    [Screenshot: NFS server configuration]
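
Under the hood this maps to the Solaris NFS server properties; a hedged equivalent from the Nexenta shell:

    # Cap the NFS server at protocol version 3
    sharectl set -p server_versmax=3 nfs
    # Verify the setting
    sharectl get nfs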

Enabling VAAI for NFS!

I wrote about this earlier. You can use VAAI for NFS on your virtual or physical Nexenta box. I found the beta VAAI plugin on a USB stick I got from Nexenta at VMworld Barcelona. Check it out here.

[Screenshot: VAAI for NFS plugin]
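
On the ESXi side, a NAS VAAI plugin installs as a VIB; the sketch below assumes an offline bundle whose path and filename are hypothetical examples:

    # Install the plugin bundle, then reboot the host
    esxcli software vib install -d /tmp/nexenta-vaai-nas-offline-bundle.zip
    # Check that the plugin is present
    esxcli software vib list | grep -i nexenta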

Add NFS share to your ESXi box

  • Click Data Management > Shares
  • Click on the share you created and copy the mountpoint “/volumes/zfs/nfs01” to your clipboard.
    [Screenshot: share mountpoint]
  • Use the mountpoint to mount the share on your ESXi box; a command-line sketch follows this list.
    [Screenshot: mounting the NFS datastore in vSphere]
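
From the ESXi shell, mounting the same share looks roughly like this (the Nexenta IP address is a hypothetical example):

    # ESXi 5.x: mount the Nexenta share as an NFS datastore
    esxcli storage nfs add --host=192.168.10.10 --share=/volumes/zfs/nfs01 --volume-name=nfs01
    esxcli storage nfs list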

More Resources

For more Best Practices for Running vSphere on NFS Storage, read this excellent whitepaper:
http://www.vmware.com/files/pdf/techpaper/VMware-NFS-Best-Practices-WP-EN-New.pdf

Using NexentaStor for VMware Customers:
http://info.nexenta.com/rs/nexenta/images/5000-nxs-v0.0-000002-A_nxstor_vmware_best_practices.pdf

That’s about it. If you have any remarks or questions, or if you experience issues, please respond to this post below.

 

About Marco Broeken

Marco Broeken is the author of this blog, owner of vSpecialist Consulting, and co-owner of XtraDesktop, where he currently works as a Senior Virtualization Consultant. Marco has over 15 years of experience in IT.

Comments

  1. Jumbo frames with bonded interfaces can be tricky, especially when using LACP with VLANs. I included a screenshot in my blog post on how to do this using the command line on the NexentaStor box. Basically I changed the individual interfaces to a higher MTU, then removed the IP addresses, then bonded them together and applied a VLAN; a rough sketch follows the link below.
    http://www.adcapnet.com/blog/cisco-nexenta-zfs-storage-appliance-configuration-and-benchmarking/
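
    On current illumos/Solaris, that sequence looks roughly like this (link names and addresses are hypothetical examples; older NexentaStor releases may use ifconfig instead of ipadm):

        # Raise the MTU on the physical links
        dladm set-linkprop -p mtu=9000 e1000g0
        dladm set-linkprop -p mtu=9000 e1000g1
        # Bond them with LACP, then tag a VLAN on top of the aggregate
        dladm create-aggr -L active -l e1000g0 -l e1000g1 aggr0
        dladm create-vlan -l aggr0 -v 100 storage0
        ipadm create-if storage0
        ipadm create-addr -T static -a 10.0.100.10/24 storage0/v4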

    • Please take a look at a new blog post from Chris Wahl:

      Please Don’t use portchannels for vSphere Storage Traffic
      http://wahlnetwork.com/2013/03/05/stop-using-port-channels-to-vsphere-hosts/

      • That is an excellent blog post. It addresses not using port channels on the vSphere host side, which makes sense to me.

        The article mentions setting up each port on the vSphere host in a different subnet and VLAN and then using load-based distribution algorithms to allow more than 1G of traffic out of the virtual machines. I guess that is less complicated.

        However, I don’t see how it applies to setting up port channels on the Nexenta box. The shared storage device by design has many different hosts accessing it simultaneously, so it has to make maximum use of its network connections. Load balancing over the LACP link is controlled on the network side by the switch, and on the Nexenta box by the Solaris networking driver.

        Furthermore, having two links share a single IP address provides connection redundancy that eliminates a single point of failure.

  2. Hi,

    You mention that it’s best to put the storage traffic on another network, or at least VLAN it. As you have the Intel NUCs and can’t do that, how have you got round this? And have you found any other issues with the limitations of the NUCs?

    I am looking to get a few NUCs myself and wanted to know, as I am hoping to use them as a lab for my VCP 5 cert.

    Thanks.

    • Hi Aaran,

      I am using a separate storage VLAN for my Intel NUCs, but it runs over the same Ethernet port.
      You need a VLAN-capable switch: first create a VLAN on the switch, then add a port group on that VLAN to your ESXi host and redirect your storage traffic to that port group.

      I’m still using the native VLAN for management, and created multiple VLANs (for vMotion, vCloud Director, etc.)

  3. I’d leave compression enabled (which is the default in recent releases). LZJB compression is pretty efficient, and unless you know explicitly that your workload precludes its use, it’s effectively a win.

    CPUs are crazy fast compared to disk/network cycles, so spending a few CPU cycles to get smaller IOs across inherently constrained IO paths is well worth it. Remember that the process is pretty asymmetrical anyway. In many years of ZFS deployments it’s rarely proved to be an issue, and much of the time it is not just a space saver but an IO accelerator, hence the choice we made to make it the default and have those few with more pertinent knowledge override and disable it.

    BTW, gzip is not at this time a great fit, due to threading issues in the implementation, although there are some interesting new integrations coming in more recent ZFS releases once Nexenta catches up.

    HTH

    Craig

    • Thanks Craig, I changed the post accordingly.

      I don’t use compression because of the minimal CPU power I have in my lab.

      • Understood, but even with your “minimal” CPU power, I’d enable it … it still far outstrips the network IO capability, etc. Admittedly you are virtualising Nexenta, so CPU cycles are more controlled, but that also means you have the means by which to assess impact by monitoring CPU usage long term in VMware.

        LZJB does a very quick assessment of a block’s “compressibility” and only compresses if a greater than 12% gain is predicted from compressing the data block. Otherwise the compressor engine (in LZJB’s case) will pass the block straight through unchanged. BTW, it also scales relatively well if you offer it more CPUs.

        Look out for the new LZ4 compression type (write up at http://wiki.illumos.org/display/illumos/LZ4+Compression) being integrated into the future releases of ZFS.

  4. Great stuff, Marco. As we spoke about, you don’t need to change the NFS server/client version on the Nexenta side if you’re using ESXi 5.x. That setting forces the highest mount version possible, but the server still serves and allows mounts as both NFSv3 and v4. Previous versions of ESX could not deal with an NFS server providing v3 and v4, but the latest can.
