Building my Nexenta VM using (NFS) best practices


I am currently using NexentaStor for my home lab. In my previous post I explained how I built my low-power 64GB home lab. I installed NexentaStor Community Edition on a virtual machine with the following specs:

  • 2 vCPUs
  • 8GB RAM
  • 16GB system disk on local SSD
  • VMXNET3 network adapter for better performance
  • 7 disks attached as RDMs (4 SATA + 3 SSD)

If you have a VT-d capable CPU, you might want to pass your SATA controller through directly to your Nexenta VM.

Download the NexentaStor Community Edition from here:

My NexentaStor uses SSDs as cache devices (L2ARC read cache and a ZIL/SLOG write log) for better performance.

  • NexentaStor uses physical HDDs as volumes for capacity. A volume can be a single disk or multiple disks.
  • Nexenta volumes are comprised of one or more virtual devices (VDEVs) to address physical disk failures. A VDEV can be a mirror or a RAID-Z level (RAID-Z, RAID-Z2, or RAID-Z3) configuration.
  • VDEVs are read and written to as a physical stripe. Therefore, using more VDEVs creates a wider stripe and improves performance. NexentaStor automatically distributes the load across all devices and reads from wherever the data resides.
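To illustrate how adding VDEVs widens the stripe, here is a sketch from the appliance shell. The pool name tank and the c#t#d# device names are placeholders for your own disks:

```shell
# A pool with one mirror VDEV...
zpool create tank mirror c1t0d0 c1t1d0

# ...gets a wider stripe (and more throughput) by adding a second VDEV;
# ZFS spreads new writes across both mirrors automatically.
zpool add tank mirror c1t2d0 c1t3d0

# Per-VDEV load and layout are visible here:
zpool iostat -v tank
```

Note that VDEVs cannot be removed once added, so plan the layout before creating the pool.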

Network Recommendations

The following are best practices for your network:

  • Separate the backend storage network from any client traffic (by using VLANs).
  • Separate the internal network from your externally-accessible networks.
  • Use Jumbo Frames (MTU 9000) on your network end to end (ESX vSwitch and VMkernel – Physical switch – Nexenta network interface).
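As a sketch of the end-to-end MTU configuration described above, assuming the storage vSwitch is vSwitch1, the storage VMkernel port is vmk1, the Nexenta link is vmxnet3s0, and 192.168.10.10 is the Nexenta storage IP (all placeholders for your own setup):

```shell
# ESXi side: raise MTU on the storage vSwitch and VMkernel port
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
esxcli network ip interface set --interface-name=vmk1 --mtu=9000

# Nexenta (illumos) side: raise MTU on the storage link
dladm set-linkprop -p mtu=9000 vmxnet3s0

# Verify end to end with a non-fragmenting jumbo ping from the ESXi shell
# (8972 = 9000 bytes minus 28 bytes of IP/ICMP headers)
vmkping -d -s 8972 192.168.10.10
```

If the vmkping fails, one of the hops (vSwitch, physical switch port, or Nexenta interface) is still at MTU 1500.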

Create Dataset using the GUI

First, you need to create your ZFS volume:

  • Click Data Management > Data Sets.
  • In the Volumes pane, click Create.
  • Select the disks to assign to the volume
  • Select an appropriate Redundancy Type.
  • In this example screenshot I used a stripe of 4 SATA disks (for testing only, NO REDUNDANCY). I recommend mirroring all your SATA disks to get both performance and redundancy.
  • In my setup I mirrored my log disks (write cache) and used a single cache disk (read cache). A cache disk only serves reads, so it does not need to be mirrored.
  • The picture below shows the minimal, good, better and best options to go for:
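For reference, the CLI equivalent of the layout I describe above (two mirrored SATA pairs for capacity, a mirrored SSD log, and a single SSD cache disk; pool and device names are placeholders) would look like:

```shell
# Two mirrored SATA pairs for capacity, a mirrored SSD log (write cache),
# and a single SSD cache device (read cache, no mirror needed).
zpool create tank \
  mirror c1t0d0 c1t1d0 \
  mirror c1t2d0 c1t3d0 \
  log mirror c2t0d0 c2t1d0 \
  cache c2t2d0
```

That accounts for all 7 RDM disks in my setup: 4 SATA for data, 2 SSDs for the mirrored log, 1 SSD for cache.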


NexentaStor NFS Folder Options

When you create the NFS folder on the Nexenta Appliance, set the following options:

  • Click Data Management > Shares
  • Click Create and create a new share
  • Record size: 8K – 16K  (default is 128K)
  • Deduplication: OFF
  • Set compression to ON; it has minimal CPU impact. Compression ON uses LZJB compression.
    Don’t use gzip. It’s not a great fit, due to threading issues in the implementation, although there are some interesting new integrations coming in more recent ZFS releases.


  • You can disable Sync to speed up your NFS server, but this is not recommended. If you have a power failure you can lose acknowledged in-flight writes and corrupt the guest filesystems on your volumes / shares.
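The same share options can also be set from the appliance shell. The dataset name tank/nfs01 is a placeholder for your own folder:

```shell
# 8K-16K record size suits VM I/O patterns better than the 128K default
zfs set recordsize=8K tank/nfs01

# Deduplication off, lightweight LZJB compression on
zfs set dedup=off tank/nfs01
zfs set compression=on tank/nfs01

# Leave sync at its default (standard); disabling it risks losing
# acknowledged writes on power failure. Check the current value with:
zfs get sync tank/nfs01
```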


Set NFS version

  • Click Settings / Misc. Services.
  • In the NFS Server pane, click Configure
  • Select the Service State option to enable NFS
  • Type 3 to set the Server version (the default is 4). vSphere still uses NFS v3.
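Assuming you have shell access to the appliance, the equivalent setting on an illumos-based system can be made with sharectl:

```shell
# Cap the NFS server at version 3, which is what vSphere mounts
sharectl set -p server_versmax=3 nfs

# Confirm the change
sharectl get -p server_versmax nfs
```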

Enabling VAAI for NFS!

I wrote about this earlier. You can use VAAI for NFS on your virtual or physical Nexenta box. I found the beta VAAI plugin on a USB stick I got from Nexenta at VMworld Barcelona. Check it out here.


Add NFS share to your ESXi box

  • Click Data Management > Shares
  • Click on the share you created and copy the mountpoint “/volumes/zfs/nfs01” to your clipboard
  • Use the mount point to mount the share to your ESXi box
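From the ESXi shell this mount can also be scripted (the hostname nexenta01 and the datastore name nfs01 are placeholders for your own values):

```shell
# Mount the NexentaStor NFS export as a datastore
esxcli storage nfs add --host=nexenta01 \
  --share=/volumes/zfs/nfs01 --volume-name=nfs01

# List mounted NFS datastores to confirm
esxcli storage nfs list
```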

More Resources

For more Best Practices for Running vSphere on NFS Storage, read this excellent whitepaper:

Using NexentaStor for VMware Customers:

That’s about it. If you have any remarks or questions, or you experience issues, please respond to this post below:


9 thoughts on “Building my Nexenta VM using (NFS) best practices”

  1. Great stuff, Marco. As we spoke about, you don’t need to change the NFS server/client version on the Nexenta side if you’re using ESXi 5.x. That setting forces the highest mount version possible, but it still serves and allows mounts as both NFSv3 and v4. Previous version of ESX could not deal with an NFS server providing v3 and v4, but the latest can.

  2. I’d leave compression enabled (which is the default in recent releases). LZJB compression is pretty efficient, and unless you know explicitly that your workload precludes its use effectively, it’s a win.

    CPUs are crazy fast compared to disk/network cycles, so a few CPU cycles leading to smaller IOs across inherently constrained IO paths is well worth it. Remember that the process is pretty asymmetrical anyway. In many years of ZFS deployments it’s rarely proved to be an issue, and much of the time it’s not just a space saver but an IO accelerator, hence the choice we made to make it the default and have those few with more pertinent knowledge override and disable it.

    BTW, gzip is not at this time a great fit, due to threading issues in the implementation, although there are some interesting new integrations coming in more recent ZFS releases when Nexenta catch up.



      • Understood, but even with your “minimal” CPU power, I’d enable it … it still far outstrips the network IO capability, etc. Admittedly you are virtualising Nexenta, so CPU cycles are more controlled, but that also means you have the means by which to assess impact by monitoring CPU usage long term in VMware.

        LZJB does a very quick assessment of a block’s compressibility and only compresses if there is predicted to be a greater than 12% gain from compressing the data block. Otherwise the compressor engine (in LZJB’s case) will pass the block straight through unchanged. BTW, it also scales relatively well if you were to offer more CPUs to it.

        Look out for the new LZ4 compression type being integrated into future releases of ZFS.

  3. Hi,

    You mention that it’s best to put the storage traffic on another network, or at least VLAN it. As you have the Intel NUCs and can’t do that, how have you got round this, and have you found any other issues with the limitations on the NUCs?

    I am looking to get a few NUCs myself and wanted to know, as I am hoping to use them as a lab for my VCP 5 cert.


    • Hi Aaran,

      I am using a different storage VLAN for my Intel NUCs, but it runs on the same ethernet port.
      You need a VLAN-capable switch: first create a VLAN on the switch, add a portgroup on that VLAN to your ESXi host, and redirect your storage traffic to that portgroup.

      I’m still using the native VLAN for management, and created multiple VLANs (for vMotion / vCloud Director etc.)

      • That is an excellent blog post. It addresses not using port channels on the vSphere host side, which makes sense to me.

        The article mentions setting up each port on the vSphere host in a different subnet and VLAN, and then using load-based distribution algorithms to allow more than 1G of traffic out of the virtual machines. I guess that is less complicated.

        However, I don’t see how it applies to setting up port channels on the Nexenta box. The shared storage device by design has many different hosts accessing it simultaneously, so it has to make maximum use of its network connections. Load balancing over the LACP link is controlled on the network side by the switch, and on the Nexenta box by the Solaris networking driver.

        Furthermore, by having two links share a single IP address, it provides a redundancy of connection that eliminates a single point of failure.

Comments are closed.