In the test: Nexenta CE with VMware I/O Analyzer


I was wondering what the performance of my home-built Nexenta CE box would be when connected to a single ESX host over Fibre Channel. I started testing with three I/O Analyzer VMs distributed over three LUNs connected via FC. (Note: using a Fibre Channel target with Nexenta CE is not supported.)

512-byte Read Throughput Benchmark Results:

100% Read: 378 MB/sec

512-byte Write IOPS Benchmark Results:

100% Write: 59,187 IOPS


4k Benchmark Results:

100% Sequential Reads: 59,000 IOPS

100% Random Writes: 3,350 IOPS

100% Random Reads: 48,312 IOPS


9 thoughts on “In the test: Nexenta CE with VMware I/O Analyzer”

  1. Your home-built Nexenta box is awesome!!!
    Looking at your 100% Random Writes of 3,350 IOPS: that's what one of our EMC DMX3 arrays used to deliver back in 2006. A big iron with many spindles, and today you have all that 'power' in a PC form factor… OMG 🙂

  2. It’s nice to see people interested in open storage.
    I have worked with ZFS since OpenSolaris 99, so it's been more than 3 years now, and I can't understand how people can still buy traditional arrays…

    But here you will have some trouble with your installation: you should add a second, mirrored ZIL device, because if your SSD crashes, your data loses its integrity.

    And if you have more disks, create multiple raidz vdevs in the same zpool. For data integrity raidz may look like RAID5, but when it comes to performance and IOPS it behaves more like RAID3 (effectively only one disk handles the IOPS), so multiple vdevs are the key to performance with Nexenta (and of course the ZIL if you use NFS).

    • I was thinking about doing exactly that. I have a second SSD lying at home for the ZIL mirror, but I currently have no room for all those disks. I need a new RAID card and an eSATA connection to some kind of 4-6 disk bay.
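For reference, the layout suggested above (multiple raidz vdevs plus a mirrored ZIL) could be created roughly like this. This is only a sketch; the pool name `tank` and all device names are hypothetical placeholders, not taken from the post:

```shell
# Hypothetical sketch: a pool with two raidz vdevs, so IOPS are
# striped across both, plus a mirrored log (ZIL) so a single SSD
# failure does not lose in-flight synchronous writes.
# Device names are examples only; use the c*t*d* names your box reports.
zpool create tank \
      raidz c1t0d0 c1t1d0 c1t2d0 \
      raidz c1t3d0 c1t4d0 c1t5d0 \
      log mirror c2t0d0 c2t1d0   # two SSDs as a mirrored ZIL

# Verify the resulting layout:
zpool status tank
```

Each raidz vdev contributes roughly the random IOPS of a single disk, which is why striping across several vdevs matters more than adding disks to one wide raidz.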

  3. Marco,

    A comment on your random-write figure of 3,350 IOPS: it is actually possible to raise that limit even further; all you would need is a faster ZIL SSD. A colleague and I did an in-depth investigation of ZIL performance with consumer-grade SSDs here: – and what we found is that ZIL write latency (how quickly a random synchronous write can be committed to the SSD) is key to good random-write performance.

    For example, in our test with the Intel 3xx, the average write latency during the tests was around 0.35 ms, and write I/O is blocked until the drive replies that the write has been committed. I suspect that your latency is somewhat similar on the Vertex 4, and while for home use it is still a very impressive figure, in large production environments it won't be enough. You'd have to use a true enterprise-grade, super-fast SSD, or even a RAM-based, battery-backed device, to get even better speeds. Roughly: 0.1 ms = 10k IOPS; 0.01 ms = 100k IOPS.
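The latency figures above follow from simple arithmetic: if every synchronous write blocks until the log device acknowledges it, the best case is one write per round-trip. A minimal sketch of that back-of-the-envelope model (assuming a single outstanding write and no batching):

```python
def sync_write_iops_ceiling(latency_ms: float) -> int:
    """Approximate maximum sync-write IOPS when each write blocks
    until the ZIL device acknowledges the commit."""
    return round(1000 / latency_ms)

# The latencies discussed in the comment above:
for ms in (0.35, 0.1, 0.01):
    print(f"{ms} ms commit latency -> ~{sync_write_iops_ceiling(ms):,} IOPS")
# 0.35 ms -> ~2,857 IOPS; 0.1 ms -> ~10,000 IOPS; 0.01 ms -> ~100,000 IOPS
```

Note that the measured 3,350 IOPS is slightly above the 0.35 ms ceiling, which is plausible because ZFS can batch multiple log records into one device commit.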

    Of course, you could also set sync=disabled on your zpool, but then you are at risk of losing data in a power outage.
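For completeness, the sync property mentioned above is toggled per pool or dataset with `zfs set`; `tank` here is a placeholder pool name, not the author's:

```shell
zfs get sync tank           # default is "standard"
zfs set sync=disabled tank  # faster sync writes, but unsafe on power loss
zfs set sync=standard tank  # revert to the safe default
```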

    • I think that for a homelab, these figures are more than acceptable 🙂
      The systems are in use as a homelab and are not available 24×7; I turn them on when I use them.

      But you got me thinking: a battery-backed RAM SSD. I searched on Google for a while but found nothing.

      What is recommended as a cheap ZIL device? Is there something Nexenta recommends?

      • Marco,

        At this point, there are two "common" RAM-based SSD solutions used in the enterprise: the DDRdrive and the ZeusRAM. Both are outside the home-lab price range, for sure.

        I've not played with the ACARD ANS-9010 at all, but it sounds like it could be quite useful for a home lab. It's essentially a DIY RAM SSD with Flash backup and a battery, and it can be found on eBay pretty cheaply 🙂

  4. Hi there, nice rig.
    I've been using both the ACARD and the DDRdrive. One advantage of the ACARD is that you can split the memory into "two mirrored ZILs" in one device. We have both of those configurations in something called Enterprise-in-a-Box. But if you are a lotto winner, go for the STEC drives 🙂 Please feel free to drop me an email if you're interested in any discussion around Nexenta or "ZIL" parts. /BR

Comments are closed.