My take on Tintri, VM-Aware Storage


Technical Deep Dive on the Tintri architecture

This video was recorded during Storage Field Day 2 where I was actually present as a delegate.

Make sure you watch:

  • 3:35 VMware bottlenecks and the solution: Proportional Scheduling
  • 8:30 How much flash should we allocate per VM
  • 15:00 Tintri does pre-fetch VM swap disks

Tintri Performance Troubleshooting Demo


While I had access to Tintri’s lab environment in the US for a few days, I was curious about what I could observe from a performance perspective.
Please note: the Tintri Lightning lab was not built for performance testing. It has only one ESX host and will not give a representative picture of what the array can do.

Due to the lab setup, I’m not going to post specific results in this article, but I will say that I was impressed with some of the numbers I saw even in a non-performance optimized environment.  Running a few IOmeter tests, I was able to see over 30,000 IOPS at low latency (less than 1.5ms) for random 4K reads.  Other tests showed similarly strong performance with low latency.
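To put those IOmeter numbers in context, here is a quick back-of-the-envelope conversion from IOPS at a given block size to raw throughput. This is my own arithmetic for illustration, not a Tintri-published figure:

```python
def throughput_mbps(iops: float, block_size_bytes: int) -> float:
    """Convert an IOPS figure at a fixed block size to MB/s (decimal megabytes)."""
    return iops * block_size_bytes / 1_000_000

# ~30,000 random 4K read IOPS works out to roughly 122.9 MB/s:
print(round(throughput_mbps(30_000, 4096), 1))  # 122.9
```

It's a useful sanity check: small-block random workloads are IOPS- and latency-bound, so the modest MB/s figure is expected and says nothing negative about the array.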

At some point in the future, I hope to have access to an environment more specifically set up for performance testing and I’ll prepare a more detailed performance analysis to post.

My Thoughts

Tintri VMstore is way ahead of the competition in being VM-Aware. I am very impressed with the dashboard and the low latency the array brings due to the improved queueing method (no more noisy neighbors!).
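To illustrate why per-VM proportional scheduling kills the noisy-neighbor problem, here is a minimal weighted round-robin sketch of the general idea. This is my own toy model, not Tintri's actual implementation; the function and parameter names are hypothetical:

```python
from collections import deque

def proportional_schedule(queues, weights, budget):
    """Drain per-VM I/O queues in proportion to their weights.

    queues:  dict of vm_name -> deque of pending I/O request ids
    weights: dict of vm_name -> share weight (higher = larger slice)
    budget:  total number of I/Os to dispatch in this scheduling round
    Returns the list of request ids dispatched this round.
    """
    total = sum(w for vm, w in weights.items() if queues[vm])
    dispatched = []
    for vm, q in queues.items():
        if not q or total == 0:
            continue
        # Each VM gets a slice of the round proportional to its weight,
        # so one VM flooding its queue cannot starve the others.
        share = max(1, budget * weights[vm] // total)
        for _ in range(min(share, len(q))):
            dispatched.append(q.popleft())
    return dispatched

# A "noisy" VM with 100 queued I/Os and a "quiet" VM with 2:
queues = {"noisy": deque(range(100)), "quiet": deque(["q0", "q1"])}
weights = {"noisy": 1, "quiet": 1}
print(proportional_schedule(queues, weights, budget=10))
```

With a single shared FIFO queue, the quiet VM's two requests would sit behind a hundred others; with per-VM slices, both of its requests are served in the same round.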

I’m hoping more vendors will implement some kind of per-VM awareness on their arrays. VMware is already providing them guidance on how to do it (through vVols), but until they do, Tintri is high on my list of storage vendors for virtual environments, especially for VDI deployments.

I am looking forward to seeing how this great product continues to evolve and becomes more scalable and mature.