As a new Tintri Partner and Tintri Certified System Engineer #101, I decided to take a closer look at Tintri storage. Tintri granted me access to their partner lightning lab, and I got to spend some hours on this amazing piece of storage hardware to play around and see firsthand what it was all about.
Being one of the first companies with VM-aware storage, Tintri is far ahead of the game. VMware is also pushing the adoption of VM-aware storage with its upcoming Virtual Volumes (vVols) API. However, word on the street is that Virtual Volumes is taking vendors a huge amount of time to implement. My advice: if you need VM-aware storage now, don’t wait for Virtual Volumes… I don’t think it will be implemented in any array any time soon.
The real magic of Tintri’s VMstore is that it is aware of every single VM you run. By integrating with vCenter and tracking unique identifiers per vDisk, the system can correlate which blocks make up a VM. This enables the array to do a lot of smart things, like taking snapshots and clones with per-VM granularity. The upcoming 2.0 release will add replication with per-VM granularity as well.
Per VM flash consumption
The Tintri VMstore T540 has about 1.8 TB of usable flash capacity (post-RAID); with inline deduplication and compression, the logical amount of data that can be stored on flash is much higher. Non-VM-aware storage would simply fill up the available flash capacity and then start evicting the least recently used (LRU) or least frequently used (LFU) blocks of data. Usually this happens in fairly large chunks of multiple megabytes, or even gigabytes, at a time.
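The coarse eviction behaviour of non-VM-aware flash tiers can be sketched as a plain LRU cache. This is a minimal illustration of the general technique, not any vendor's implementation; the chunk names and capacity are made up.

```python
from collections import OrderedDict

class LRUFlashCache:
    """Sketch of the non-VM-aware behaviour described above: a flash
    tier that evicts the least recently used chunk once it is full.
    Chunk granularity and capacity are illustrative only."""

    def __init__(self, capacity_chunks):
        self.capacity = capacity_chunks
        self.chunks = OrderedDict()  # chunk_id -> cached data

    def access(self, chunk_id, data=None):
        if chunk_id in self.chunks:
            self.chunks.move_to_end(chunk_id)      # mark as recently used
        else:
            if len(self.chunks) >= self.capacity:
                self.chunks.popitem(last=False)    # evict the LRU chunk
            self.chunks[chunk_id] = data

cache = LRUFlashCache(capacity_chunks=2)
cache.access("chunk-A", b"...")
cache.access("chunk-B", b"...")
cache.access("chunk-A")            # chunk-A is now most recently used
cache.access("chunk-C", b"...")    # evicts chunk-B, the LRU chunk
print(list(cache.chunks))          # ['chunk-A', 'chunk-C']
```

Note that the evicted unit here is a whole chunk, regardless of which VM it belongs to; a hot block of one VM can be evicted because a cold neighbour shares its chunk.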
Tintri is able to very granularly determine the per-VM working set, or skew as some might call it. The working set of a VM is the usually relatively small part of the VM that is really active and responsible for most of the IOs produced. Tintri determines the per-VM working set at an 8KB granularity. This allows the working set of every VM on the system to reside on flash; the non-active parts of VMs are, if necessary, placed on traditional spinning disk.
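Tracking a working set at 8KB granularity could look roughly like the sketch below. The class and method names are hypothetical, and real systems would age out stale blocks over time; the point is only that a VM issuing thousands of IOs can still have a tiny working set if those IOs hit the same few blocks.

```python
from collections import defaultdict

BLOCK_SIZE = 8 * 1024  # working set tracked at 8KB granularity

class WorkingSetTracker:
    """Illustrative sketch, not Tintri's implementation: record which
    8KB blocks each vDisk has touched and size its working set."""

    def __init__(self):
        # vdisk_id -> set of hot 8KB block indices
        self.hot_blocks = defaultdict(set)

    def record_io(self, vdisk_id, byte_offset):
        self.hot_blocks[vdisk_id].add(byte_offset // BLOCK_SIZE)

    def working_set_bytes(self, vdisk_id):
        return len(self.hot_blocks[vdisk_id]) * BLOCK_SIZE

tracker = WorkingSetTracker()
# 1000 IOs, but all within the same 8KB block:
for _ in range(1000):
    tracker.record_io("vm-sql-01", 4096)
tracker.record_io("vm-sql-01", 3 * BLOCK_SIZE)  # one additional block
print(tracker.working_set_bytes("vm-sql-01"))   # 16384 (two 8KB blocks)
```

A working set this small fits comfortably on flash even if the vDisk itself is hundreds of gigabytes.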
Traditional storage works with a single First In, First Out (FIFO) IO queue. As a result, VMs that produce a lot of IOs can consume a disproportionate share of resources and starve VMs doing only a few IOs. A nice example is combining low-latency virtual desktops and high-bandwidth database servers on one storage solution. The virtual desktops would most likely suffer from high latency because the database server is issuing lots of IOs and consuming lots of resources. The end result would be unhappy end users, which is why most vendors advise separate storage for VDI projects.
It’s like going to a grocery store and having to wait in line behind others paying for their weekly groceries while you have only one item; it’s the reason some stores have separate lines for people with ten items or fewer. Tintri uses proportional scheduling for IOs, another unique VM-aware feature: there is a separate IO queue per vDisk, and every queue gets the same priority. Combined with the smart placement of the working set on flash, this results in sub-millisecond latency for all VMs on the appliance, even when mixing virtual desktops and virtual servers.
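The effect of per-vDisk queues can be shown with a toy scheduler. This is a simple round-robin sketch under my own assumptions, much cruder than an actual proportional scheduler, but it demonstrates why a single small IO no longer waits behind a noisy neighbour's backlog.

```python
from collections import deque

class FairIOScheduler:
    """Hypothetical sketch: one FIFO queue per vDisk, serviced
    round-robin so every vDisk gets an equal share of dispatch slots."""

    def __init__(self):
        self.queues = {}  # vdisk_id -> deque of pending IOs

    def submit(self, vdisk_id, io):
        self.queues.setdefault(vdisk_id, deque()).append(io)

    def dispatch(self):
        """One round-robin pass: take at most one IO per vDisk."""
        batch = []
        for vdisk_id, q in self.queues.items():
            if q:
                batch.append((vdisk_id, q.popleft()))
        return batch

sched = FairIOScheduler()
for i in range(100):
    sched.submit("db-server", f"db-io-{i}")  # noisy neighbour
sched.submit("desktop-01", "vdi-io-0")       # single desktop IO
# With one global FIFO the desktop IO would wait behind 100 DB IOs;
# here it is dispatched in the very first pass.
batch = sched.dispatch()
print(batch)
```

The desktop's IO lands in the first dispatch batch alongside one database IO, instead of at position 101 of a shared queue.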
Since Tintri keeps track of performance statistics and resource usage at the per-vDisk level, and the impact on the appliance (CPU usage, network bandwidth usage and required IOs) is known, the device can tell the administrator what percentage of total system resources is being used per vDisk/VM. They call this the performance reserve. Subtracting all performance reserves from the total available performance yields the remaining available performance.
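The arithmetic is straightforward; the sketch below uses made-up VM names and reserve percentages purely to illustrate the subtraction described above.

```python
# Hypothetical per-VM performance reserves, each expressed as a
# percentage of total system resources (CPU, network and IO combined).
reserves = {
    "vdi-pool": 22.0,
    "sql-01": 15.5,
    "exchange": 9.5,
}

used = sum(reserves.values())   # total reserved by all VMs
headroom = 100.0 - used         # performance still available
print(f"Performance used: {used:.1f}%, available: {headroom:.1f}%")
```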
The Tintri dashboard shows the administrator how much performance is being used and how much is still available. This takes the guesswork out of capacity management and helps businesses be confident about how many more VMs they can run on their infrastructure. It also helps with agility and time to market: no complex calculations are needed, the storage just tells you what you need to know.
Replication and remote cloning (per VM!)
The upcoming 2.0 release of Tintri’s operating system will support replication with per-VM granularity. Replication will support one-to-one, bi-directional and many-to-one topologies, with other exciting topologies to be added in subsequent releases. Replicas, like clones, can be made from the current VM state or from an earlier snapshot, and different retention policies will be available on the source and target sides. Snapshots, clones and replicas can be crash-consistent or VM-consistent, and scheduling can be configured on the appliance down to per-VM granularity. With replication enabled, it is possible to create a clone of a VM and have it built on a remote appliance.
If the VM has been replicated to the target appliance before, the last common replica is used and only deltas are transferred. Data gets deduplicated and compressed before it is sent over the WAN, on a global scale: deduplication for replication works both on the appliance and between appliances, even in a many-to-one scenario. The Tintri 2.0 UI will show you some new ubercool metrics to visualize how much network bandwidth you save thanks to dedupe and compression!
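Delta replication against a last common snapshot can be sketched as a diff between two block maps. This is my own simplified model, assuming snapshots are represented as block-index-to-content-hash maps; real systems track change maps rather than re-hashing blocks.

```python
def blocks_to_replicate(source_snapshot, last_common_snapshot):
    """Hypothetical sketch: only blocks that changed (or are new)
    since the last common replica need to cross the WAN."""
    return {
        idx: h for idx, h in source_snapshot.items()
        if last_common_snapshot.get(idx) != h
    }

# Block index -> content hash (shortened for readability):
common  = {0: "aaa", 1: "bbb", 2: "ccc"}
current = {0: "aaa", 1: "xyz", 2: "ccc", 3: "ddd"}

delta = blocks_to_replicate(current, common)
print(delta)  # {1: 'xyz', 3: 'ddd'} — only the changed and new blocks
```

Deduplicating and compressing this delta before transmission shrinks the WAN payload even further, which is what the 2.0 bandwidth-savings metrics visualize.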
Tintri Dashboard, easy and full of valuable information
Every metric on the dashboard is clickable, letting you zoom in for more details. I have personally never seen a dashboard that shows every important metric you need!