Watch out with VSAN FTT=1 and Maintenance Windows… Lessons Learned the Hard Way


A couple of weeks back I had a less-than-fantastic Friday evening. What was planned as a one-hour maintenance window to update some filter drivers on some of our ESXi 5.5 hosts turned into a not-so-pleasant Friday the 13th…

In our company we run VSAN for some workloads, and I must say that I love the simplicity of the product. There is not much to configure, and if everything is set up right, with the right hardware, you end up with a bunch of reasonably fast terabytes of storage on your cluster.

But back to my maintenance window. We currently run VSAN with FTT=1 (Failures To Tolerate = 1).

We have done this many times: put a node in maintenance mode, wait for all VMs to be moved off the node and for it to enter maintenance mode, then update and reboot. I have a nice script on my vCenter that checks whether there is any data left to resync before we put the next node in maintenance mode; a sketch of that check follows below.
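For reference, here is roughly what such a resync check can look like. This is a minimal pyVmomi sketch, not my actual script: the vCenter hostname and credentials are placeholders, and the exact JSON layout returned by QuerySyncingVsanObjects (including the "dom_objects" key) is an assumption that may differ per vSphere release:

```python
# Minimal sketch: before putting the next node in maintenance mode, ask a
# host's VsanInternalSystem how many objects are still resyncing and wait
# until that count hits zero. Hostname/credentials below are placeholders.
import json
import ssl
import time

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; use proper certs in prod
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="******", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    host = view.view[0]  # any connected host in the VSAN cluster will do
    view.Destroy()

    while True:
        # QuerySyncingVsanObjects returns a JSON string describing objects
        # whose components are still being resynchronized.
        raw = host.configManager.vsanInternalSystem.QuerySyncingVsanObjects(uuids=[])
        syncing = json.loads(raw).get("dom_objects", {})  # key name: assumption
        if not syncing:
            print("No objects resyncing - safe to start the next host.")
            break
        print("%d objects still resyncing, waiting..." % len(syncing))
        time.sleep(60)
finally:
    Disconnect(si)
```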

So this is where things turned into a nightmare: while I was in the middle of rebooting the last of the 5 nodes, a disk failed on my 4th node…

This is how that looks in vCenter: one disk Absent, one disk Degraded. Your RAID 1 is screwed:

[vCenter screenshot, 11/13/2015: one component Absent, one Degraded]

But this struck me: I had never thought about this scenario. I knew that with FTT=1 you can handle only one host outage, but I never considered the short exposure window while a host is rebooting.

It makes sense: while a single host is rebooting, the data on that host is no longer in sync with its RAID 1 partner. So when that RAID 1 partner breaks down, the object has no intact replica left and you're in big trouble.

Thank God we could easily restore the data of the couple of VMs living on those two hosts, so nothing was lost. But this sure was a lesson learned the hard way.

The only way to prevent this particular scenario is to do a full data migration every time you have a maintenance window while running VSAN with FTT=1. The sketch below shows what that looks like through the API.
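This is a sketch of entering maintenance mode with full data evacuation via pyVmomi, assuming `host` is a vim.HostSystem obtained as in the previous snippet. The 'evacuateAllData' decommission mode tells VSAN to rebuild all components elsewhere before the host goes down, so FTT=1 protection is kept during the reboot:

```python
# Enter maintenance mode with a VSAN decommission mode of 'evacuateAllData'
# (the "Full data migration" option in the Web Client), assuming `host` is
# a vim.HostSystem object obtained as in the previous snippet.
from pyVmomi import vim

spec = vim.host.MaintenanceSpec(
    vsanMode=vim.vsan.host.DecommissionMode(objectAction="evacuateAllData"))

task = host.EnterMaintenanceMode_Task(
    timeout=0,                    # no timeout; a full evacuation can take hours
    evacuatePoweredOffVms=True,   # also move powered-off VMs off the host
    maintenanceSpec=spec)
```

The trade-off is time and space: a full evacuation can take hours and needs enough spare capacity on the remaining nodes, but it is the only option that keeps you protected while the host is down.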

Also, this scenario applies to every hyper-converged vendor that runs a RAID 5 disk configuration across servers.

Needless to say, we are going to reconfigure this setup to support FTT=2. (Thanks to Rawlinson for pointing out that FTT=2 requires only 5 nodes; I thought it was 6…)

I borrowed the table below from Duncan's blog to make things more visible: how many hosts do I need at a minimum? How many mirror copies and witnesses will be created? And how many hosts will I need when I want to take maintenance mode into consideration?

| Number of failures (FTT) | Mirror copies | Witnesses | Min. hosts | Hosts incl. maintenance |
|---|---|---|---|---|
| 0 | 1 | 0 | 1 host | n/a |
| 1 | 2 | 1 | 3 hosts | 4 hosts |
| 2 | 3 | 2 | 5 hosts | 6 hosts |
| 3 | 4 | 3 | 7 hosts | 8 hosts |
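The pattern behind the table is simple: FTT=n means n+1 mirror copies and n witnesses, which must all live on different hosts, so you need at least 2n+1 hosts, plus one spare if you want to keep that protection during maintenance. A few lines of Python reproduce it:

```python
# Reproduce the table above: for FTT=n, VSAN creates n+1 mirror copies and
# n witnesses, all on distinct hosts -> 2n+1 hosts minimum.
for ftt in range(4):
    copies = ftt + 1
    witnesses = ftt
    min_hosts = 2 * ftt + 1                              # copies + witnesses
    maint = f"{min_hosts + 1} hosts" if ftt else "n/a"   # one spare for maintenance
    print(f"FTT={ftt}: {copies} copies, {witnesses} witnesses, "
          f"min {min_hosts} hosts, with maintenance: {maint}")
```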

 

Thanks for reading

 

About Marco Broeken

Marco Broeken is the author of this blog and the owner of vSpecialist Consulting, and has 20 years of experience in IT. Marco was awarded vExpert status every year from 2011 through 2018.

Comments

  1. Marco, one quick correction on your post: the minimum requirement for FTT=2 is 5 nodes, not 6.
