The failback setting is enabled by default for vSwitch NIC teaming, and in combination with the default failover detection method of “Link Status only”, switch and switch-port failures can cause host isolation.
I have seen these default settings trip up a lot of people, so I thought it best to spell it out: enable PortFast on all switch interfaces facing VMware infrastructure, or turn off failback, or don’t rely on Link Status detection.
The issue occurs when an interface on the vSwitch carrying the Management port group goes down. With Active/Passive teaming, the management interface fails over to the other interface immediately (assuming the primary fails), and at that point there is no issue.
When the failed interface comes back up without PortFast, failback flips the port group’s traffic back to that interface immediately (Link Status detection is exactly that: it only checks whether the link is up). The physical switch, however, first puts the port through the spanning-tree listening and learning states, learning MAC addresses and watching for loops, so traffic from the management interface is not yet forwarded. How long this lasts depends on the switch’s forward delay setting, but it will always be at least 15 seconds, which means the default host isolation timeout will be triggered.
The easy option is to disable failback, but it is better to use PortFast so that active interfaces in a vSwitch are always used when they are available.
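A minimal sketch of both fixes, assuming a Cisco physical switch and a standard vSwitch named vSwitch0 on a recent ESXi build (interface, vSwitch, and version details are examples; older ESX releases use the vicfg-vswitch tooling instead):

```shell
# Preferred fix: enable PortFast on each switch port facing an ESXi host,
# so the port starts forwarding immediately instead of sitting in the
# spanning-tree listening/learning states (Cisco IOS, interface is an example):
#   interface GigabitEthernet0/1
#    spanning-tree portfast

# Alternative fix: disable failback on the vSwitch from the ESXi shell, so a
# recovered uplink is not used again until the currently active one fails:
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --failback=false

# Verify the resulting teaming policy:
esxcli network vswitch standard policy failover get --vswitch-name=vSwitch0
```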
Thick/flat/monolithic – Space required for the virtual disk is allocated during creation. This type of formatting doesn’t zero out any old data that might be present on the allocated space. A virtual disk described as monolithic and flat consists of two files: one contains the descriptor, and the other is the extent used to store virtual machine data.
zeroedthick (default) – Space required for the virtual disk is allocated during creation. Any data remaining on the physical device is not erased during creation, but will be zeroed out at a later time during virtual machine read and write operations.
eagerzeroedthick – Space required for the virtual disk is allocated at creation time. Unlike with the zeroedthick format, the data remaining on the physical device is zeroed out during creation. Disks in this format might take much longer to create than other types of disks.
Thin/sparse – Thin‐provisioned virtual disk. Unlike with the thick format, space required for the virtual disk is not allocated during creation, but is supplied, zeroed out, on demand at a later time.
RDM/Raw Device Mapping – Virtual compatibility mode raw device mapping: a LUN not formatted with VMFS and dedicated to a VM.
RDMP/Physical Raw Device Mapping – Physical compatibility mode (pass‐through) raw device mapping: a LUN not formatted with VMFS and dedicated to a VM.
Hosted Sparse extent/2gbsparse – A sparse disk with a 2GB maximum extent size. Disks in this format can be used with other (hosted) VMware products such as Workstation or Server, but are not compatible with ESX and must be converted first.
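As an illustration, the formats above correspond to the -d option of vmkfstools on the ESX host; a sketch with example datastore paths and sizes (substitute your own):

```shell
# Create a 10 GB disk in each of the main formats.
# Datastore and VM folder names are examples only.
vmkfstools -c 10G -d zeroedthick      /vmfs/volumes/datastore1/testvm/zt.vmdk
vmkfstools -c 10G -d thin             /vmfs/volumes/datastore1/testvm/thin.vmdk
vmkfstools -c 10G -d eagerzeroedthick /vmfs/volumes/datastore1/testvm/ezt.vmdk

# Clone an existing disk into 2gbsparse format for use with hosted products:
vmkfstools -i /vmfs/volumes/datastore1/testvm/zt.vmdk -d 2gbsparse /vmfs/volumes/datastore1/export/zt-sparse.vmdk
```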
Although some people may find this obvious, it’s not something that everyone has experienced so I thought I should mention this requirement. This is especially true when setting up test labs or greenfield production systems where you are using the vSphere client on a separate workstation.
When first building your vSphere vCenter and adding the hosts, ensure you create DNS A records for your hosts, define the DNS server on your hosts and the vCenter Server, then add your hosts using their fully qualified names. Any tasks that perform operations like adding hosts require the client machine running the vSphere Client to be able to resolve the host names. These tasks will fail even if the vCenter Server can resolve the fully qualified host name.
Therefore, if the client machine can’t resolve the hosts, log in remotely to the vCenter Server to perform the tasks. Once the hosts are added, you can switch back to the client machine. A hosts file won’t help, and the only other alternative is to point your client machine at a DNS server that can resolve the fully qualified names of the hosts.
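As a quick pre-flight sketch, you can confirm DNS resolution from the machine running the vSphere Client before adding hosts. The FQDNs below are examples only; substitute your own:

```shell
# Check that each ESXi host's fully qualified name resolves via DNS from
# this machine, and print one status line per host.
for host in esx01.example.lab esx02.example.lab; do
  if nslookup "$host" > /dev/null 2>&1; then
    echo "OK: $host resolves"
  else
    echo "FAIL: $host does not resolve"
  fi
done
```

Run this on the vSphere Client machine itself, since resolution from the vCenter Server alone is not enough.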