
Well, everything in a virtualized environment is automated and linked together with pre-coded instructions derived from any number of algorithms, designed to perform the millions of permutations and combinations a human could conceive. Friends! You know the irony: we build technology to eliminate ourselves. This is the price we pay to design an automated system that takes care of itself with minimal intervention from us. Sometimes I think that much of the downtime we have all experienced in the past can be blamed on a person who did not do his or her job carefully. Otherwise, we would never have thought of a foolproof system that takes care of itself; in fact, an automated system that fixes itself before we even come to know about it. An enormous amount of energy, resources, effort, and technology goes into the algorithms running, the information documented and processed, the security checks, failovers, swapping of machines, and much more, just to provide us an environment where a single keystroke does not go to waste, and where a frustrated devil does not come out of us to hit the monitor or keyboard over the smallest glitch that could cost us a million dollars' worth of loss at work. It could be a trigger to ‘an opportunity cost is an opportunity……?’

Basically, fault tolerance provides operational continuity and high levels of uptime for an information technology infrastructure environment, with simplicity and at a low cost. Let's try to understand how it works, so all of us can get the hang of it. It works with existing VMware High Availability (HA) or Distributed Resource Scheduler (DRS) clusters and can simply be turned on or off per virtual machine. When applications require operational continuity during critical periods, such as month-end or quarter-end for financial applications, the fault tolerance feature can be turned on with the click of a button to provide extra assurance. The operational simplicity of this fault tolerance component, embedded in vSphere, is a big life saver and a cost saver at times.
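To make "turn it on with the click of a button" concrete, here is a minimal sketch of the same toggle done from a script, using pyVmomi, the Python SDK for the vSphere API. The vCenter address, the credentials, and the VM name fin-app-01 are placeholders of my own, not anything from the post; treat this as a sketch of the idea rather than a production script.

    # Minimal sketch: toggling Fault Tolerance on one VM with pyVmomi.
    # Host, credentials, and the VM name "fin-app-01" are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()          # lab use only
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="secret", sslContext=ctx)
    content = si.RetrieveContent()

    # Locate the VM by name with a container view.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "fin-app-01")

    # Turn FT on: vCenter spins up the secondary copy on another host.
    WaitForTask(vm.CreateSecondaryVM_Task())

    # ... the critical month-end window runs here ...

    # Turn FT off again once the critical period is over.
    WaitForTask(vm.TurnOffFaultToleranceForVM_Task())
    Disconnect(si)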
High availability is commonly understood as a method to ensure a resource is always available. But the fact of the matter is that the resource may still suffer a few minor downtimes. For instance, Hyper-V also has a high availability feature, yet in the event a host fails, the guest operating systems on it simply stop; there is no time to migrate their running state to another host, so they are restarted elsewhere, which results in a minor downtime. Irrespective of the technology owner, it is the same scenario with VMware High Availability (HA). Even VMware's vMotion capability cannot help here, because the host stops at that very moment and leaves us with no live memory from which to move the guest OS. Thus, we lose the in-memory application state with high availability.
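A toy sketch in plain Python (not anything VMware ships) may help show why an HA restart loses in-memory state: the replacement guest boots from a clean slate, so anything that lived only in memory is gone.

    # Toy illustration, not VMware code: the "VM" is just a counter.
    class ToyVM:
        def __init__(self):
            self.memory = {"requests_served": 0}   # in-memory state

        def work(self):
            self.memory["requests_served"] += 1

    primary = ToyVM()
    for _ in range(1000):
        primary.work()

    # Host crash: HA boots a *fresh* copy on another host.
    restarted = ToyVM()
    print(restarted.memory["requests_served"])     # 0 -- state is gone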
On the other hand, fault tolerance means we don't lose the in-memory application state in the event of a failure such as a host crash. Seen this way, fault tolerance is much stronger than high availability in a virtual environment. But it forces us to maintain two copies of a virtual machine, each on a separate host. Whenever the state of memory or device status changes on the primary host, these changes are automatically recorded and replayed simultaneously on the secondary copy of the VM.
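Continuing the toy sketch from above, fault tolerance can be pictured as a deterministic record-and-replay loop: every change the primary records is replayed on the secondary, so at any instant the secondary's memory matches the primary's. This is only an illustration of the idea; the real mechanism is VMware's lockstep technology, not application-level replay.

    # Toy illustration, not VMware code: record on the primary,
    # replay on the secondary, so a crash loses nothing replayed.
    class ToyVM:
        def __init__(self):
            self.memory = {"requests_served": 0}

        def apply(self, event):
            # Deterministic replay: the same event log yields the
            # same memory state on the secondary as on the primary.
            self.memory["requests_served"] += event

    primary, secondary = ToyVM(), ToyVM()
    for _ in range(1000):
        event = 1                 # primary records each change...
        primary.apply(event)
        secondary.apply(event)    # ...and it is replayed on the secondary

    # Host crash: the secondary takes over with identical memory.
    print(secondary.memory["requests_served"])    # 1000 -- nothing lost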
Currently, only VMware vSphere has this fault tolerance capability, and it supports only a single logical processor on the VM. Fault tolerance also has very high network requirements, but it provides a fault-tolerant solution with no downtime, even if a host fails.
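Given that single-vCPU restriction, a sensible script would check the VM's hardware before trying to enable FT. Here is a hedged continuation of the earlier pyVmomi sketch; it assumes the session and the vm object found there, and the VM name remains a placeholder.

    # Sketch: pre-check the single-vCPU restriction before enabling FT.
    # Reuses `vm` and WaitForTask from the earlier pyVmomi sketch.
    cpus = vm.config.hardware.numCPU
    if cpus > 1:
        print(f"{vm.name}: cannot enable FT, VM has {cpus} vCPUs "
              f"(only a single logical processor is supported)")
    else:
        WaitForTask(vm.CreateSecondaryVM_Task())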