VMware Performance and Capacity Management, Second Edition

Master SDDC Operations with proven best practices

Product type: Paperback
Published: March 2016
ISBN-13: 9781785880315
Length: 546 pages
Edition: 2nd Edition

Authors (2): Dua and Rahabok

Physical server versus Virtual Machine


Hopefully, I've driven home the point that a VM is different from a physical server. I'll now list the differences from a management point of view: the following tables show how these differences impact the way you manage your infrastructure. Let's begin with the core properties:

BIOS

Physical server: Every brand and model has a unique BIOS. Even the same model (for example, the HP DL380 Generation 9) can have multiple BIOS versions. The BIOS needs updates and management, often requiring physical access to the data center, which means downtime.

Virtual Machine: The BIOS is standardized in a VM. There is only one type, the VMware motherboard, and it is independent of the ESXi host's motherboard. The VM BIOS needs far fewer updates and much less management; the inventory management system no longer needs a BIOS management module.

Virtual hardware

Physical server: Not applicable.

Virtual Machine: This is a new layer below the BIOS. It needs an update after every vSphere release, and a data center management system needs to be aware of this, as it requires deep knowledge of vSphere. For example, to upgrade the virtual hardware, the VM has to be in the powered-off state.

Drivers

Physical server: Many drivers are loaded and bundled with the OS, and you often need to get the latest drivers from the respective hardware vendors. All these drivers need to be managed. This can be a complex operation, as they vary from model to model and brand to brand. The management tool needs rich functionality, such as checking compatibility, rolling out drivers, rolling them back if there is an issue, and so on.

Virtual Machine: Fewer drivers are loaded with the Guest OS, and some are replaced by the ones provided by VMware Tools. Even with NPIV, the VM does not need an FC HBA driver. VMware Tools itself needs to be managed, with vCenter being the most common management tool.
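The virtual hardware constraint above (the VM must be powered off before its virtual hardware can be upgraded) is easy to encode as a precondition check. The sketch below is plain Python, not the vSphere API; the power-state strings mirror the ones vSphere uses ("poweredOn", "poweredOff", "suspended"), but the function names and the inventory are hypothetical.

```python
# Minimal sketch of the virtual-hardware upgrade precondition described above.
# The power-state strings mirror vSphere's; the functions are illustrative,
# not real API calls.

def can_upgrade_virtual_hw(power_state: str) -> bool:
    """A VM's virtual hardware can only be upgraded while it is powered off."""
    return power_state == "poweredOff"

def plan_virtual_hw_upgrade(vms: dict) -> list:
    """Return the names of VMs that must be powered off before the upgrade."""
    return sorted(name for name, state in vms.items()
                  if not can_upgrade_virtual_hw(state))

inventory = {"web01": "poweredOn", "db01": "poweredOff", "app01": "suspended"}
print(plan_virtual_hw_upgrade(inventory))  # prints ['app01', 'web01']
```

In a real environment, the same check would be done against the VM's runtime power state before scheduling the upgrade as part of a maintenance window.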

How do all these differences impact the hardware upgrade process? Let's take a look:

Physical server: Downtime is required. The upgrade is done offline and is complex. OS reinstallation and updates are often required, making a hardware upgrade a complex project; sometimes it is not even possible without upgrading the application.

Virtual Machine: The upgrade is done online and is simple, because virtualization decouples the application from its hardware dependencies. A VM can be upgraded from five-year-old hardware to new hardware, moving from a local SCSI disk to 10 Gigabit Fibre Channel over Ethernet (FCoE) and from a dual-core to an 18-core CPU. So yes, MS-DOS can run on 10 Gigabit Ethernet and access SSD storage via a PCIe lane. You just need to migrate to the new hardware with vMotion. As a result, the operation is drastically simplified.

In the preceding table, we compared the core properties of a physical server with a VM. Every server needs storage, so let's compare their storage properties:

Physical server: Servers connected to a SAN can see the SAN and the FC fabric. They need HBA drivers, have FC PCI cards, and have multipathing software installed. They normally need an advanced file system or volume manager to provide Redundant Array of Inexpensive Disks (RAID) across local disks. A backup agent and a backup LAN are required in the majority of cases.

Virtual Machine: No VM is connected to the FC fabric or SAN; the VM only sees a local disk. Even with N_Port ID Virtualization (NPIV) and physical Raw Device Mapping (RDM), the VM does not send FC frames. Multipathing is provided by vSphere, transparent to the VM. There is no need for RAID on the local disk: it is one virtual disk, not two, and availability is provided at the hardware layer. A backup agent and backup LAN are not needed in the majority of cases, as backup is done via VMware vStorage APIs for Data Protection (VADP), which back up and restore vSphere VMs; an agent is only required for application-level backup.

There's a big difference in storage. How about network and security? Let's see:

NIC teaming

Physical server: NIC teaming is common. This typically requires two cables per server.

Virtual Machine: NIC teaming is provided by ESXi. The VM is unaware of it and only sees one vNIC.

VLAN

Physical server: The Guest OS is VLAN-aware; the VLAN is configured inside the OS, so moving to another VLAN requires reconfiguration.

Virtual Machine: The VLAN is generally provided by vSphere, not configured inside the Guest OS. This means the VM can be moved from one VLAN to another with no downtime. With network virtualization, the VM moves from a VLAN to a VXLAN.

Antivirus

Physical server: The AV agent is installed in the Guest and can be seen by an attacker. It consumes OS resources, and AV signature updates cause high storage usage.

Virtual Machine: The AV agent runs on the ESXi host as a VM (one per ESXi host) and cannot be seen by an attacker from inside the Guest OS. It consumes minimal Guest OS resources, as the work is offloaded to the ESXi Agent VM. AV signature updates do not require high Input/Output Operations Per Second (IOPS) inside the Guest OS, and the total IOPS at the ESXi host level is also lower, as the update is not done per VM.
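The IOPS claim in the antivirus row comes down to simple arithmetic: with in-Guest agents, every VM repeats the signature update; with an offloaded Agent VM, the update happens once per host. The figures below are made up for illustration; only the structure of the comparison comes from the text above.

```python
# Back-of-the-envelope comparison of AV signature-update I/O: one agent per
# VM versus one offloaded Agent VM per ESXi host. All figures are illustrative.

def per_vm_update_iops(vms_per_host: int, iops_per_update: int) -> int:
    """In-Guest agents: every VM performs its own signature update."""
    return vms_per_host * iops_per_update

def offloaded_update_iops(iops_per_update: int) -> int:
    """Agent VM: the update happens once per host, regardless of VM count."""
    return iops_per_update

vms, iops = 30, 200  # hypothetical: 30 VMs per host, 200 IOPS per update
print(per_vm_update_iops(vms, iops))  # prints 6000 (IOPS hitting the host)
print(offloaded_update_iops(iops))    # prints 200, a 30x reduction
```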

Finally, let's take a look at the impact on management. As can be seen here, even the way we manage a server changes once it is converted into a VM:

Monitoring approach

Physical server: An agent is commonly deployed, and it is typical for a server to have multiple agents. In-Guest counters are accurate, as the OS can see the physical hardware. A physical server averages around 5 percent CPU utilization due to multicore chips, so there is little need to monitor it closely.

Virtual Machine: An agent is typically not deployed, although certain areas, such as application and Guest OS monitoring, are still best served by an agent. The key in-Guest counters are not accurate, as the Guest OS does not see the physical hardware. A VM averages around 50 percent CPU utilization because it is rightsized, roughly 10 times higher than a physical server, so there is a need to monitor it closely, especially when physical resources are oversubscribed. Capacity management becomes a discipline in itself.

Availability approach

Physical server: HA is provided by clusterware, such as Microsoft Windows Server Failover Clustering (WSFC) and Veritas Cluster Server (VCS). Clusterware tends to be complex and expensive. Cloning a physical server is a complex task; it requires the boot drive to be on the SAN or LAN, which is not typical. A snapshot is rarely made, due to cost and complexity; only very large IT departments perform physical server snapshots.

Virtual Machine: HA is a built-in core component of vSphere. From what I see, most clustered physical servers end up as just a single VM, since vSphere HA is good enough. Cloning can be done easily, and it can even be done live. The drawback is that the clones become a new area of management. Snapshots can be made easily; in fact, one is taken every time a backup runs. Snapshots also become a new area of management.

Company asset

Physical server: The physical server is a company asset with book value in the accounting system. It needs proper asset management, as components vary among servers, and an annual stock-take process is required.

Virtual Machine: A VM is not an asset, as it has no accounting value. It is like a document: technically, it is a folder with files in it. A stock-take process is no longer required, as a VM cannot exist outside vSphere.
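The monitoring row above argues that oversubscription is what makes capacity management a discipline in itself: the same average utilization that is harmless on a dedicated server can saturate an oversubscribed host. A minimal sketch of that arithmetic, with made-up numbers and hypothetical function names:

```python
# Sketch of the oversubscription arithmetic behind the monitoring comparison:
# when vCPUs are oversubscribed against physical cores, average VM CPU
# utilization translates into much higher host utilization. Illustrative only.

def overcommit_ratio(total_vcpus: int, physical_cores: int) -> float:
    """vCPU-to-physical-core oversubscription ratio for a host or cluster."""
    return total_vcpus / physical_cores

def expected_host_utilization(total_vcpus: int, physical_cores: int,
                              avg_vm_cpu_util: float) -> float:
    """Approximate host CPU utilization, capped at 100 percent."""
    demand = total_vcpus * avg_vm_cpu_util
    return min(demand / physical_cores, 1.0)

cores = 36   # e.g. a dual-socket host with 18-core CPUs
vcpus = 108  # a 3:1 oversubscription ratio
print(overcommit_ratio(vcpus, cores))                 # prints 3.0
print(expected_host_utilization(vcpus, cores, 0.50))  # prints 1.0: saturated
print(expected_host_utilization(vcpus, cores, 0.05))  # roughly 0.15
```

The last two lines show the point of the comparison: rightsized VMs averaging 50 percent utilization would saturate this 3:1 host, while physical-server-style 5 percent averages would leave it nearly idle, which is why the oversubscribed case has to be monitored closely.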
