How-To Tutorials - Virtualization

Deploying New Hosts with vCenter

Packt
04 Jun 2015
8 min read
In this article by Konstantin Kuminsky, author of the book VMware vCenter Cookbook, we will review some options and features available in vCenter to improve an administrator's efficiency.

Deploying new hosts faster with scripted installation

Scripted installation is an alternative way to deploy ESXi hosts. It can be used when several hosts need to be deployed or upgraded. The installation script contains ESXi settings and can be accessed by a host during the ESXi boot from the following locations:

- FTP
- HTTP or HTTPS
- NFS
- USB flash drive or CD-ROM

How to do it...

The following sections describe the process of creating an installation script and using it to boot the ESXi host.

Creating an installation script

An installation script contains installation options for ESXi. It's a text file with the .cfg extension. The best way to create an installation script is to take the default script supplied with the ESXi installer and modify it. The default script is called ks.cfg and is located in the /etc/vmware/weasel/ folder. Commands that can be modified include, but are not limited to:

- The install, installorupgrade, or upgrade commands, which define the ESXi disk: the location where the installation or upgrade will be placed. The available options are:
  - --disk: The disk name, which can be specified as a path (/vmfs/devices/disks/vmhbaX:X:X), a VML name (vml.xxxxxxxx), or a LUN UID (vmkLUN_UID)
  - --overwritevmfs: Wipes the existing datastore
  - --preservevmfs: Keeps the existing datastore
  - --novmfsondisk: Prevents a new partition from being created
- The network command, which specifies the network settings. Most of the available options are self-explanatory:
  - --bootproto=[dhcp|static]
  - --device: MAC address of the NIC to use
  - --ip
  - --gateway
  - --nameserver
  - --netmask
  - --hostname
  - --vlanid

A full list of installation and upgrade commands can be found in the vSphere 5 documentation on the VMware website at https://www.vmware.com/support/pubs/.

Use the installation script to configure ESXi

In order to use the installation script, you will need to use additional ESXi boot options:

1. Boot a host from the ESXi installation disk.
2. When the ESXi installer screen appears, press Shift + O to provide additional boot options.
3. In the command prompt, type the following:

ks=<location of the script> <additional boot options>

The valid locations are as follows:

- ks=cdrom:/path
- ks=file://path
- ks=protocol://path
- ks=usb:/path

The additional options available are as follows:

- gateway: The default gateway
- ip: The IP address
- nameserver: The DNS server
- netmask: The subnet mask
- vlanid: The VLAN ID
- netdevice: The MAC address of the NIC to use
- bootif: The MAC address of the NIC to use, in PXELINUX format

For example, for an HTTP location, the command will look like this:

ks=http://XX.XX.XX.XX/scripts/ks-v1.cfg nameserver=XX.XX.XX.XX ip=XX.XX.XX.XX netmask=255.255.255.0 gateway=XX.XX.XX.XX

(A hypothetical end-to-end example script is sketched at the end of this recipe, after the Auto Deploy overview.)

Deploying new hosts faster with Auto Deploy

vSphere Auto Deploy is VMware's solution to simplify the deployment of large numbers of ESXi hosts. It is one of the available options for ESXi deployment, along with interactive and scripted installation. The main difference of Auto Deploy compared to other deployment options is that the ESXi configuration is not stored on the host's disk. Instead, it's managed with image and host profiles by the Auto Deploy server.
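As promised above, here is a minimal sketch of what a complete installation script could look like once the options from the scripted-installation section are combined. None of these values come from the book: the disk path, password, addresses, and hostname are placeholders, and accepteula, rootpw, and reboot are standard kickstart directives not covered in this excerpt, so adjust everything for your environment.

# Hypothetical ks.cfg sketch - every value below is a placeholder
accepteula
# rootpw sets the root password of the installed host
rootpw MyS3cretPassw0rd
# wipe the target disk and install there; the disk path follows the format shown above
install --disk=/vmfs/devices/disks/vmhba1:0:0 --overwritevmfs
# static network configuration for the new host
network --bootproto=static --device=00:50:56:01:02:03 --ip=192.168.10.51 --netmask=255.255.255.0 --gateway=192.168.10.1 --nameserver=192.168.10.10 --hostname=esxi01.lab.local --vlanid=10
# reboot the host when the scripted installation completes
reboot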
Getting ready

Before using Auto Deploy, confirm the following:

- The Auto Deploy server is installed and registered with vCenter. It can be installed as a standalone server or as part of the vCenter installation.
- A DHCP server exists in the environment and is configured to point to the TFTP server for PXE boot (option 66), with the boot filename undionly.kpxe.vmw-hardwired.
- The TFTP server that will be used for PXE boot exists and is configured properly.
- The machine where the Auto Deploy cmdlets will run has the following installed:
  - Microsoft .NET 2.0 or later
  - PowerShell 2.0 or later
  - PowerCLI, including the Auto Deploy cmdlets

New hosts that will be provisioned with Auto Deploy must:

- Meet the hardware requirements for ESXi 5
- Have network connectivity to vCenter, preferably 1 Gbps or higher
- Have PXE boot enabled

How to do it...

Once the prerequisites are met, the following steps are required to start deploying hosts.

Configuring the TFTP server

In order to configure the TFTP server with the correct boot image for ESXi, execute the following steps:

1. In vCenter, go to Home | Auto Deploy.
2. Switch to the Administration tab.
3. From the Auto Deploy page, click on Download TFTP Boot ZIP.
4. Download the file and unzip it to the appropriate folder on the TFTP server.

Creating an image profile

Image profiles are created using Image Builder PowerCLI cmdlets. Image Builder requires PowerCLI and can be installed on a machine that's used to run administrative tasks. It doesn't have to be the vCenter server or the Auto Deploy server; the only requirement for this machine is that it must have access to the software depot—a file server that stores image profiles.

Image profiles can be created from scratch or by cloning an existing profile. The following steps outline the process of creating an image profile by cloning. The steps assume that:

- Image Builder has been installed.
- The appropriate software depot has been downloaded from the VMware website by going to http://www.vmware.com/downloads and searching for the software depot.

Cloning an existing profile included in the depot is the easiest way to create a new profile. The steps to do so are as follows:

1. Add the depot with the image profile to be cloned:

   Add-EsxSoftwareDepot -DepotUrl <Path to software depot>

2. Find the name of the profile to be cloned using Get-EsxImageProfile.
3. Clone the profile:

   New-EsxImageProfile -CloneProfile <Existing profile name> -Name <New profile name>

4. Add a software package to the new image profile:

   Add-EsxSoftwarePackage -ImageProfile <New profile name> -SoftwarePackage <Package>

At this point, the software package will be validated and, in case of errors or if there are any dependencies that need to be resolved, an appropriate message will be displayed.

Assigning an image profile to hosts

To create a rule that assigns an image profile to a host, execute the following steps:

1. Connect to vCenter with PowerCLI:

   Connect-VIServer <vCenter IP or FQDN>

2. Add the software depot with the correct image profile to the PowerCLI session:

   Add-EsxSoftwareDepot <depot URL>

3. Locate the image profile using the Get-EsxImageProfile cmdlet.
4. Define a rule that assigns hosts with certain attributes to an image profile. For example, for hosts with IP addresses in a certain range, run the following commands:

   New-DeployRule -Name <Rule name> -Item <Profile name> -Pattern "ipv4=192.168.1.10-192.168.1.20"
   Add-DeployRule <Rule name>

Assigning a host profile to hosts

Optionally, an existing host profile can be assigned to hosts. To accomplish this, execute the following steps:

1. Connect to vCenter with PowerCLI:

   Connect-VIServer <vCenter IP or FQDN>

2. Locate the host profile name using the Get-VMHostProfile cmdlet.
3. Define a rule that assigns hosts with certain attributes to a host profile. For example, for hosts with IP addresses in a certain range, run the following commands:

   New-DeployRule -Name <Rule name> -Item <Profile name> -Pattern "ipv4=192.168.1.10-192.168.1.20"
   Add-DeployRule <Rule name>

Assigning a host to a folder or cluster in vCenter

To make sure a host is placed in a certain folder or cluster once it boots, do the following:

1. Connect to vCenter with PowerCLI:

   Connect-VIServer <vCenter IP or FQDN>

2. Define a rule that assigns hosts with certain attributes to a folder or cluster. For example, for hosts with IP addresses in a certain range, run the following commands:

   New-DeployRule -Name <Rule name> -Item <Folder name> -Pattern "ipv4=192.168.1.10-192.168.1.20"
   Add-DeployRule <Rule name>

If a host is assigned to a cluster, it inherits that cluster's host profile.

How it works...

Auto Deploy uses PXE boot to connect to the Auto Deploy server and get an image profile, the vCenter location and, optionally, host profiles. The detailed process is as follows:

1. The host gets the gPXE executable and gPXE configuration file from the PXE TFTP server.
2. As gPXE executes, it uses instructions from the configuration file to query the Auto Deploy server for specific information.
3. The Auto Deploy server returns the requested information specified in the image and host profiles.
4. The host boots using this information.
5. Auto Deploy adds the host to the specified vCenter server.
6. The host is placed in maintenance mode when additional information, such as an IP address, is required from the administrator. To exit maintenance mode, the administrator needs to provide this information and reapply the host profile.

When a new host boots for the first time, vCenter creates a new object and stores it together with the host and image profiles in the database. For any subsequent reboots, the existing object is used to get the correct host profile and any changes that have been made. More details can be found in the vSphere 5 documentation on the VMware website at https://www.vmware.com/support/pubs/.

Summary

In this article we learned how new hosts can be deployed with scripted installation and Auto Deploy techniques.


Storage Policy-based Management

Packt
07 Sep 2015
10 min read
In this article by Jeffery Taylor, the author of the book VMware Virtual SAN Cookbook, now that we have a functional VSAN cluster, we can leverage the power of Storage Policy-based Management (SPBM) to control how we deploy our virtual machines (VMs). We will discuss the following topics, with a recipe for each:

- Creating storage policies
- Applying storage policies to a new VM or a VM deployed from a template
- Applying storage policies to an existing VM migrating to VSAN
- Viewing a VM's storage policies and object distribution
- Changing storage policies on a VM already residing in VSAN
- Modifying existing storage policies

Introduction

SPBM is where the administrative power of converged infrastructure becomes apparent. You can define VM thick provisioning on a sliding scale, define how fault tolerant the VM's storage should be, make distribution and performance decisions, and more. RAID-type decisions for VMs resident on VSAN are also driven through the use of SPBM. VSAN can provide RAID-1 (mirrored) and RAID-0 (striped) configurations, or a combination of the two in the form of RAID-10 (a mirrored set of stripes). All of this is done on a per-VM basis. As the storage and compute infrastructures are now converged, you can define how you want a VM to run in the most logical place—at the VM level or its disks. The need for datastore-centric configuration, storage tiering, and so on is obviated and made redundant through the power of SPBM.

Technically, the configuration of storage policies is optional. If you choose not to define any storage policies, VSAN will create VMs and disks according to its default cluster-wide storage policy. While this will provide generic levels of fault tolerance and performance, it is strongly recommended to create and apply storage policies according to your administrative need. Much of the power given to you through a converged infrastructure and VSAN is in the policy-driven and VM-centric nature of policy-based management. While some of these options will be discussed throughout the following recipes, it is strongly recommended that you review the storage-policy appendix to familiarize yourself with all the storage-policy options before continuing.

Creating VM storage policies

Before a storage policy can be applied, it must be created. Once created, the storage policy can be applied to any part of any VM resident on VSAN-connected storage. You will probably want to create a number of storage policies to suit your production needs. Once created, all storage policies are tracked by vCenter and enforced/maintained by VSAN itself. Because of this, your policy selections remain valid and production continues even in the event of a vCenter outage.

In the example policy that we will create in this recipe, the VM policy will be defined as tolerating the failure of a single VSAN host. The VM will not be required to stripe across multiple disks and it will be 30 percent thick-provisioned.

Getting ready

Your VSAN should be deployed and functional as per the previous article. You should be logged in to the vSphere Web Client as an administrator or as a user with rights to create, modify, apply, and delete storage policies.

How to do it...

1. From the vSphere 5.5 Web Client, navigate to Home | VM Storage Policies. From the vSphere 6.0 Web Client, navigate to Home | Policies and Profiles | VM Storage Policies.
2. Initially, there will be no storage policies defined unless you have already created storage policies for other solutions. This is normal. In VSAN 6.0, you will have the VSAN default policy defined here prior to the creation of your own policies.
3. Click the Create a new VM storage policy button. A wizard will launch to guide you through the process.
4. If you have multiple vCenter Server systems in linked mode, ensure that you have selected the appropriate vCenter Server system from the drop-down menu.
5. Give your storage policy a name that will be useful to you and a description of what the policy does. Then, click Next.
6. The next page describes the concept of rule sets and requires no interaction. Click the Next button to proceed.
7. When creating the rule set, ensure that you select VSAN from the Rules based on vendor-specific capabilities drop-down menu. This will expose the <Add capability> button.
8. Select Number of failures to tolerate from the drop-down menu and specify a value of 1.
9. Add other capabilities as desired. For this example, we will specify a single stripe with 30% space reservation. Once all required policy definitions have been applied, click Next.
10. The next page will tell you which datastores are compatible with the storage policy you have created. As this storage policy is based on specific capabilities exposed by VSAN, only your VSAN datastore will appear as a matching resource. Verify that the VSAN datastore appears, and then click Next.
11. Review the summary page and ensure that the policy is being created on the basis of your specifications. When finished, click Finish.

The policy will be created. Depending on the speed of your system, this operation should be nearly instantaneous but may take several seconds to finish.

How it works...

The VSAN-specific policy definitions are presented through the VMware Profile-Driven Storage service, which runs with vCenter Server. The Profile-Driven Storage service determines which policy definitions are available by communicating with the ESXi hosts that are enabled for VSAN. Once VSAN is enabled, each host activates a VASA provider daemon, which is responsible for communicating policy options to and receiving policy instructions from the Profile-Driven Storage service.

There's more...

The nature of the storage policy definitions enabled by VSAN is additive. No policy option mutually excludes any other, and they can be combined in any way that your policy requirements demand. For example, specifying a number of failures to tolerate will not preclude specifying a cache reservation.

See also

For a full explanation of all policy options and when you might want to use them, refer to the storage-policy appendix.

Applying storage policies to a new VM or a VM deployed from a template

When creating a new VM on VSAN, you will want to apply a storage policy to that VM according to your administrative needs. As VSAN is fully integrated into vSphere and vCenter, this is a straightforward option during the normal VM deployment wizard. The workflow described in this recipe is for creating a new VM on VSAN. If deploying from a template, the wizard process is functionally identical from step 4 of the How to do it... section in this recipe.

Getting ready

You should be logged into the vSphere Web Client as an administrator or a user authorized to create virtual machines. You should have at least one storage policy defined (see the previous recipe).

How to do it...

1. Navigate to Home | Hosts and Clusters | Datacenter | Cluster.
2. Right-click the cluster, and then select New Virtual Machine....
3. In the subsequent screen, select Create a new virtual machine.
4. Proceed through the wizard through Step 2b. For the compute resource, ensure that you select your VSAN cluster or one of its hosts.
5. On the next step, select one of the VM storage policies that you created in the previous recipe. Once you select a VSAN storage policy, only the VSAN datastore will appear as compatible. Any other datastores that you have present will be ineligible for selection.
6. Complete the rest of the VM-deployment wizard as you normally would to select the guest OS, resources, and so on.

Once completed, the VM will deploy and it will populate in the inventory tree on the left side. The VM summary will reflect that the VM resides on the VSAN storage.

How it works...

All VMs resident on the VSAN storage will have a storage policy applied. Selecting the appropriate policy during VM creation means that the VM will be how you want it to be from the beginning of the VM's life. While policies can be changed later, this could involve a reconfiguration of the object, which can take time to complete and can result in increased disk and network traffic once it is initiated. Careful decision making during deployment can help you save time later.

Applying storage policies to an existing VM migrating to VSAN

When introducing VSAN into an existing infrastructure, you may have existing VMs that reside on external storage, such as NFS, iSCSI, or Fibre Channel (FC). When the time comes to move these VMs into your converged infrastructure and VSAN, we will have to make policy decisions about how these VMs should be handled.

Getting ready

You should be logged into the vSphere Web Client as an administrator or a user authorized to create, migrate, and modify VMs.

How to do it...

1. Navigate to Home | Hosts and Clusters | Datacenter | Cluster.
2. Identify the VM that you wish to migrate to VSAN. For the example used in this recipe, we will migrate the VM called linux-vm02 that resides on NFS Datastore.
3. Right-click the VM and select Migrate... from the context menu.
4. In the resulting page, select Change datastore or Change both host and datastore as applicable, and then click Next. If the VM does not already reside on one of your VSAN-enabled hosts, you must choose the Change both host and datastore option for your migration.
5. In the next step, select one of the VM storage policies that you created in the previous recipe. Once you select a VSAN storage policy, only the VSAN datastore will appear as compatible. Any other datastores that you have present will be ineligible for selection.
6. You can apply different storage policies to different VM disks. This can be done by performing the following steps:
   - Click on the Advanced >> button to reveal the various parts of the VM. Once clicked, the Advanced >> button will change to << Basic.
   - In the Storage column, click the existing datastore to reveal a drop-down menu. Click Browse.
   - In the subsequent window, select the desired policy from the VM Storage Policy drop-down menu. You will find that the only compatible datastore is your VSAN datastore. Click OK.
   - Repeat the preceding step as needed for other disks and the VM configuration file.
7. After performing the preceding steps, click on Next.
8. Review your selection on the final page, and then click Finish.

Migrations can potentially take a long time, depending on how large the VM is, the speed of the network, and other considerations. Please monitor the progress of your VM relocation tasks using the Recent Tasks pane. Once the migration task finishes, the VM's Summary tab will reflect that the datastore is now the VSAN datastore. In the example of this VM, the VM moved from NFS Datastore to vsanDatastore.

How it works...

Much like the new-VM workflow, we select the storage policy that we want to use during the migration of the VM to VSAN. However, unlike the deploy-from-template or VM-creation workflows, this process requires none of the VM configuration steps. We only have to select the storage policy, and then SPBM instructs VSAN how to place and distribute the objects. All object-distribution activities are completely transparent and automatic. This process can also be used to change the storage policy of a VM already resident in the VSAN cluster, but it is more cumbersome than modifying the policies by other means.

Summary

In this article, we learned that storage policies give you granular control over how the data for any given VM or VM disk is handled. Storage policies allow you to define how many mirrors (RAID-1) and how many stripes (RAID-0) are associated with any given VM or VM disk.


Windows 8 with VMware View

Packt
10 Sep 2013
3 min read
Deploying VMware View on Windows 8 (Advanced)

If you want to get hands-on experience with Windows 8 on VMware View and get ready for future View deployments, this should be a must-read guide for deploying it.

Getting ready

Let's keep the following requirements ready:

- You should have VMware vSphere 5.1 deployed
- You should have VMware View 5.1 Connection Server deployed
- You should have the Windows 8 Release Preview installer and license keys for the 32-bit version

How to do it...

Let's list the steps required to complete the task.

To create a Windows 8 virtual machine, perform the following steps:

1. Create a standard virtual hardware version 9 VM with Windows 8 as the guest operating system. As this is a testing phase, keep the memory and disk size optimal.
2. Edit the settings of the VM and, under Video card, select the Enable 3D support checkbox (this step is to make sure graphics and Adobe Flash content work with Windows 8 using the VMware driver).
3. Mount the ISO image of Windows 8 in the virtual machine and proceed with the Windows 8 installation. Enter the Windows license keys available with you.
4. Install VMware Tools in the VM; shut down and restart the VM.
5. Install the VMware View 5.1 agent in the VM, and uncheck Persona Management during agent installation.
6. Power on the VM, set the network option to DHCP, and disable Windows Defender.
7. Create a snapshot of the VM and power the VM down.

Now we are ready with the Windows 8 parent virtual machine with a snapshot.

To create a pool for Windows 8 in the View Admin console, perform the following steps:

1. Launch the Connection Server Admin console and navigate to the Pool Creation wizard.
2. Select the Automated pool type (you can use either Dedicated or Floating).
3. Choose a View Composer linked-clone-based pool.
4. Navigate through the rest of the wizard accepting all defaults, and choose the snapshot of your Windows 8 VM with the View Agent installed. Use QuickPrep for instant customization. You may need to manually restart the VM if QuickPrep doesn't get initiated by itself once the VM boots.
5. Allow provisioning to proceed.
6. Make sure you set Allow users to choose protocol: to No, otherwise 3D rendering gets disabled automatically. If you want to set Allow users to choose protocol: to Yes, make sure you stick to the RDP protocol and not PCoIP in the Default display protocol: field, or you will end up with a black screen.

To install the View Client, perform the following step:

1. Install View Client 5.1 on any device with iOS/Android/Linux/Windows. To view a list of supported clients, visit http://www.vmware.com/support/viewclients/doc/viewclients_pubs.html.

How it works...

Once all the preceding steps are performed, you should have Windows 8 with VMware View 5.x ready. You should be able to see the VM in ready status under Desktop Resources in the VMware View Admin console. You should be able to launch Windows 8 with the View Client now. Please note that you have to entitle users to the respective pools before they can access the VM.

More information

You can also refer to http://kb.vmware.com/kb/2033640 to learn more about how to install Windows 8 in a VM.


Setting Up the Citrix Components

Packt
03 Nov 2015
4 min read
In this article by Sunny Jha, the author of the book Mastering XenApp, we are going to implement the Citrix XenApp infrastructure components, which work together to deliver applications. The components we will be implementing are as follows:

- Setting up the Citrix License Server
- Setting up the Delivery Controller
- Setting up Director
- Setting up StoreFront
- Setting up Studio

Once you complete this article, you will understand how to install the Citrix XenApp infrastructure components for the effective delivery of applications.

Setting up the Citrix infrastructure components

You may be aware that Citrix reintroduced Citrix XenApp in version 7.5 with the new FMA-based architecture, replacing IMA. In this article, we will be setting up the different Citrix components so that they can deliver applications. As this is a proof of concept, I will be setting up almost all the Citrix components on a single Microsoft Windows 2012 R2 machine, whereas it is recommended that in a production environment you keep Citrix components such as the License Server, Delivery Controller, and StoreFront on separate servers to avoid a single point of failure and for better performance. The components that we will be setting up in this article are:

- Delivery Controller: This Citrix component acts as the broker, and its main function is to assign users to a server based on their selection of published application.
- License Server: This assigns licenses to the Citrix components, as every Citrix product requires a license in order to work.
- Studio: This acts as the control panel for Citrix XenApp 7.6 delivery. Inside Citrix Studio, the administrator makes all the configurations and changes.
- Director: This component is for monitoring and troubleshooting; it is a web-based application.
- StoreFront: This is the frontend of the Citrix infrastructure, by which users connect to their applications, either via Receiver or web based.

Installing the Citrix components

In order to start the installation, we need the Citrix XenApp 7.6 DVD or ISO image. You can always download it from the Citrix website; all you need is a MyCitrix account. Follow these steps:

1. Mount the disc/ISO you have downloaded.
2. When you double-click on the mounted disc, it will bring up a screen where you have to make a selection between XenApp (deliver applications) and XenDesktop (deliver applications and desktops).
3. Once you have made the selection, it will show you the next option related to the product. Here, we need to select XenApp. Choose Delivery Controller from the options.
4. The next screen will show you the License Agreement. You can go through it, accept the terms, and click on Next.
5. As described earlier, this is a proof of concept, so we will install all the components on a single server, but it is recommended to put each component on a different server for better performance. Select all the components and click on Next.
6. The next screen will show you the features that can be installed. As we have already installed the SQL Server, we don't have to select SQL Express, but we will choose Install Windows Remote Assistance. Click on Next.
7. The next screen will show you the firewall ports that need to be allowed for communication; they can be adjusted by Citrix as well. Click on Next.
8. The next screen will show you the summary of your selection. Here, you can review your selection and click on Install to install the components.
9. After you click on Install, it will go through the installation procedure. Once the installation is complete, click on Next.

By following these steps, we completed the installation of Citrix components such as the Delivery Controller, Studio, Director, and StoreFront. We also adjusted the firewall ports as per the Citrix XenApp requirements.

Summary

In this article, you learned about setting up the Citrix infrastructure components and how to install the Citrix License Server, Citrix Studio, Citrix Director, and Citrix StoreFront.


Xen Virtualization: Work with MySQL Server, Ruby on Rails, and Subversion

Packt
22 Oct 2009
7 min read
Base Appliance Image

We will use an Ubuntu Feisty domain image as the base image for creating these appliances. This image should be made as sparse and small as possible, and free of any cruft. A completely stripped-down version of Linux with only the bare necessities would be a great start. In this case, we will not need any graphical desktop environments, so we can completely eliminate software packages like X11 and any window manager like GNOME or KDE. Once we have a base image, we can back it up and then start using it for creating Xen appliances.

In this article we will use an Ubuntu Feisty domain as the base image. Once this domain image is ready, we are going to update it and clean it up a little bit so it can be our base:

1. Edit the sources list for apt and add the other repositories that we will need to get the software packages required when creating these appliances.
2. Update your list of software. This will connect to the apt repositories and get the latest list of packages. We will do the actual update in the next step.
3. Upgrade the distribution to ensure that you have the latest versions of all the packages.
4. Automatically clean the image so all unused packages are removed. This will ensure that the image stays free of cruft.

Now that we have the base appliance image ready, we will use it to create some Xen appliances. You can make a backup of the original base image, and every time you create an appliance you can use a copy as the starting point or template. The images are nothing but domU images, which are customized for running only specific applications. You start them up and run them like any other Xen guest domains.

MySQL Database Server

MySQL is one of the most popular open-source databases in the world. It is a key component of the LAMP architecture (Linux, Apache, MySQL, and PHP). It is also very easy to get started with MySQL, and this is one of the key factors driving its adoption across the enterprise. In this section we will create a Xen appliance that will run a MySQL database server and also provide the ability to automatically back up the database on a given schedule.

Time for Action – Create our first Xen appliance

We will use our base Ubuntu Feisty domain image and add MySQL and other needed software to it. Please ensure that you have updated your base image to the latest versions of the repositories and software packages before creating this appliance.

1. Install mysql-server using apt. Once it is installed, Ubuntu will automatically start the database server, so before we make our other changes, stop MySQL.
2. Edit /etc/mysql/my.cnf and comment out the line for the bind-address parameter. This will ensure that MySQL will accept connections from external machines and not just the localhost.
3. Start a mysql console session to test that everything is installed and working correctly.
4. Next we will install the utility for doing the automated backups. In order to do that, we will first need to install the wget utility for transferring files. This is not a part of the base Ubuntu Feisty installation.
5. Download the automysqlbackup script from the website. Copy this script to wherever you like, maybe /opt. Create a link to this location so it's easy to do future updates:

# cp automysqlbackup.sh.2.5 /opt
# ln -s automysqlbackup.sh.2.5 automysqlbackup.sh

6. Edit the script and modify the parameters at the top of the script to match your environment. Here are the changes to be made in our case:

# Username to access the MySQL server e.g. dbuser
USERNAME=pchaganti
# Password to access the MySQL server e.g. password
PASSWORD=password
# Host name (or IP address) of MySQL server e.g. localhost
DBHOST=localhost
# List of DBNAMES for Daily/Weekly Backup e.g. "DB1 DB2 DB3"
DBNAMES="all"
# Backup directory location e.g. /backups
BACKUPDIR="/var/backup/mysql"
# Mail setup
MAILCONTENT="quiet"

7. Schedule this backup script to be run daily by creating a crontab entry for it, in the following format:

45 5 * * * root /opt/automysqlbackup.sh >/dev/null 2>&1

Now we have a MySQL database server with automatic daily backups as a nice reusable Xen appliance.

What just happened?

We created our first Xen appliance! It is running the open-source MySQL database server along with an automated backup of the database as per the given schedule. This image is essentially a domU image, and it can be uploaded along with its configuration file to a repository somewhere and used by anyone in the enterprise, or elsewhere, with their Xen server. You can either start the domain manually as and when you need it, or set it up to boot automatically when your xend server starts.

Ruby on Rails Appliance

Ruby on Rails is one of the hottest web development frameworks around. It is simple to use and you can harness all the expressive power of the Ruby language. It provides a great feature set and has really put the Ruby language on the map. Ruby on Rails is gaining rapid adoption across the IT landscape and for a wide variety of web applications. In this section, we are going to create a Rails appliance that contains Ruby, Rails, and a Mongrel cluster for serving the Rails application, with the nginx web server for the static content. This appliance gives you a great starting point for your explorations into the world of Ruby on Rails and can be an excellent learning resource.

Time for Action – Rails on Xen

We will use our base Ubuntu Feisty domain image and add Rails and other needed software to it. Please ensure that you have updated your base image to the latest versions of the repositories and software packages before creating this appliance.

1. Install the packages required for compiling software on an Ubuntu system. This is required as we will be compiling some native extensions. Once the image is done, you can always remove these packages if you want to save space.
2. Install Ruby and the other packages that are needed for it.
3. Download the RubyGems package from RubyForge. We will use this to install any Ruby libraries or packages that we will need, including Rails.
4. Now install Rails. The first time you run this command on a clean Ubuntu Feisty system, you will get an error. Ignore this error and just run the command once again and it will work fine. This will install Rails and all of its dependencies.
5. Create a new Rails application. This will create everything needed in a directory named xenbook:

$ rails xenbook

6. Change into the directory of the application that we created in the previous step and start the server up. This will start Ruby's built-in web server, WEBrick, by default.
7. Launch a web browser and navigate to the web page for our xenbook application.

We have everything working for a simple Rails install. However, we are using WEBrick, which is a bit slow, so let's install the Mongrel server and use it with Rails. We will actually install mongrel_cluster, which will let us use a cluster of Mongrel processes for serving up our Rails application.


High Availability Scenarios

Packt
26 Nov 2014
14 min read
"Live Migration between hosts in a Hyper-V cluster is very straightforward and requires no specific configuration, apart from type and amount of simultaneous Live Migrations. If you add multiple clusters and standalone Hyper-V hosts into the mix, I strongly advise you to configure Kerberos Constrained Delegation for all hosts and clusters involved." Hans Vredevoort – MVP Hyper-V This article written by Benedict Berger, the author of Hyper-V Best Practices, will guide you through the installation of Hyper-V clusters and their best practice configuration. After installing the first Hyper-V host, it may be necessary to add another layer of availability to your virtualization services. With Failover Clusters, you get independence from hardware failures and are protected from planned or unplanned service outages. This article includes prerequirements and implementation of Failover Clusters. (For more resources related to this topic, see here.) Preparing for High Availability Like every project, a High Availability (HA) scenario starts with a planning phase. Virtualization projects are often turning up the question for additional availability for the first time in an environment. In traditional data centers with physical server systems and local storage systems, an outage of a hardware component will only affect one server hosting one service. The source of the outage can be localized very fast and the affected parts can be replaced in a short amount of time. Server virtualization comes with great benefits, such as improved operating efficiency and reduced hardware dependencies. However, a single component failure can impact a lot of virtualized systems at once. By adding redundant systems, these single points of failure can be avoided. Planning a HA environment The most important factor in the decision whether you need a HA environment is your business requirements. You need to find out how often and how long an IT-related production service can be interrupted unplanned, or planned, without causing a serious problem to your business. Those requirements are defined in a central IT strategy of a business as well as in process definitions that are IT-driven. They include Service Level Agreements of critical business services run in the various departments of your company. If those definitions do not exist or are unavailable, talk to the process owners to find out the level of availability needed. High Availability is structured in different classes, measured by the total uptime in a defined timespan, that is 99.999 percent in a year. Every nine in this figure adds a huge amount of complexity and money needed to ensure this availability, so take time to find out the real availability needed by your services and resist the temptation to plan running every service on multi-redundant, geo-spread cluster systems, as it may not fit in the budget. Be sure to plan for additional capacity in a HA environment, so you can lose hardware components without the need to sacrifice application performance. Overview of the Failover Cluster A Hyper-V Failover Cluster consists of two or more Hyper-V Server compute nodes. Technically, it's possible to use a Failover Cluster with just one computing node; however, it will not provide any availability advantages over a standalone host and is typically only used for migration scenarios. A Failover Cluster is hosting roles such as Hyper-V virtual machines on its computing nodes. 
If one node fails due to a hardware problem, it will not answer any more to cluster heartbeat communication, even though the service interruption is almost instantly detected. The virtual machines running on the particular node are powered off immediately due to the hardware failure on their computing node. The remaining cluster nodes then immediately take over these VMs in an unplanned failover process and start them on their respective own hardware. The virtual machines will be the backup running after a successful boot of their operating systems and applications in just a few minutes. Hyper-V Failover Clusters work under the condition that all compute nodes have access to a shared storage instance, holding the virtual machine configuration data and its virtual hard disks. In case of a planned failover, that is, for patching compute nodes, it's possible to move running virtual machines from one cluster node to another without interrupting the VM. All cluster nodes can run virtual machines at the same time, as long as there is enough failover capacity running all services when a node goes down. Even though a Hyper-V cluster is still called a Failover Cluster—utilizing the Windows Server Failover-Clustering feature—it is indeed capable of running an Active/Active Cluster. To ensure that all these capabilities of a Failover Cluster are indeed working, it demands an accurate planning and implementation process. Failover Cluster prerequirements To successfully implement a Hyper-V Failover Cluster, we need suitable hardware, software, permissions, and network and storage infrastructure as outlined in the following sections. Hardware The hardware used in a Failover Cluster environment needs to be validated against the Windows Server Catalogue. Microsoft will only support Hyper-V clusters when all components are certified for Windows Server 2012 R2. The servers used to run our HA virtual machines should ideally consist of identical hardware models with identical components. It is possible, and supported, to run servers in the same cluster with different hardware components, that is, different size of RAM; however, due to a higher level of complexity, this is not best practice. Special planning considerations are needed to address the CPU requirements of a cluster. To ensure maximum compatibility, all CPUs in a cluster should be exactly the same model. While it's possible from a technical point of view to mix even CPUs from Intel and AMD in the same cluster through to different architecture, you will lose core cluster capabilities such as Live Migration. Choosing a single vendor for your CPUs is not enough, even when using different CPU models your cluster nodes may be using a different set of CPU instruction set extensions. With different instructions sets, Live Migrations won't work either. There is a compatibility mode that disables most of the instruction set on all CPUs on all cluster nodes; however, this leaves you with a negative impact on performance and should be avoided. A better approach to this problem would be creating another cluster from the legacy CPUs running smaller or non-production workloads without affecting your high-performance production workloads. If you want to extend your cluster after some time, you will find yourself with the problem of not having the exact same hardware available to purchase. 
Choose the current revision of the model or product line you are already using in your cluster and manually compare the CPU instruction sets at http://ark.intel.com/ and http://products.amd.com/, respectively. Choose the current CPU model that best fits the original CPU features of your cluster and have this design validated by your hardware partner. Ensure that your servers are equipped with compatible CPUs, the same amount of RAM, and the same network cards and storage controllers. The network design Mixing different vendors of network cards in a single server is fine and best practice for availability, but make sure all your Hyper-V hosts are using an identical hardware setup. A network adapter should only be used exclusively for LAN traffic or storage traffic. Do not mix these two types of communication in any basic scenario. There are some more advanced scenarios involving converged networking that can enable mixed traffic, but in most cases, this is not a good idea. A Hyper-V Failover Cluster requires multiple layers of communication between its nodes and storage systems. Hyper-V networking and storage options have changed dramatically through the different releases of Hyper-V. With Windows Server 2012 R2, the network design options are endless. In this article, we will work with a typically seen basic set of network designs. We have at least six Network Interface Cards (NICs) available in our servers with a bandwidth of 1 Gb/s. If you have more than five interface cards available per server, use NIC Teaming to ensure the availability of the network or even use converged networking. Converged networking will also be your choice if you have less than five network adapters available. The First NIC will be exclusively used for Host Communication to our Hyper-V host and will not be involved in the VM network traffic or cluster communication at any time. It will ensure Active Directory and management traffic to our Management OS. The second NIC will ensure Live Migration of virtual machines between our cluster nodes. The third NIC will be used for VM traffic. Our virtual machines will be connected to the various production and lab networks through this NIC. The fourth NIC will be used for internal cluster communication. The first four NICs can either be teamed through Windows Server NIC Teaming or can be abstracted from the physical hardware through to Windows Server network virtualization and converged fabric design. The fifth NIC will be reserved for storage communication. As advised, we will be isolating storage and production LAN communication from each other. If you do not use iSCSI or SMB3 storage communication, this NIC will not be necessary. If you use Fibre Channel SAN technology, use a FC-HBA instead. If you leverage Direct Attached Storage (DAS), use the appropriate connector for storage communication. The sixth NIC will also be used for storage communication as a redundancy. The redundancy will be established via MPIO and not via NIC Teaming. There is no need for a dedicated heartbeat network as in older revisions of Windows Server with Hyper-V. All cluster networks will automatically be used for sending heartbeat signals throughout the other cluster members. If you don't have 1 Gb/s interfaces available, or if you use 10 GbE adapters, it’s best practice to implement a converged networking solution. Storage design All cluster nodes must have access to the virtual machines residing on a centrally shared storage medium. 
This could be a classic setup with a SAN, a NAS, or a more modern concept with Windows Scale Out File Servers hosting Virtual Machine Files SMB3 Fileshares. In this article, we will use a NetApp SAN system that's capable of providing a classic SAN approach with LUNs mapped to our Hosts as well as utilizing SMB3 Fileshares, but any other Windows Server 2012 R2 validated SAN will fulfill the requirements. In our first setup, we will utilize Cluster Shared Volumes (CSVs) to store several virtual machines on the same storage volume. It's not good these days to create a single volume per virtual machine due to a massive management overhead. It's a good rule of thumb to create one CSV per cluster node; in larger environments with more than eight hosts, a CSV per two to four cluster nodes. To utilize CSVs, follow these steps: Ensure that all components (SAN, Firmware, HBAs, and so on) are validated for Windows Server 2012 R2 and are up to date. Connect your SAN physically to all your Hyper-V hosts via iSCSI or Fibre Channel connections. Create two LUNs on your SAN for hosting virtual machines. Activate Hyper-V performance options for these LUNs if possible (that is, on a NetApp, by setting the LUN type to Hyper-V). Size the LUNs for enough capacity to host all your virtual hard disks. Label the LUNs CSV01 and CSV02 with appropriate LUN IDs. Create another small LUN with 1 GB in size and label it Quorum. Make the LUNs available to all Hyper-V hosts in this specified cluster by mapping it on the storage device. Do not make these LUNs available to any other hosts or cluster. Prepare storage DSMs and drivers (that is, MPIO) for Hyper-V host installation. Refresh disk configuration on hosts, install drivers and DSMs, and format volumes as NTFS (quick). Install Microsoft Multipath IO when using redundant storage paths: Install-WindowsFeature -Name Multipath-IO –Computername ElanityHV01, ElanityHV02 In this example, I added the MPIO feature to two Hyper-V hosts with the computer names ElanityHV01 and ElanityHV02. SANs typically are equipped with two storage controllers for redundancy reasons. Make sure to disperse your workloads over both controllers for optimal availability and performance. If you leverage file servers providing SMB3 shares, the preceding steps do not apply to you. Perform the following steps instead: Create a storage space with the desired disk-types, use storage tiering if possible. Create a new SMB3 Fileshare for applications. Customize the Permissions to include all Hyper-V servers from the planned clusters as well as the Hyper-V cluster object itself with full control. Server and software requirements To create a Failover Cluster, you need to install a second Hyper-V host. Use the same unattended file but change the IP address and the hostname. Join both Hyper-V hosts to your Active Directory domain if you have not done this until yet. Hyper-V can be clustered without leveraging Active Directory but it's lacking several key components, such as Live Migration, and shouldn't be done on purpose. The availability to successfully boot up a domain-joined Hyper-V cluster without the need to have any Active Directory domain controller present during boot time is the major benefit from the Active Directory independency of Failover Clusters. Ensure that you create a Hyper-V virtual switch as shown earlier with the same name on both hosts, to ensure cluster compatibility and that both nodes are installed with all updates. 
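A hedged sketch of that step in PowerShell follows. The switch name External-VMs is an assumption of mine (any name works, as long as it is identical on every node), while the adapter name VMs matches the renaming scheme used later in this article, and the host names reuse the article's example hosts:

# Sketch: create an identically named external switch on every cluster node.
# "External-VMs" is a hypothetical switch name; "VMs" is the adapter reserved for VM traffic.
foreach ($node in "ElanityHV01", "ElanityHV02") {
    New-VMSwitch -Name "External-VMs" -NetAdapterName "VMs" -AllowManagementOS $false -ComputerName $node
}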
If you have System Center 2012 R2 in place, you can also use System Center Virtual Machine Manager to create the Hyper-V cluster.

Implementing Failover Clusters

After preparing our Hyper-V hosts, we will now create a Failover Cluster using PowerShell. I'm assuming your hosts are installed, storage and network connections are prepared, and the Hyper-V role is already active, utilizing up-to-date drivers and firmware on your hardware.

First, we need to ensure that the server name, date, and time of our hosts are correct. Time and timezone configuration should occur via Group Policy. For automatic network configuration later on, it's important to rename the network connections from their defaults to their designated roles using PowerShell, as seen in the following commands:

Rename-NetAdapter -Name "Ethernet" -NewName "Host"
Rename-NetAdapter -Name "Ethernet 2" -NewName "LiveMig"
Rename-NetAdapter -Name "Ethernet 3" -NewName "VMs"
Rename-NetAdapter -Name "Ethernet 4" -NewName "Cluster"
Rename-NetAdapter -Name "Ethernet 5" -NewName "Storage"

The Network Connections window should now reflect these names. Next comes the IP configuration of the network adapters. If you are not using DHCP for your servers, manually set the IP configuration (different subnets) of the specified network cards. Here is a great blog post on how to automate this step: http://bit.ly/Upa5bJ

Next, we need to activate the necessary Failover Clustering features on both of our Hyper-V hosts:

Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools -ComputerName ElanityHV01, ElanityHV02

Before actually creating the cluster, we launch a cluster validation via PowerShell:

Test-Cluster ElanityHV01, ElanityHV02

Open the generated .mht file (the Cluster Validation Report) for more details. There may be some warnings that should be investigated. However, as long as there are no errors, the configuration is ready for clustering and fully supported by Microsoft. Still, check the warnings to be sure you won't run into problems in the long run. After you have fixed potential errors and warnings listed in the Cluster Validation Report, you can finally create the cluster as follows:

New-Cluster -Name "CN=ElanityClu1,OU=Servers,DC=cloud,DC=local" -Node ElanityHV01, ElanityHV02 -StaticAddress 192.168.1.49

This will create a new cluster named ElanityClu1, consisting of the nodes ElanityHV01 and ElanityHV02 and using the cluster IP address 192.168.1.49. This cmdlet will create the cluster and the corresponding Active Directory object in the specified OU. Moving the cluster object to a different OU later on is no problem at all; even renaming is possible when done the right way. After creating the cluster, open the Failover Cluster Manager console; you should be able to connect to your cluster and see that all your cluster nodes and Cluster Core Resources are online.

Rerun the Validation Report and copy the generated .mht files to a secure location if you need them for support queries. Keep in mind that you have to rerun this wizard if any hardware or configuration changes occur to the cluster components, including any of its nodes. The initial cluster setup is now complete and we can continue with post-creation tasks.

Summary

With the knowledge from this article, you are now able to design and implement Hyper-V Failover Clusters as well as guest clusters. You are aware of the basic concepts of High Availability and the storage and networking options necessary to achieve this. You have seen real-world proven configurations to ensure a stable operating environment.

Understanding Citrix®Provisioning Services 7.0

Packt
27 Jan 2014
5 min read
The basic Provisioning Services infrastructure consists of the following components, which appear within the datacenter post installation and implementation as described below.

Provisioning Service License Server

The License Server can either be installed within the shared infrastructure, or an existing Citrix License Server can be selected. However, we have to ensure the Provisioning Services license is configured on your existing Citrix Enterprise License Server. A License Server can be selected when the Provisioning Services Configuration Wizard is run on a planned server. All Provisioning Servers within the farm must be able to communicate with the License Server.

Provisioning Service Database server

The database stores all system configuration settings that exist within a farm. Only one database can exist within a Provisioning Services farm. We can choose an existing SQL Server database or install a clustered SQL Server for High Availability from a redundancy and business continuity perspective. The Database server can be selected when the Provisioning Services Configuration Wizard is run on a planned server. All Provisioning Servers within the farm must be able to communicate with the Database server.

Provisioning Service Admin Console

The Citrix Provisioning Services Admin Console is the tool used to control your Provisioning Services implementation. After logging on to the console, we can select the farm that we want to connect to. Our role determines what we can view in the console and operate in the Provisioning Services farm.

Shared storage service

Citrix Provisioning Services requires shared storage for vDisks that is accessible by all of the users in a network. It is intended for file storage and allows simultaneous access by multiple users without the need to replicate files to their machines' vDisks. The supported shared storage types are SAN, NAS, iSCSI, and CIFS.

Active Directory Server

Citrix Provisioning Services requires Microsoft Active Directory. It provides authentication and authorization mechanisms as well as a framework within which other related services can be deployed. Microsoft Active Directory is an LDAP-compliant database that contains objects. The most commonly used objects are users, computers, and groups.

Network services

Dynamic Host Configuration Protocol (DHCP) is used to assign IP addresses to servers and systems. Trivial File Transfer Protocol (TFTP) is used for the automated transfer of boot configuration files between servers and systems in a network. Preboot Execution Environment (PXE) is a client/server interface standard that allows networked computers to be booted remotely over the network instead of from local media.
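Where the DHCP role runs on Windows Server 2012, the PXE-related options can be set with the DhcpServer PowerShell module. This is only a hedged sketch: the scope, the TFTP server address, and the bootstrap filename ARDBP32.BIN are assumptions based on a typical Provisioning Services setup, not values taken from this article, so verify them against your own PVS installation.

# Hypothetical example: point PXE clients of scope 192.168.10.0 at the PVS TFTP service.
# Option 66 = boot server host name or address, option 67 = bootfile name.
Set-DhcpServerv4OptionValue -ScopeId 192.168.10.0 -OptionId 66 -Value "192.168.10.20"
Set-DhcpServerv4OptionValue -ScopeId 192.168.10.0 -OptionId 67 -Value "ARDBP32.BIN"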
System requirements
Citrix Provisioning Services can be installed with the following requirements.

Citrix Provisioning Server:
- Operating system: Windows Server 2012: Standard, Essential, and Datacenter editions; Windows Server 2008 R2 and 2008 R2 SP1: Standard, Enterprise, and Datacenter editions; and all editions of Windows Server 2008 (32- or 64-bit).
- Processor: Intel or AMD x86 or x64 compatible, 2 GHz / 3 GHz (preferred) / 3.5 GHz dual core/HT or equivalent, to allow for capacity growth.
- Memory: 2 GB RAM; 4 GB when serving more than 250 vDisks.
- Hard disk: to determine the IOPS needed for a given RAID level, plan your sizing based on the following formulas: Total Raw IOPS = Disk Speed IOPS x Number of Disks; Functional IOPS = ((Total Raw IOPS x Write %) / RAID Penalty) + (Total Raw IOPS x Read %). For more details, please refer to http://support.citrix.com/servlet/KbServlet/download/24559-102-647931/
- Network adapter: IP assignment to servers should be static. A 1 Gb adapter is recommended for fewer than 250 target devices; for more than 250 devices, dual 1 Gb adapters are recommended. For High Availability, use two NICs for redundancy.
- Prerequisite software components: Microsoft .NET 4.0 and Microsoft PowerShell 3.0 loaded on a fresh OS.

The infrastructure components required are described as follows:
- Supported database: Microsoft SQL Server 2008, 2008 R2, and 2012 (32-bit or 64-bit editions) databases can be used for Provisioning Services. For database sizing, please refer to http://msdn.microsoft.com/en-us/library/ms187445.aspx. For HA planning, please refer to http://support.citrix.com/proddocs/topic/provisioning-7/pvs-installtask1-plan-6-0.html.
- Supported hypervisor: XenServer 6.0; Microsoft SCVMM 2012 SP1 with Hyper-V 3.0; SCVMM 2012 with Hyper-V 2.0; VMware ESX 4.1, ESX 5, or ESX 5 Update 1; vSphere 5.0, 5.1, and 5.1 Update 1; along with physical devices for 3D Pro graphics (blade servers, Windows Server OS machines, and Windows Desktop OS machines with the XenDesktop VDA installed).
- Provisioning Console: hardware requirements: 2 GHz processor, 2 GB memory, 500 MB hard disk. Supported operating systems: all editions of Windows Server 2008 (32-bit or 64-bit); Windows Server 2008 R2: Standard, Datacenter, and Enterprise editions; Windows Server 2012: Standard, Essential, and Datacenter editions; Windows 7 (32-bit or 64-bit); Windows XP Professional (32-bit or 64-bit); Windows Vista (32-bit or 64-bit): Business, Enterprise, and Ultimate (retail licensing); and all editions of Windows 8 (32-bit or 64-bit). Prerequisite software: MMC 3.0, Microsoft .NET 4.0, and Windows PowerShell 2.0. If the console is used with XenDesktop, .NET 3.5 SP1 is also required; if it is used with SCVMM, SCVMM 2012 SP1 and PowerShell 3.0 are required.
- Supported ESD: applies only when vDisk Update Management is used; the supported ESD systems are WSUS Server 3.0 SP2 and Microsoft System Center Configuration Manager 2007 SP2, 2012, and 2012 SP1.
- Supported target devices: supported operating systems: all editions of Windows 8 (32- or 64-bit); Windows 7 SP1 (32-bit or 64-bit): Enterprise, Professional, and Ultimate (supported only in Private mode); Windows XP Professional SP3 32-bit and Windows XP Professional SP2 64-bit; Windows Server 2008 R2 SP1: Standard, Datacenter, and Enterprise editions; Windows Server 2012: Standard, Essential, and Datacenter editions.
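As a quick illustration of the IOPS sizing formula above, the following PowerShell sketch computes functional IOPS for a hypothetical array. The per-disk IOPS, disk count, read/write ratio, and RAID penalty are example values only (a 15k spindle at roughly 180 IOPS and a RAID 10 write penalty of 2 are common rules of thumb), so replace them with figures from your own storage vendor.

# Example values - replace with your own measurements
$diskSpeedIops = 180      # assumed IOPS of a single 15k spindle
$diskCount     = 8
$writePercent  = 0.4      # 40% writes, 60% reads
$raidPenalty   = 2        # RAID 10 write penalty

$totalRawIops   = $diskSpeedIops * $diskCount
$functionalIops = (($totalRawIops * $writePercent) / $raidPenalty) + ($totalRawIops * (1 - $writePercent))

"Total raw IOPS:  $totalRawIops"
"Functional IOPS: {0:N0}" -f $functionalIops

With these example numbers the array delivers 1,440 raw IOPS but only about 1,152 functional IOPS once the RAID write penalty is applied, which is the figure to compare against your vDisk workload.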
Summary This article has thus covered the several components that make up a Citrix Provisioning Services farm and the system requirements that need to be met to run the software. Resources for Article: Further resources on this subject: Introduction to XenConvert [article] Citrix XenApp Performance Essentials [article] Getting Started with XenApp 6 [article]

Essentials of VMware vSphere

Packt
09 Jul 2015
7 min read
In this article by Puthiyavan Udayakumar, author of the book VMware vSphere Design Essentials, we will cover the following topics: Essentials of designing VMware vSphere The PPP framework The challenges and encounters faced on virtual infrastructure (For more resources related to this topic, see here.) Let's get started with understanding the essentials of designing VMware vSphere. Designing is nothing but assembling and integrating VMware vSphere infrastructure components together to form the baseline for a virtualized datacenter. It has the following benefits: Saves power consumption Decreases the datacenter footprint and helps towards server consolidation Fastest server provisioning On-demand QA lab environments Decreases hardware vendor dependency Aids to move to the cloud Greater savings and affordability Superior security and High Availability Designing VMware vSphere Architecture design principles are usually developed by the VMware architect in concurrence with the enterprise CIO, Infrastructure Architecture Board, and other key business stakeholders. From my experience, I would always urge you to have frequent meetings to observe functional requirements as much as possible. This will create a win-win situation for you and the requestor and show you how to get things done. Please follow your own approach, if it works. Architecture design principles should be developed by the overall IT principles specific to the customer's demands, if they exist. If not, they should be selected to ensure positioning of IT strategies in line with business approaches. In nutshell, architect should aim to form an effective architecture principles that fulfills the infrastructure demands, following are high level principles that should be followed across any design: Design mission and plans Design strategic initiatives External influencing factors When you release a design to the customer, keep in mind that the design must have the following principles: Understandable and robust Complete and consistent Stable and capable of accepting continuous requirement-based changes Rational and controlled technical diversity Without the preceding principles, I wouldn't recommend you to release your design to anyone even for peer review. For every design, irrespective of the product that you are about to design, try the following approach; it should work well but if required I would recommend you make changes to the approach. The following approach is called PPP, which will focus on people's requirements, the product's capacity, and the process that helps to bridge the gap between the product capacity and people requirements: The preceding diagram illustrates three entities that should be considered while designing VMware vSphere infrastructure. Please keep in mind that your design is just a product designed by a process that is based on people's needs. In the end, using this unified framework will aid you in getting rid of any known risks and its implications. Functional requirements should be meaningful; while designing, please make sure there is a meaning to your design. Selecting VMware vSphere from other competitors should not be a random pick, you should always list the benefits of VMware vSphere. 
Some of them are as follows:
- Server consolidation and easy hardware changes
- Dynamic provisioning of resources to your compute nodes
- Templates, snapshots, vMotion, DRS, DPM, High Availability, fault tolerance, auto monitoring, and solutions for warnings and alerts
- Virtual Desktop Infrastructure (VDI), building a disaster recovery site, fast deployments, and decommissions

The PPP framework
Let's explore the components that integrate to form the PPP framework. Always keep in mind that the design should consist of people, processes, and products that meet the unified functional requirements and performance benchmark. Always expect the unexpected. Without these metrics, your design is incomplete; PPP always retains its own decision metrics. What does it do, who does it, and how is it done? We will see the answers in the following diagrams. The PPP framework helps you to get started with requirements gathering, design vision, business architecture, infrastructure architecture, opportunities and solutions, migration planning, setting the tone for implementation, and design governance. The following table illustrates the essentials of the three-dimensional approach and the basic questions that need to be answered before you start designing, or documenting a design, which will in turn help you to understand the real requirements for a specific design:

Product (results of what?):
- In what hardware will the VM reside? What kind of CPU is required?
- What is the quantity of CPU, RAM, and storage per host/VM?
- What kind of storage is required? What kind of network is required?
- What are the standard applications that need to be rolled out?
- What kind of power and cooling are required? How much rack and floor space is demanded?

People (results of who?):
- Who is responsible for infrastructure provisioning?
- Who manages the datacenter and supplies the power?
- Who is responsible for implementation of the hardware and software patches?
- Who is responsible for storage and backup?
- Who is responsible for security and hardware support?

Process (results of how?):
- How should we manage the virtual infrastructure?
- How should we manage hosted VMs?
- How should we provision VMs on demand?
- How should a DR site become active during a primary site failure?
- How should we provision storage and backup?
- How should we take snapshots of VMs?
- How should we monitor and perform periodic health checks?

Before we start to apply the PPP framework on VMware vSphere, we will discuss the list of challenges and encounters faced on the virtual infrastructure.
This is why 80 percent of IT professionals believe that virtualization backup is a great technological challenge. Security: More than six out of ten IT professionals believe that data protection is a top technological challenge. Backward compatibility: This is especially challenging for certain apps and systems that are dependent on legacy systems. Monitoring performance: Unlike physical servers, you cannot monitor the performance of VMs with common hardware resources such as CPU, memory, and storage. Restriction of licensing: Before you install software on virtual machines, read the license agreements; they might not support this; hence, by hosting on VMs, you might violate the agreement. Sizing the database and mailbox: Proper sizing of databases and mailboxes is really critical to the organization's communication systems and for applications. Poor design of storage and network: A poor storage design or a networking design resulting from a failure to properly involve the required teams within an organization is a sure-fire way to ensure that this design isn't successful. Summary In this article we covered a brief introduction of the essentials of designing VMware vSphere which focused on the PPP framework. We also had look over the challenges and encounters faced on the virtual infrastructure. Resources for Article: Further resources on this subject: Creating and Managing VMFS Datastores [article] Networking Performance Design [article] The Design Documentation [article]

VMware View 5 Desktop Virtualization

Packt
15 Jun 2012
8 min read
Core components of VMware View This book assumes a familiarity with server virtualization, more specifically, VMware vSphere (sometimes referred to as ESX by industry graybeards). Therefore, this article will focus on: The VMware vCenter Server The types of View Connection Server Agent and client software vCenter Server VMware vCenter is a required component of a VMware View solution. This is because the View Connection Server interacts with the underlying Virtual Infrastructure (VI) through vCenter Web Service (typically over port 443). vCenter is also responsible for the complementary components of a VMware View solution provided by the underlying VMware vSphere, including VMotion and DRS (used to balance the virtual desktop load on the physical hosts). When an end customer purchases VMware View bundles, VMware vCenter is automatically included and does not need to be purchased via a separate Stock Keeping Unit (SKU). In the environments leveraging vSphere for server virtualization, vCenter Server is likely to already exist. To ensure a level set on the capabilities that VMware vCenter Server provides, the key terminologies are listed as follows: vMotion: It is the ability to live migrate a running virtual machine from one physical server to another with no downtime. DRS: It is the vCenter Server capability that balances virtual machines across physical servers participating in the same vCenter Server cluster. Cluster: It is a collection of physical servers that have access to the same networks and shared storage. The physical servers participating in a vCenter cluster have their resources (for example, CPU, memory, and so on) logically pooled for virtual machine consumption. HA: It is the vCenter Server capability that protects against the failure of a physical server. HA will power up virtual machines that reside on the failed physical server on available physical servers in the same cluster. Folder: It is a logical grouping of virtual machines, displayed within the vSphere Client. vSphere Client: It is the client-side software used to connect to vCenter Servers (or physical servers running vSphere) for management, monitoring, configuration, and other related tasks. Resource pool: It is a logical pool of resources (for example, CPU, memory, and so on). The virtual machines (or the groups of virtual machines) residing in the same resource pool will share a predetermined amount of resources. Designing a VMware View solution often touches on typical server virtualization design concepts such as the proper cluster design. Owing to this overlap in design concepts between server virtualization and VDI, many server virtualization engineers apply exactly the same principles from one solution to the other. The first misstep that a VDI architect can take is that VDI is not server virtualization and should not be treated as such. Server virtualization is the virtualization of server operating systems. While it is true that VDI does use some server virtualization (for the connection infrastructure, for example), there are many concepts that are new and critical to understand for success. The second misstep a VDI architect can make is in understanding the pure scale of some VDI solutions. For the average server virtualization administrator with no VDI in their environment, he/she may be tasked with managing a dozen physical servers with a few hundred virtual machines. 
The authors of this book have been involved in VDI solutions involving tens of thousands of vDesktops, well beyond the limits of a traditional VMware vSphere design. VDI is often performed on a different scale. The concepts of architectural scaling are covered later in this book, but many of the scaling concepts revolve around the limits of VMware vCenter Server. It should be noted that VMware vCenter Server was originally designed to be the central management point for the enterprise server virtualization environments. While VMware continues to work on its ability to scale, designing around VMware vCenter server will be important. So why do we need VMware vCenter in the first place, for the VDI architect? VMware vCenter is the gateway for all virtual machine tasks in a VMware View solution. This includes the following tasks: The creation of virtual machine folders to organize vDesktops The creation of resource pools to segregate physical resources for different groups of vDesktops The creation of vDesktops The creation of snapshots VMware vCenter is not used to broker the connection of an end device to a vDesktop. Therefore, an outage of VMware vCenter should not impact inbound connections to already-provisioned vDesktops as it will prevent additional vDesktops from being built, refreshed, or deleted. Because of vCenter Server's importance in a VDI solution, additional steps are often taken to ensure its availability even beyond the considerations made in a typical server virtualization solution. Later in this book, there is a question asking whether an incumbent vCenter Server should be used for an organization's VDI or whether a secondary vCenter Server infrastructure should be built. View Connection Server View Connection Server is the primary component of a VMware View solution; if VMware vCenter Server is the gateway for management communication to the virtual infrastructure and the underlying physical servers, the VMware View Connection Server is the gateway that end users pass through to connect to their vDesktop. In classic VDI terms, it is VMware's broker that connects end users with workspaces (physical or virtual). View Connection Server is the central point of management for the VDI solution and is used to manage almost the entire solution infrastructure. However, there will be times when the architect will need to make considerations to vCenter cluster configurations, as discussed later in this book. In addition, there may be times when the VMware View administrator will need access to the vCenter Server. The types of VMware View Connection Servers There are several options available when installing the VMware View Connection Server. Therefore, it is important to understand the different types of View Connection Servers and the role they play in a given VDI solution. Following are the three configurations in which View Connection Server can be installed: Full: This option installs all the components of View Connection Server, including a fresh Lightweight Directory Access Protocol (LDAP) instance. Security: This option installs only the necessary components for the View Connection portal. View Security Servers do not need to belong to an Active Directory domain (unlike the View Connection Server) as they do not access any authentication components (for example, Active Directory). Replica: This option creates a replica of an existing View Connection Server instance for load balancing or high availability purposes. 
The authentication/ LDAP configuration is copied from the existing View Connection Server. Our goal is to design the solutions that are highly available for our end customers. Therefore, all the designs will leverage two or more View Connection Servers (for example, one Full and one Replica). The following services are installed during a Full installation of View Connection Server: VMware View Connection Server VMware View Framework Component VMware View Message Bus Component VMware View Script Host VMware View Security Gateway Component VMware View Web Component VMware VDMDS VMware VDMDS provides the LDAP directory services. View Agent View Agent is a component that is installed on the target desktop, whether physical (seldom) or virtual (almost always). View Agent allows the View Connection Server to establish a connection to the desktop. View Agent also provides the following capabilities: USB redirection: It is defined as making a USB device—that is connected locally—appear to be connected to vDesktop Single Sign-On (SSO): It is done by using intelligent credential handling, which requires only one secured and successful authentication login request, as opposed to logging in multiple times (for example, at the connection server, vDesktop, and so on) Virtual printing via ThinPrint technology: It is the ability to streamline printer driver management through the use of ThinPrint (OEM) PCoIP connectivity: It is the purpose-built VDI protocol made by Teradici and used by VMware in their VMware View solution Persona management: It is the ability to manage a user profile across an entire desktop landscape; the technology comes via the recovery time objective (RTO) acquisition by VMware View Composer support: It is the ability to use linked clones and thin provisioning to drastically reduce operational efforts in managing a mid-to-large-scale VMware View environment View Client View Client is a component that is installed on the end device (for example, the user's laptop). View Client allows the device to connect to a View Connection Server, which then directs the device to an available desktop resource. Following are the two types of View Clients: View Client View Client with Local Mode These separate versions have their own unique installation bits (only one may be installed at a time). View Client provides all of the functionality needed for an online and connected worker. If Local Mode will be leveraged in the solution, View Client with Local Mode should be installed. VMware View Local Mode is the ability to securely check out a vDesktop to a local device for use in disconnected scenarios (for example, in the middle of the jungle). There is roughly an 80 MB difference in the installed packages (View Client with Local Mode being larger). For most scenarios, 80 MB of disk space will not make or break the solution as even flash drives are well beyond an 80 MB threshold. In addition to providing the functionality of being able to connect to a desktop, View Client talks to View Agent to perform the following tasks: USB redirection Single Sign-On
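A quick way to confirm that a Full Connection Server installation completed correctly is to check that the services listed above are present and running. This is a minimal sketch rather than an official View procedure, and the service display names can vary between View releases, so treat the wildcard filters below as assumptions to adjust for your version.

# List the View services on a Connection Server and show their status
# The "VMware View*" display-name filter is an assumption; builds may differ
Get-Service -DisplayName 'VMware View*' |
    Select-Object DisplayName, Status |
    Sort-Object Status, DisplayName

# The same check can be pointed at the agent services on a vDesktop
Get-Service -DisplayName 'VMware View Agent*' -ErrorAction SilentlyContinue |
    Select-Object DisplayName, Status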

Your first step towards Hyper-V Replica

Packt
11 Oct 2013
12 min read
(For more resources related to this topic, see here.) The Server Message Block protocol When an enterprise starts to build a modern datacenter, the first thing that should be done is to set up the storage. With the introduction of Windows Server 2012, a new improved version of the Server Message Block (SMB) protocol is introduced. The SMB is a file sharing protocol. This new version is 3.0 and is designed for modern datacenters. It allows administrators to create file shares and deploy critical systems on them. This is really good, because now administrators have to deal with file shares and security permissions, instead of complex connections to storage arrays. The idea is to set up one central SMB file-sharing server and attach the underlying storage to it. This SMB server initiates connection to the underlying storage. The logical disks created on the storage are attached to this SMB server. Then different file shares are created on it with different access permissions. These file shares can be used by different systems, such as Hyper-V storage space for virtual machine files, MS SQL server database files, Exchange Server database files, and so on. It is an advantage, because all of the data is stored on one location, which means easier administration of data files. It is important to say that this is a new concept and is only available with Windows Server 2012. It comes with no performance degradation on critical systems, because SMB v3.0 was designed for this type of data traffic. Setting up security permissions on SMB file shares SMB file shares contain sensitive data files whether they are virtual machines or SQL server database files, proper security permissions need to be applied to them in order to ensure that only authorized users and machines have access to them. Because of this, SMB File Sharing server has to be connected to the LAN part of the infrastructure as well. Security permissions are read from an Active Directory server. For example, if Hyper-V hosts have to read and write on a share, then only the computer accounts of those hosts need permissions on that share, and no one else. Another example is, if the share holds MS SQL server database files, then only the SQL Server computer accounts and SQL Server service account need permissions on that share. Migration of virtual machines Virtual Machine High Availability is the reason why failover clusters are deployed. High availability means that there is no system downtime or there is minimal accepted system downtime. This is different from system uptime. A system can be up and running but it may not be available. Hyper-V hosts in modern datacenters run many virtual machines, depending on the underlying hardware resources. Each of these systems is very important to the consumer. Let's say that a Hyper-V hosts malfunctions at some bank, and let's say that this host, hosts several critical systems and one of them may be the ATM system. If this happens, the users won't be able to use the ATMs. This is where Virtual Machine High Availability comes into picture. It is achieved through the implementation of failover cluster. A failover cluster ensures that when a node of the cluster becomes unavailable, all of the virtual machines on that node will be safely migrated to another node of the same cluster. Users can even set rules to specify to which host the virtual machines failover should go. Migration is also useful when some maintenance tasks should be done on some of the nodes of the cluster. 
The node can safely be shut down and all of the virtual machines, or at least the most critical, will be migrated to another host. Configuring Hyper-V Replica Enterprises tend to increase their system availability and deliver end user services. There are various ways how this can be done, such as making your virtual machines highly available, disaster recovery methods, and back up of critical systems. In case of system malfunction or disasters, the IT department needs to react fast, in order to minimize system downtime. Disaster recovery methods are valuable to the enterprise. This is why it is imperative that the IT department implements them. When these methods are built in the existing platform that the enterprise uses and it is easy to configure and maintain, then you have a winning combination. This is a suitable scenario for Hyper-V Replica to step up. It is easy to configure and maintain, and it is integrated with the Hyper-V 3.0, which comes with Windows Server 2012. This is why Hyper-V Replica is becoming more attractive to the IT departments when it comes to disaster recovery methods. In this article, we will learn what are the Hyper-V Replica prerequisites and configuration steps for Hyper-V Replica in different deployment scenarios. Because Hyper-V Replica can be used with failover clusters, we will learn how to configure a failover cluster with Windows Server 2012. And we will introduce a new concept for virtual machine file storage called SMB. Hyper-V Replica requirements Before we can start with the implementation of Hyper-V Replica, we have to be sure we have met all the prerequisites. In order to implement Hyper-V Replica, we have to install Windows Server 2012 on our physical machines. Windows Server 2012 is a must, because Hyper-V Replica is functionality available only with that version of Windows Server. Next, you have to install Hyper-V on each of the physical machines. Hyper-V Replica is a built-in feature of Hyper-V 3.0 that comes with Windows Server 2012. If you plan to deploy Hyper-V on non-domain servers, you don't require an Active Directory Domain. If you want to implement a failover cluster on your premise, then you must have Active Directory Domain. In addition, if you want your replication traffic to be encrypted, you can use self-signed certificates from local servers or import a certificate generated from a Certificate Authority (CA). This is a server running Active Directory Certificate Services, which is a Windows Server Role that should be installed on a separate server. Certificates from such CAs are imported to Hyper-V Replica-enabled hosts and associated with Hyper-V Replica to encrypt traffic generated from a primary site to a replica site. A primary site is the production site of your company, and a replica site is a site which is not a part of the production site and it is where all the replication data will be stored. If we have checked and cleared all of these prerequisites, then we are ready to start with the deployment of Hyper-V Replica. Virtual machine replication in Failover Cluster environment Hyper-V Replica can be used with Failover Clusters, whether they reside in the primary or in the replica site. You can have the following deployment scenarios: Hyper-V host to a Failover Cluster Failover Cluster to a Failover Cluster Failover Cluster to a Hyper-V node Hyper-V Replica configuration when Failover Clusters are used is done with the Failover Cluster Management console. 
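For standalone (non-clustered) hosts, the same configuration can also be scripted with the Hyper-V PowerShell module that ships with Windows Server 2012. The sketch below is illustrative only: the host name, VM name, and storage path are assumptions, it uses Kerberos (HTTP) authentication on the default port, and certificate-based authentication would use different parameters.

# On the replica server: accept incoming replication over Kerberos (HTTP, port 80 by default)
# Allowing replication from any server requires a default storage location (path is an example)
Set-VMReplicationServer -ReplicationEnabled $true -AllowedAuthenticationType Kerberos -ReplicationAllowedFromAnyServer $true -DefaultStorageLocation 'D:\ReplicaVMs'

# On the primary server: enable replication for one VM and send the initial copy
Enable-VMReplication -VMName 'SRV-APP01' -ReplicaServerName 'replica01.ad.demo.com' -ReplicaServerPort 80 -AuthenticationType Kerberos
Start-VMInitialReplication -VMName 'SRV-APP01'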
For replication to take place, the Hyper-V Replica Broker role must be installed on the Failover Clusters, whether they are in primary or replica sites. The Hyper-V Replica Broker role is installed like any other Failover Cluster roles. Failover scenarios In Hyper-V Replica there are three failover scenarios: Test failover Planned failover Unplanned failover Test failover As the name says, this is only used for testing purposes, such as health validation and Hyper-V Replica functionality. When test failover is performed, there is no downtime on the systems in the production environment. Test failover is done at the replica site. When test failover is in progress, a new virtual machine is created which is a copy of the virtual machine for which you are performing the test failover. It is easily distinguished because the new virtual machine has Test added to the name. It is safe for the Test Virtual Machine to be started because there is no network adapter on it. So no one can access it. It serves only for testing purposes. You can log in on it and check the application consistency. When you have finished testing, right-click on the virtual machine and choose Stop Test Failover, and then the Test virtual machine is deleted. Planned failover Planned failover is the safest and the only type that should be performed. Planned failover is usually done when Hyper-V hosts have to be shut down for various reasons such as transport or maintenance. This is similar to Live Migration. You make a planned failover so that you don't lose virtual machine availability. The first thing you have to do is check whether the replication process for the virtual machine is healthy. To do this, you have to start the Hyper-V Management console in the primary site. Choose the virtual machine, and then at the bottom, click on the Replication tab. If the replication health status is Healthy, then it is fine to do the planned failover. If the health status doesn't show Healthy, then you need to do some maintenance until it says Healthy. Unplanned failovers Unplanned failover is used only as a last resort. It always results in data loss because any data that has not been replicated is lost during the failover. Although planned failover is done at the primary site, the unplanned failover is done at the replica site. When performing unplanned failover, the replica virtual machine is started. At that moment Hyper-V checks to see if the primary virtual machine is on. If it is on, then the failover process is stopped. If the primary virtual machine is off, then the failover process is continued and the replica virtual machine becomes the primary virtual machine. What is virtualization? Virtualization is a concept in IT that has its root back in 1960 when mainframes were used. In recent years, virtualization became more available because of different user-friendly tools, such as Microsoft Hyper-V, were introduced to customers. These tools allow the administrator to configure and administer a virtualized environment easily. Virtualization is a concept where a hypervisor, which is a type of middleware, is deployed on a physical device. This hypervisor allows the administrator to deploy many virtual servers that will execute its workload on that same physical machine. In other words, you get many virtual servers on one physical device. This concept gives better utilization of resources and thus it is cost effective. 
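These three scenarios map onto the Hyper-V PowerShell failover cmdlets, so they can be driven from a script as well as from Hyper-V Manager. The following sketch is an outline of the cmdlet flow rather than a runbook; the VM name is hypothetical and it assumes replication is already in a Healthy state.

# Test failover (run on the replica server): creates a copy VM with "Test" in its name
Start-VMFailover -VMName 'SRV-APP01' -AsTest
Stop-VMFailover -VMName 'SRV-APP01'      # removes the test VM when testing is finished

# Planned failover: shut down the primary VM, then prepare it on the primary server...
Stop-VM -Name 'SRV-APP01'
Start-VMFailover -VMName 'SRV-APP01' -Prepare
# ...and complete it on the replica server, reversing the replication direction
Start-VMFailover -VMName 'SRV-APP01'
Set-VMReplication -VMName 'SRV-APP01' -Reverse
Start-VM -Name 'SRV-APP01'

# Unplanned failover (replica server, last resort): optionally pick a recovery point first
$point = Get-VMSnapshot -VMName 'SRV-APP01' -SnapshotType Replica |
         Sort-Object CreationTime | Select-Object -Last 1
Start-VMFailover -VMRecoverySnapshot $point
Complete-VMFailover -VMName 'SRV-APP01'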
Hyper-V 3.0 features With the introduction of Windows Server 2008 R2, two new concepts regarding virtual machine high availability were introduced. Virtual machine high availability is a concept that allows the virtual machine to execute its workload with minimum downtime. The idea is to have a mechanism that will transfer the execution of the virtual machine to another physical server in case of node malfunctioning. In Windows Server 2008 R2, a virtual machine can be live migrated to another Hyper-V host. There is also quick migration, which allows multiple migrations from one host to another host. In Windows Server 2012, there are new features regarding Virtual Machine Mobility. Not only can you live migrate a virtual machine but you can also migrate all of its associated fi les, including the virtual machine disks to another location. Both mechanisms improve high availability. Live migration is a functionality that allows you to transfer the execution of a virtual machine to another server with no downtime. Previous versions of Windows Server lacked disaster recovery mechanisms. Disaster recovery mechanism is any tool that allows the user to configure policy that will minimize the downtime of systems in case of disasters. That is why, with the introduction of Windows Server 2012, Hyper-V Replica is installed together with Hyper-V and can be used in clustered and in non-clustered environments. Windows Failover Clustering is a Windows feature that is installed from the Add Roles and Features Wizard from Server Manager. It makes the server ready to be joined to a failover cluster. Hyper-V Replica gives enterprises great value, because it is an easy to implement and configure a Business Continuity and Disaster Recovery (BCDR) solution. It is suitable for Hyper-V virtualized environments because it is built in the Hyper-V role of Windows Server 2012. The outcome of this is for virtual machines running at one site called primary site to be easily replicated to another backup site called replica site, in case of disasters. The replication between the sites is done over an IP network, so it can be done in LAN environments or across WAN link. This BCDR solution provides efficient and periodical replication. In case of disaster it allows the production servers to be failed over to a replica server. This is very important for critical systems because it reduces downtime of those systems. It also allows the Hyper-V administrator to restore virtual machines to a specific point in time regarding recovery history of a certain virtual machine. Security considerations Restricting access to Hyper-V is very important. You want only authorized users to have access to the management console of Hyper-V. When Hyper-V is installed, a local security group on the server is created. It is named Hyper-V Administrators. Every user that is member of this group can access and configure Hyper-V settings. Another way to increase security of Hyper-V is to change the default port numbers of Hyper-V Authentication. By default, Kerberos uses port number 80, and Certificate Authentication uses port number 443. Certificated also encrypts the traffic generated from primary to replica site. And at last, you can create a list of authorized servers from which replication traffic will be received. Summary There are new concepts and useful features that make the IT administrators' life easier. Windows Server 2012 is designed for enterprises that want to deploy modern datacenters with state-of-the-art capabilities. 
The new user interface, the simplified configuration, and all of the built-in features are what make Windows Server 2012 appealing to IT administrators. Resources for Article: Further resources on this subject: Dynamically enable a control (Become an expert) [Article] Choosing the right flavor of Debian (Simple) [Article] So, what is Microsoft © Hyper-V server 2008 R2? [Article]

Managing Pools for Desktops

Packt
07 Oct 2015
14 min read
In this article by Andrew Alloway, the author of VMware Horizon View High Availability, we will review strategies for providing High Availability for various types of VMware Horizon View desktop pools. (For more resources related to this topic, see here.) Overview of pools VMware Horizon View provides administrators with the ability to automatically provision and manage pools of desktops. As part of our provisioning of desktops, we must also consider how we will continue service for the individual users in the event of a host or storage failure. Generally High Availability requirements fall into two categories for each pool. We can have stateless desktops where the user information is not stored on the VM between sessions and Stateful desktops where the user information is stored on the desktop between sessions. Stateless desktops In a stateless configuration, we are not required to store data on the Virtual Desktops between user sessions. This allows us to use Local Storage instead of shared storage for our HA strategies as we can tolerate host failures without the use of shared disk. We can achieve a stateless desktop configuration using roaming profiles and/or View Persona profiles. This can greatly reduce cost and maintenance requirements for View Deployments. Stateless desktops are typical in the following environments: Task Workers: A group of workers where the tasks are well known and they all share a common set of core applications. Task workers can use roaming profiles to maintain data between user sessions. In a multi shift environment, having stateless desktops means we only need to provision as many desktops that will be used consecutively. Task Worker setups are typically found in the following scenarios: Data entry Call centers Finance, Accounts Payables, Accounts Receivables Classrooms (in some situations) Laboratories Healthcare terminals Kiosk Users: A group of users that do not login. Logins are typically automatic or without credentials. Kiosk users are typically untrusted users. Kiosk VMs should be locked down and restricted to only the core applications that need to be run. Kiosks are typically refreshed after logoff or at scheduled times after hours. Kiosks can be found in situations such as the following: Airline Check-In stations Library Terminals Classrooms (in some situations) Customer service terminals Customer Self-Serve Digital Signage Stateful desktops Statefull desktops have some advantages from reduced iops and higher disk performance due to the ability to choose thick provisioning. Stateful desktops are desktops that require user data to be stored on the VM or Desktop Host between user sessions. These machines typically are required by users who will extensively customize their desktop in non-trivial ways, require complex or unique applications that are not shared by a large group or require the ability to modify their VM Stateful Desktops are typically used for the following situations: Users who require the ability to modify the installed applications Developers IT Administrators Unique or specialized users Department Managers VIP staff/managers Dedicated pools Dedicated pools are View Desktops provisioned using thin or thick provisioning. Dedicated pools are typically used for Stateful Desktop deployments. Each desktop can be provisioned with a dedicated persistent disk used for storing the User Profile and data. Once assigned a desktop that user will always log into the same desktop ensuring that their profile is kept constant. 
During OS refresh, balances and recomposes the OS disk is reverted back to the base image. Dedicated Pools with persistent disks offer simplicity for managing desktops as minimal profile management takes place. It is all managed by the View Composer/View Connection Server. It also ensures that applications that store profile data will almost always be able to retrieve the profile data on the next login. Meaning that the administrator doesn't have to track down applications that incorrectly store data outside the roaming profile folder. HA considerations for dedicated pools Dedicated pools unfortunately have very difficult HA requirements. Storing the user profile with the VM means that the VM has to be stored and maintained in an HA aware fashion. This almost always results in a shared disk solution being required for Dedicated Pools. In the event of a host outage other hosts connected to the same storage can start up the VM. For shared storage, we can use NFS, iSCSI, Fibre Channel, or VMware Virtual SAN storage. Consider investing in storage systems with primary and backup controllers as we will be dependent on the disk controllers being always available. Backups are also a must with this system as there is very little recovery options in the event of a storage array failure. Floating Pools Floating pools are a pool of desktops where any user can be assigned to any desktop in the pool upon login. Floating pools are generally used for stateless desktop deployments. Floating pools can be used with roaming profiles or View Persona to provide a consistent user experience on login. Since floating pools are treated as disposable VMs, we open up additional options for HA. Floating pools are given 2 local disks, the OS disk which is a replica from the assigned base VM, and the Disposable Disk where the page file, hibernation file, and temp drive are located. When Floating pools are refreshed, recomposed or rebalanced, all changes made to the desktop by the users are lost. This is due to the Disposable Disk being discarded between refreshes and the OS disk being reverted back to the Base Image. As such any session information such as Profile, Temp directory, and software changes are lost between refreshes. Refreshes can be scheduled to occure after logoff, after every X days or can be manually refreshed. HA considerations for floating pools Floating pools can be protected in several ways depending on the environment. Since floating pools can be deployed on local storage we can protect against a host failure by provisioning the Floating Pool VMs on multiple separate hosts. In the event of a host failure the remaining Virtual Desktops will be used to log users in. If there is free capacity in the cluster more Virtual Desktops will be provisioned on other hosts. For environments with shared storage Floating Pools can still be deployed on the shared storage but it is a good idea to have a secondary shared storage device or a highly available storage device. In the event of a storage failure the VMs can be started on the secondary storage device. VMware Virtual SAN is inherently HA safe and there is no need for a secondary datastore when using Virtual SAN. Many floating pool environments will utilize a profile management solution such as Roaming Profiles or View Persona Management. In these situations it is essential to setup a redundant storage location for View Profiles and or Roaming Profiles. 
In practice a Windows DFS share is a convenient and easy way to guard profiles against loss in the event of an outage. DFS can be configured to replicate changes made to the profile in real time between hosts. If the Windows DFS server is provisioned as VMs on shared storage make sure to create a DRS rule to separate the VMs onto different hosts. Where possible DFS servers should be stored on separate disk arrays to ensure they data is preserved in the event of the Disk Array, or Storage Processor failure. For more information regarding Windows DFS you can visit the link below https://technet.microsoft.com/en-us/library/jj127250.aspx Manual pools Manual pools are custom dedicated desktops for each user. A VM is manually built for each user who is using the manual pool. Manual Pools are Stateful pools that generally do not utilize profile management technologies such as View Persona or Roaming Profiles. Like Dedicated pools once a user is assigned to a VM they will always log into the same VM. As such HA requirements for manual pools are very similar to dedicated pools. Manual desktops can be configured in almost any maner desired by the administrator. There are no requirements for more than one disk to be attached to the Manual Pool desktop. Manual pools can also be configured to utilize physical hardware as the Desktop such as Blade Servers, Desktop Computers or even Laptops. In this situation there are limited high availability options without investing in exotic and expensive hardware. As best practice the physical hosts should be built with redundant power supplies, ECC RAM, mirrored hard disks pending budget and HA requirements. There should be a good backup strategy for managing physical hosts connected to the Manual Pools. HA considerations for manual pools Manual pools like dedicated pools have a difficult HA requirement. Storing the user profile with the VM means that the VM has to be stored and maintained in an HA aware fashion. This almost always results in a shared disk solution being required for Manual Pools. In the event of a host outage other hosts connected to the same storage can start up the VM. For shared storage, we can use NFS, iSCSI, Fibre Channel, or VMware VSAN storage. Consider investing in storage systems with primary and backup controllers as we will be dependent on the disk controllers being always available. Backups are also a must with this system as there is very little recovery options in the event of a storage array failure. VSAN deployments are inherently HA safe and are excellent candidates for Manual Pool storage. Manual pools given their static nature also have the option of using replication technology to backup the VMs onto another disk. You can use VMware vSphere Replication to do automatic replication or use a variety of storage replication solutions offered by storage and backup vendors. In some cases it may be possible to use fault tolerance on the Virtual Desktops for truly high availability. Note that this would limit the individual VMs to a single vCPU which may be undesirable. Remote Desktop services pools Remote Desktop Services Pools (RDS Pools) are pools where the remote session or application is hosted on a Windows Remote Desktop Server. The application or remote session is run under the users' credentials. Usually all the user data is stored locally on the Remote Desktop Server but can also be stored remotely using Roaming Profiles or View Persona Profiles. Folder Redirection to a central network location is also used with RDS Pools. 
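As noted above, if the DFS servers that hold the profile shares run as VMs on the same cluster, a DRS anti-affinity rule keeps them on different hosts. The following is a minimal PowerCLI sketch under stated assumptions: the cluster name 'Cluster01' and the VM names 'DFS01' and 'DFS02' are placeholders, and a Connect-VIServer session to vCenter is assumed to exist.

# Connect to vCenter first with Connect-VIServer, then create the VM anti-affinity rule
$dfsVms = Get-VM -Name 'DFS01', 'DFS02'
New-DrsRule -Cluster (Get-Cluster -Name 'Cluster01') -Name 'Separate-DFS-Servers' -KeepTogether $false -VM $dfsVms -Enabled $true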
Typical uses for Remote Desktop Services include migrating users off legacy RDS environments, hosting applications, and providing access to troublesome applications or applications with large memory footprints. The Windows Remote Desktop Server can be either a VM or a standalone physical host. It can be combined with Windows Clustering technology to provide scalability and high availability. You can also deploy a load balancer solution to manage connections between multiple Windows Remote Desktop Servers.

Remote Desktop Services pool HA considerations
Remote Desktop Services HA revolves around protecting individual RDS VMs or provisioning a cluster of RDS servers. When a single VM is deployed with RDS, it is generally best to use vSphere HA and clustering features to protect the VM. If the RDS resources are larger than practical for a VM, then we must focus on protecting the individual host or clustering multiple hosts.

When the Windows Remote Desktop Server is deployed as a VM, the following options are available:
- Protect the VM with VMware HA, using shared storage: This allows vCenter to fail over the VM to another host in the event of a host failure. vSphere will be responsible for starting the VM on another host. The VM will resume from a crashed state.
- Replicate the virtual machine to separate disks on separate hosts using VMware Virtual SAN: Same as above, but in this case the VM has been replicated to another host using Virtual SAN technology. The remote VM will be started up from a crashed state, using the last consistent hard drive image that was replicated.
- Use replication technologies such as vSphere Replication: The VM will be periodically synchronized to a remote host. In the event of a host failure, we can manually activate the remotely synchronized VM.
- Use a vendor's storage-level replication: In this case, we allow our storage vendor's replication technology to provide a redundant copy. This protects us in the event of a storage or host failure. Failover can be automated or manual; consult your storage vendor for more information.
- Protect the VM using backup technologies: This provides redundancy in the sense that we won't lose the VM if it fails. Unfortunately, you are at the mercy of your restore process to bring the VM back to life. The VM will resume from a crashed state. Always keep backups of production servers.

For RDS servers running on dedicated hardware, we could utilize the following:
- Redundant power supplies: Redundant power supplies will keep the server going while a PSU is being replaced or becomes defective. It is also a good idea to have two separate power sources for each power supply. Simple things like a faulty power bar or a tripped breaker could bring down the server if there are not two independent power sources.
- Uninterruptible Power Supply: Battery backups are always a must for production-level equipment. Make sure to scale the UPS to provide adequate power and duration for your environment.
- Redundant network interfaces: In rare circumstances, a NIC can go bad or a cable can be damaged. In this case, redundant NICs will prevent a server outage. Remember that to protect against a switch outage, we should plug the NICs into separate switches.
- Mirrored or redundant disks: Hard drives are one of the most common failure points in servers. Mirrored hard drives or RAID configurations are a must for production-level equipment.
- 2 or more hosts: Clustering physical servers will ensure that host failures won't cause downtime.
Consider multi site configurations for even more redundancy. Shared Strategies for VMs and Hardware: Provide High Availability to the RDS using Microsoft Network Load Balancer (NLB): Microsoft Network Load Balancer can provide load balancing to the RDS servers directy. In this situation the clients would connect to a single IP managed by the NLB which would randomly be assigned to a server. Provide High Availability using a load balancer to manage sessions between RDS servers: Using a hardware or software load balancer is can be used instead of Microsoft Network Load Balancers. Load Balancer vendors provide a high variety of capabilities and features for their load balancers. Consult your load balancer vendor for best practices. Use DNS Round Robin to alternate between RDS hosts: On of the most cost effective load balancing methods. It has the drawback of not being able to balance the load or to direct clients away from failed hosts. Updating DNS may delay adding new capacity to the cluster or delay removing a failed host from the cluster. Remote Desktop Connection Broker with High Availability: We can provide RDS failover using the Connection Broker feature of our RDS server. For more details see the link below. For more information regarding Remote Desktop Connection Broker with High Availability see: https://technet.microsoft.com/en-us/library/ff686148%28WS.10%29.aspx Here is an example topology using physical or virtual Microsoft RDS Servers. We use a load balancing technology for the View Connection Servers as described in the previous chapter. We then will connect to the RDS via either a load balancer, DNS round robin, or Cluster IP. Summary In this article, we covered the concept of stateful and stateless desktops and the consequences and techniques for supporting each in a highly available environment. Resources for Article: Further resources on this subject: Working with Virtual Machines[article] Storage Scalability[article] Upgrading VMware Virtual Infrastructure Setups [article]

Installing Virtual Desktop Agent – server OS and desktop OS

Packt
14 Jan 2014
3 min read
(For more resources related to this topic, see here.)

You need to allow your Windows master image to communicate with your XenDesktop infrastructure. You can accomplish this task by installing the Virtual Desktop Agent. In this latest release of the Citrix platform, VDA is available in three different versions: desktop operating systems, server operating systems, and Remote PC, a way to link an existing physical or virtual machine to your XenDesktop infrastructure.

Getting ready
You need to install and configure the described software with domain administrative credentials within both the desktop and server operating systems.

How to do it...
In the following section, we are going to explain the way to install and configure the three different types of Citrix Virtual Desktop Agents.

Installing VDA for a server OS machine
1. Connect to the server OS master image with domain administrative credentials.
2. Mount the Citrix XenDesktop 7.0 ISO on the server OS machine by right-clicking on it and selecting the Mount option.
3. Browse the mounted Citrix XenDesktop 7.0 DVD-ROM, and double-click on the AutoSelect.exe executable file. On the Welcome screen, click on the Start button to continue.
4. On the XenDesktop 7.0 menu, click on the Virtual Delivery Agent for Windows Server OS link, in the Prepare Machines and Images section.
5. In the Environment section, select Create a master image if you want to create a master image for the VDI architecture (MCS/PVS), or enable a direct connection to a physical or virtual server. After completing this step, click on Next.
6. In the Core Components section, select a valid location to install the agent, flag the Citrix Receiver component, and click on the Next button.
7. In the Delivery Controller section, select Do it manually from the drop-down list in order to manually configure Delivery Controller, type a valid controller FQDN, and click on the Add button. To verify that you have entered a valid address, click on the Test connection... button. To continue with the installation, click on Next.
8. In the Features section, flag the optimization options that you want to enable, and then click on Next to continue.
9. In the Firewall section, select the correct radio button to open the required firewall ports automatically if you're using the Windows Firewall, or manually if you've got a different firewall on board. After completing this action, click on the Next button.
10. If the options in the Summary screen are correct, click on the Install button to complete the installation procedure. In order to complete the procedure, you'll need to restart the server OS machine several times.
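If you need to repeat this installation across many master images, the same setup can also be scripted from the mounted media. The following PowerShell sketch is an assumption-laden example rather than the book's procedure: the XenDesktopVdaSetup.exe path, the /quiet, /components, /controllers, and /enable_hdx_ports switches, and the controller FQDN should all be verified against the Citrix documentation for your XenDesktop release before use.

# Assumed media path, switches, and controller name - verify against Citrix eDocs for your release
$setup = 'D:\x64\XenDesktop Setup\XenDesktopVdaSetup.exe'
$arguments = @(
    '/quiet',                              # unattended installation
    '/components', 'vda,plugins',          # VDA plus Citrix Receiver, as in the GUI steps above
    '/controllers', 'ddc01.ad.demo.com',   # hypothetical Delivery Controller FQDN
    '/enable_hdx_ports'                    # open the required Windows Firewall ports
)

# Run the installer and wait for it to finish; reboots are still required afterwards
Start-Process -FilePath $setup -ArgumentList $arguments -Wait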

Network Access Control Lists

Packt
27 Nov 2014
6 min read
In this article by Ryan Boud, author of Hyper-V Network Virtualization Cookbook, we will learn to lock down a VM for security access. (For more resources related to this topic, see here.) Locking down a VM for security access This article will show you how to apply ACLs to VMs to protect them from unauthorized access. Getting ready You will need to start two VMs in the Tenant A VM Network: in this case, Tenant A – VM 10, to test the gateway and as such should have IIS installed) and Tenant A – VM 11. How to do it... Perform the following steps to lock down a VM: In the VMM console, click on the Home tab in the ribbon bar and click on the PowerShell button. This will launch PowerShell with the VMM module already loaded and the console connected to the current VMM instance. To obtain the Virtual Subnet IDs for all subnets in the Tenant A VM Network, enter the following PowerShell: $VMNetworkName = "Tenant A" $VMNetwork = Get-SCVMNetwork | Where-Object -Property Name -EQ $VMNetworkName Get-SCVMSubnet -VMNetwork $VMNetwork | Select-Object VMNetwork,Name,SubnetVlans,VMSubnetID You will be presented with the list of subnets and the VMSubnetID for each. The VMSubnetID will used later in this article; in this case, the VMSubnetID is 4490741, as shown in the following screenshot: Your VMSubnet ID value may be different to the one obtained here; this is normal behavior. In the PowerShell Console, run the following PowerShell to get the IP addresses of Tenant A – VM 10 and Tenant A – VM 11: $VMs = @() $VMs += Get-SCVirtualMachine -Name "Tenant A - VM 10" $VMs += Get-SCVirtualMachine -Name "Tenant A - VM 11" ForEach($VM in $VMs){    Write-Output "$($VM.Name): $($VM.VirtualNetworkAdapters.IPv4Addresses)"    Write-Output "Host name: $($VM.HostName)" } You will be presented with the IPv4 addresses for the two VMs as shown in the following screenshot: Please leave this PowerShell console open. Your IP addresses and host names may differ from those shown here; this is normal behavior. In the VMM console, open the VMs and Services workspace and navigate to All Hosts | Hosts | hypvclus01. Right-click on Tenant A – VM 11, navigate to Connect or View, and then click on Connect via Console. Log in to the VM via the Remote Console. Open Internet Explorer and go to the URL http://10.0.0.14, where 10.0.0.14 is the IP address of Tenant A – VM 10, as we discussed in step 4. You will be greeted with default IIS page. This shows that there are currently no ACLs preventing Tenant A – VM 11 accessing Tenant A – VM 10 within Hyper-V or within the Windows Firewall. Open a PowerShell console on Tenant A – VM 11 and enter the following command: Ping 10.0.0.14 –t Here, 10.0.0.14 is the IP address of Tenant A – VM 10. This will run a continuous ping against Tenant A – VM10. In the PowerShell console left open in Step 4, enter the following PowerShell: Invoke-Command -ComputerName HYPVCH1.ad.demo.com - ScriptBlock{    Add-VMNetworkAdapterExtendedAcl -Action Deny -Direction      Inbound -VMName "Tenant A - VM 10" -Weight 1 -        IsolationID 4490741 } Here, HYPVCH1.ad.demo.com is the name of the host where Tenant A – VM 10 is running, as obtained in step 4 and the Isolation ID needs to be VMSubnetID as obtained in step 2. Please leave this PowerShell console open. When adding base rules such as a Deny All, it is suggested to apply a weight of 1 to allow other rules to override it if appropriate. Return to the PowerShell console left open on Tenant A – VM 11 in step 10. 
How it works...
Extended ACLs are applied as traffic ingresses and egresses the VM into and out of the Hyper-V switch. As the ACLs are VM-specific, they are stored in the VM's configuration file. This ensures that the ACLs move with the VM, maintaining continuity of the ACLs. For the complete range of options, it is advisable to review the TechNet article at http://technet.microsoft.com/en-us/library/dn464289.aspx.
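If you want to return Tenant A – VM 10 to its original state after testing, the rules created in this recipe can be removed with the same cmdlets used above. This is a minimal sketch, not part of the original recipe, assuming the directions and weights used in the steps above:

Invoke-Command -ComputerName HYPVCH1.ad.demo.com -ScriptBlock{
   # Remove the stateful port 80 rule and the two base Deny rules created in this article
   Remove-VMNetworkAdapterExtendedAcl -Direction Inbound -VMName "Tenant A - VM 10" -Weight 10
   Remove-VMNetworkAdapterExtendedAcl -Direction Inbound -VMName "Tenant A - VM 10" -Weight 1
   Remove-VMNetworkAdapterExtendedAcl -Direction Outbound -VMName "Tenant A - VM 10" -Weight 1
}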
Summary
In this article, we learned how to lock down a VM for security access.

Resources for Article:
Further resources on this subject:
High Availability Scenarios [Article]
Performance Testing and Load Balancing [Article]
Your first step towards Hyper-V Replica [Article]

Virtual Machine Design

Packt
21 May 2014
8 min read
(For more resources related to this topic, see here.)

Causes of virtual machine performance problems
In a perfect virtual infrastructure, you will never experience any performance problems and everything will work well within the budget that you allocated. But should problems arise in this perfect utopian datacenter you've designed, hopefully this section will help you to identify and resolve them more easily.

CPU performance issues
The following is a summary of some of the common CPU performance issues you may experience in your virtual infrastructure. While it is not an exhaustive list of every possible CPU problem, it can help guide you in the right direction to solve CPU-related performance issues:

High ready time: When your ready time is above 10 percent, this could indicate CPU contention and could be impacting the performance of any CPU-intensive applications. This is not a guarantee of a problem; less sensitive applications can still report high values and perform well within guidelines. CPU ready is reported in milliseconds; to convert it to a percentage, see KB 2002181 (a PowerCLI sketch of this conversion follows this list).
High costop time: The costop time will often correlate to contention in multi-vCPU virtual machines. Costop time exceeding 10 percent could cause challenges when vSphere tries to schedule all vCPUs of your multi-vCPU servers.
CPU limits: As discussed earlier, you will often experience performance problems if your virtual machine tries to use more resources than have been configured in your limits.
Host CPU saturation: When the vSphere host utilization runs above 80 percent, you may experience host saturation issues. This can introduce performance problems across the host as the CPU scheduler tries to assign resources to virtual machines.
Guest CPU saturation: This is experienced on high utilization of vCPU resources within the operating system of your virtual machines. This can be mitigated, if required, by adding additional vCPUs to improve the performance of the application.
Misconfigured affinity: Affinity is enabled by default in vSphere; however, if it is manually configured to assign a VM to a specific physical CPU, problems can be encountered. This is often experienced when creating a VM with affinity settings and then cloning the VM. VMware advises against manually configuring affinity.
Oversizing vCPUs: When assigning multiple vCPUs to a virtual machine, you want to ensure that the operating system is able to take advantage of the CPUs and threads, and that your applications can support them. The overhead associated with unused vCPUs can impact other applications and resource scheduling within the vSphere host.
Low guest usage: Sometimes poor performance combined with low CPU utilization helps identify the problem as being I/O or memory related. Underused CPU is often a good indicator that the bottleneck lies elsewhere, whether in other resources or in the configuration.
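Where that conversion is needed, the following is a minimal PowerCLI sketch of turning the cpu.ready.summation counter into a ready percentage. It assumes VMware PowerCLI and an existing Connect-VIServer session; the VM name is a placeholder:

# Hypothetical VM name - replace with one of your own
$vm = Get-VM -Name "AppServer01"
# cpu.ready.summation is the time (in ms) the VM spent waiting for a physical CPU in each sample;
# ready % = ready ms / (sample interval in ms) * 100, as described in KB 2002181.
Get-Stat -Entity $vm -Stat cpu.ready.summation -Realtime -MaxSamples 30 |
   Where-Object { $_.Instance -eq "" } |   # the empty instance is the aggregate across all vCPUs
   Select-Object Timestamp,
       @{ Name = "ReadyMs";      Expression = { $_.Value } },
       @{ Name = "ReadyPercent"; Expression = { [math]::Round($_.Value / ($_.IntervalSecs * 1000) * 100, 2) } }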
Memory performance issues
Additionally, the following is a summary of some common memory performance issues you may experience in your virtual infrastructure. Because of the way VMware vSphere handles memory management, there is a unique set of challenges in troubleshooting and resolving performance problems as they arise:

Host memory: Host memory is both a finite and very limited resource. While VMware vSphere incorporates some creative mechanisms to leverage and maximize the amount of available memory through features such as page sharing, memory management, and resource-allocation controls, several memory features will only take effect when the host is under stress.
Transparent page sharing: This is the method by which redundant copies of pages are eliminated. TPS, enabled by default, will break up regular pages into 4 KB chunks for better performance. When virtual machines use large physical pages (2 MB instead of 4 KB), vSphere will not attempt to apply TPS to them, as the likelihood of multiple 2 MB chunks being identical is much lower than for 4 KB chunks. This can cause a system to experience memory overcommit and performance problems; if memory stress is then experienced, vSphere may break these 2 MB chunks into 4 KB chunks to allow TPS to consolidate the pages.
Host memory consumed: When measuring utilization for capacity planning, the value of host memory consumed can often be deceiving as it does not always reflect the actual memory utilization. Instead, the active memory or memory demand should be used as a better guide of actual memory utilized, since features such as TPS mean that consumed memory can overstate what a VM really needs (a PowerCLI sketch of this comparison follows this list).
Memory over-allocation: Memory over-allocation will usually be fine for most applications in most environments. It is typically safe to over-allocate memory by 20 percent or more, especially with similar applications and operating systems; the more similarity there is between your applications and environment, the higher you can take that number.
Swap to disk: If you over-allocate your memory too aggressively, you may start to experience memory swapping to disk, which can result in performance problems if not caught early enough. In those circumstances, it is best to evaluate which guests are swapping to disk to help correct either the application or the infrastructure as appropriate.

For additional details on vSphere memory management and monitoring, see KB 2017642.
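As a rough way to compare active memory against consumed memory, and to spot ballooning or swapping, the following is a minimal PowerCLI sketch. It assumes an existing Connect-VIServer session; the VM name is a placeholder and the counters are the standard real-time memory statistics:

# Hypothetical VM name - replace with one of your own
$vm = Get-VM -Name "AppServer01"
# These counters are reported in KB; average the real-time samples and convert to MB
Get-Stat -Entity $vm -Stat mem.active.average, mem.consumed.average, mem.vmmemctl.average, mem.swapped.average -Realtime -MaxSamples 15 |
   Group-Object MetricId |
   ForEach-Object {
       [pscustomobject]@{
           Counter = $_.Name
           AvgMB   = [math]::Round((($_.Group | Measure-Object Value -Average).Average) / 1KB, 1)
       }
   }

A large gap between mem.consumed.average and mem.active.average is normal; sustained non-zero ballooning (mem.vmmemctl.average) or swapping (mem.swapped.average) is what usually deserves attention.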
Storage performance issues
When it comes to storage performance issues within your virtual machine infrastructure, there are a few areas you will want to pay particular attention to. Although most storage-related problems you are likely to experience depend on your backend infrastructure, the following points can help you identify whether the issue lies with the VM's storage or with the SAN itself (a PowerCLI sketch for spotting high-latency VMs follows this section):

Storage latency: Latency experienced at the storage level is usually expressed as a combination of the latency of the storage stack, guest operating system, VMkernel virtualization layer, and the physical hardware. Typically, if you experience slowness and are noticing high latencies, one or more aspects of your storage could be the cause.
Three layers of latency: ESXi and vCenter typically report on three primary latencies: Guest Average Latency (GAVG), Device Average Latency (DAVG), and Kernel Average Latency (KAVG).
Guest Average Latency (GAVG): This value is the total amount of latency that ESXi is able to detect. This is not to say that it is the total amount of latency being experienced, but simply the figure that ESXi reports. So if ESXi reports 5 ms of GAVG latency while a performance tool such as Perfmon identifies a storage latency of 50 ms, something within the guest operating system is incurring a 45 ms latency penalty. In circumstances such as these, you should investigate the VM and its operating system to troubleshoot.
Device Average Latency (DAVG): Device Average Latency tends to focus on the more physical side of things; for instance, whether the storage adapters, HBA, or interface is experiencing latency or communication issues with the backend storage array. Problems experienced here tend to fall more on the storage itself and are less easily troubleshooted within ESXi. Some exceptions are firmware or adapter drivers, which may introduce problems, or the queue depth of your HBA. More details on queue depth can be found in KB 1267.
Kernel Average Latency (KAVG): Kernel Average Latency is not a directly measured value; it is calculated as Total Latency - DAVG = KAVG, so when using this metric you should be wary of a few values. The typical value of KAVG should be zero; anything greater may be I/O moving through the kernel queue and can generally be dismissed. When these latencies are consistently at 2 ms or greater, this may indicate a storage performance issue, and your VMs, adapters, and queues should be reviewed for bottlenecks or problems.

The following are some KB articles that can help you further troubleshoot virtual machine storage:
Using esxtop to identify storage performance issues (KB 1008205)
Troubleshooting ESX/ESXi virtual machine performance issues (KB 2001003)
Testing virtual machine storage I/O performance for VMware ESX and ESXi (KB 1006821)
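To spot VMs whose overall storage latency is drifting above the thresholds discussed here, the following is a minimal PowerCLI sketch. It assumes an existing Connect-VIServer session, and the 20 ms threshold is only an illustrative value:

# disk.maxTotalLatency.latest reports the highest observed total latency (in ms) across a VM's disks
Get-VM |
   ForEach-Object {
       $samples = Get-Stat -Entity $_ -Stat disk.maxTotalLatency.latest -Realtime -MaxSamples 15 -ErrorAction SilentlyContinue
       if ($samples) {
           [pscustomobject]@{
               VM            = $_.Name
               PeakLatencyMs = ($samples | Measure-Object Value -Maximum).Maximum
           }
       }
   } |
   Where-Object { $_.PeakLatencyMs -gt 20 } |   # illustrative threshold - tune for your environment
   Sort-Object PeakLatencyMs -Descending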
Network performance issues
Lastly, when it comes to addressing network performance issues, there are a few areas you will want to consider. As with storage performance issues, a lot of these are often addressed by the backend networking infrastructure. However, there are a few items you will want to investigate within the virtual machines to ensure network reliability:

Networking error, IP already assigned to another adapter: This is a common problem experienced after V2V or P2V migrations, which results in ghosted network adapters. VMware KB 1179 guides you through the steps to remove these ghosted network adapters.
Speed or duplex mismatch within the OS: Left at the defaults, the virtual machine will use auto-negotiation to get maximum network performance; if configured down from that speed, this can introduce virtual machine limitations.
Choose the correct network adapter for your VM: Newer operating systems should support the VMXNET3 adapter, while some virtual machines, either legacy or upgraded from previous versions, may run older network adapter types. See KB 1001805 to help decide which adapters are correct for your usage.

The following are some KB articles that can help you further troubleshoot virtual machine networking:
Troubleshooting virtual machine network connection issues (KB 1003893)
Troubleshooting network performance issues in a vSphere environment (KB 1004097)

Summary
With this article, you should be able to inspect existing VMs while following design principles that will lead to correctly sized and deployed virtual machines. You should also have a better understanding of when your configuration is meeting your needs, and how to go about identifying performance problems associated with your VMs.

Resources for Article:
Further resources on this subject:
Introduction to vSphere Distributed switches [Article]
Network Virtualization and vSphere [Article]
Networking Performance Design [Article]


Upgrading from Previous Versions

Packt
23 Jun 2014
8 min read
(For more resources related to this topic, see here.)

This article is about guiding you through the requirements and steps necessary to upgrade your VMM 2008 R2 SP1 to VMM 2012 R2. There is no direct upgrade path from VMM 2008 R2 SP1 to VMM 2012 R2; you must first upgrade to VMM 2012 and then to VMM 2012 R2. VMM 2008 R2 SP1 -> VMM 2012 -> VMM 2012 SP1 -> VMM 2012 R2 is the correct upgrade path.

Upgrade notes:
VMM 2012 cannot be upgraded directly to VMM 2012 R2; upgrading it to VMM 2012 SP1 first is required
VMM 2012 can be installed on a Windows 2008 Server
VMM 2012 SP1 requires Windows 2012
VMM 2012 R2 requires a minimum of Windows 2012 (Windows 2012 R2 is recommended)
Windows 2012 hosts can be managed by VMM 2012 SP1
Windows 2012 R2 hosts require VMM 2012 R2
System Center App Controller versions must match the VMM version

To debug a VMM installation, the logs are located in %ProgramData%\VMMLogs, and you can use the CMTrace.exe tool to monitor the content of the files in real time, including SetupWizard.log and vmmServer.log.
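As a lightweight alternative to CMTrace.exe, the same logs can also be followed from PowerShell. This is a minimal sketch, assuming the default log location mentioned above; the file you tail may differ depending on which component is being installed:

# List the most recently written VMM log files
$logDir = Join-Path $env:ProgramData "VMMLogs"
Get-ChildItem $logDir -File | Sort-Object LastWriteTime -Descending | Select-Object -First 5 Name, LastWriteTime
# Follow the setup log in real time (press Ctrl+C to stop)
Get-Content -Path (Join-Path $logDir "SetupWizard.log") -Tail 50 -Wait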
VMM 2012 Architecture
VMM 2012 is a huge product upgrade, and there have been many improvements. This article only covers the VMM upgrade. If you have previous versions of other System Center family components installed in your environment, make sure you follow the correct upgrade and installation order. System Center 2012 R2 has some new components, and the installation order is critical. It is critical that you take the steps documented by Microsoft in Upgrade Sequencing for System Center 2012 R2 at http://go.microsoft.com/fwlink/?LinkId=328675 and use the following upgrade order:
1. Service Management Automation
2. Orchestrator
3. Service Manager
4. Data Protection Manager (DPM)
5. Operations Manager
6. Configuration Manager
7. Virtual Machine Manager (VMM)
8. App Controller
9. Service Provider Foundation
10. Windows Azure Pack for Windows Server
11. Service Bus Clouds
12. Windows Azure Pack
13. Service Reporting

Reviewing the upgrade options
This recipe will guide you through the upgrade options for VMM 2012 R2. Keep in mind that there is no direct upgrade path from VMM 2008 R2 to VMM 2012 R2.

How to do it...
Read through the following recommendations in order to upgrade your current VMM installation.

In-place upgrade from VMM 2008 R2 SP1 to VMM 2012
Use this method if your system meets the requirements for a VMM 2012 upgrade and you want to deploy it on the same server. The supported VMM version to upgrade from is VMM 2008 R2 SP1. If you need to upgrade VMM 2008 R2 to VMM 2008 R2 SP1, refer to http://go.microsoft.com/fwlink/?LinkID=197099. In addition, keep in mind that if you are running the SQL Server Express version, you will need to upgrade SQL Server to a fully supported version beforehand, as the Express version is not supported in VMM 2012. Once the system requirements are met and all of the prerequisites are installed, the upgrade process is straightforward. To follow the detailed recipe, refer to the Upgrading to VMM 2012 R2 recipe.

Upgrading from VMM 2008 R2 SP1 to VMM 2012 on a different computer
Sometimes, you may not be able to do an in-place upgrade to VMM 2012 or even to VMM 2012 SP1. In this case, it is recommended that you use the following instructions: uninstall the current VMM, retaining the database, and then restore the database on a supported version of SQL Server. Next, install the VMM 2012 prerequisites on a new server (or on the same server, as long as it meets the hardware and OS requirements). Finally, install VMM 2012, providing the retained database information in the Database configuration dialog, and the VMM setup will upgrade the database. When the install process is finished, upgrade the Hyper-V hosts with the latest VMM agents. A figure in the original recipe illustrates this upgrade process from VMM 2008 R2 SP1 to VMM 2012.

When performing an upgrade from VMM 2008 R2 SP1 with a local VMM database to a different server, the encrypted data will not be preserved, as the encryption keys are stored locally. The same rule applies when upgrading from VMM 2012 to VMM 2012 SP1, and from VMM 2012 SP1 to VMM 2012 R2, if you are not using Distributed Key Management (DKM) in VMM 2012.

Upgrading from VMM 2012 to VMM 2012 SP1
To upgrade to VMM 2012 SP1, you should already have VMM 2012 up and running. VMM 2012 SP1 requires Windows Server 2012 and Windows ADK 8.0. If planning an in-place upgrade, back up the VMM database; uninstall VMM 2012 and App Controller (if applicable), retaining the database; perform an OS upgrade; and then install VMM 2012 SP1 and App Controller.

Upgrading from VMM 2012 SP1 to VMM 2012 R2
To upgrade to VMM 2012 R2, you should already have VMM 2012 SP1 up and running. VMM 2012 R2 requires a minimum of Windows Server 2012 as the OS (Windows 2012 R2 is recommended) and Windows ADK 8.1. If planning an in-place upgrade, back up the VMM database; uninstall VMM 2012 SP1 and App Controller (if applicable), retaining the database; perform an OS upgrade; and then install VMM 2012 R2 and App Controller.
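Both in-place paths above begin with backing up the VMM database, which can be done directly from the VMM command shell. This is a minimal sketch, assuming the VMM PowerShell module is available; the server name and backup path are placeholders for your own values:

# Connect to the VMM management server and back up its database before starting the upgrade
$vmmServer = Get-SCVMMServer -ComputerName "vmm01.ad.demo.com"   # hypothetical VMM server name
Backup-SCVMMServer -VMMServer $vmmServer -Path "D:\Backups\VMM"  # hypothetical path with enough free space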
Some more planning considerations are as follows:

Virtual Server 2005 R2: VMM 2012 no longer supports Microsoft Virtual Server 2005 R2. If you have Virtual Server 2005 R2 or an unsupported ESXi version running and have not removed these hosts before the upgrade, they will be removed automatically during the upgrade process.
VMware ESX and vCenter: For VMM 2012, the supported versions of VMware are ESXi 3.5 to ESXi 4.1 and vCenter 4.1. For VMM 2012 SP1/R2, the supported VMware versions are ESXi 4.1 to ESXi 5.1, and vCenter 4.1 to 5.0.
SQL Server Express: This is not supported since VMM 2012. A full version is required.
Performance and Resource Optimization (PRO): The PRO configurations are not retained during an upgrade to VMM 2012. If you have an Operations Manager (SCOM) integration configured, it will be removed during the upgrade process. Once the upgrade process is finished, you can integrate SCOM with VMM again.
Library server: Since VMM 2012, VMM does not support a library server on Windows Server 2003. If you have one running and continue with the upgrade, you will not be able to use it. To use the same library server in VMM 2012, move it to a server running a supported OS before starting the upgrade.
Choosing a service account and DKM settings during an upgrade: During an upgrade to VMM 2012, on the Configure service account and distributed key management page of the setup, you are required to create a VMM service account (preferably a domain account) and choose whether you want to use DKM to store the encryption keys in Active Directory (AD).
Make sure to log on with the same account that was used during the VMM 2008 R2 installation: this is necessary because, in some situations after the upgrade, the encrypted data (for example, the passwords in the templates) may not be available, depending on the selected VMM service account, and you will be required to re-enter it manually.
For the service account, you can use either the Local System account or a domain account: a domain account is the recommended option, and when deploying a highly available VMM management server, it is the only option available. Note that DKM is not available in versions prior to VMM 2012.

Upgrading to a highly available VMM 2012: If you are thinking of upgrading to a highly available (HA) VMM, consider the following:
Failover Cluster: You must deploy the failover cluster before starting the upgrade.
VMM database: You cannot deploy the SQL Server for the VMM database on highly available VMM management servers. If you plan on upgrading the current VMM server to an HA VMM, you need to first move the database to another server. As a best practice, it is recommended that you keep the SQL Server cluster separate from the VMM cluster.
Library server: In a production or highly available environment, you need to consider making all of the VMM components highly available as well, not only the VMM management server. After upgrading to an HA VMM management server, it is recommended, as a best practice, that you relocate the VMM library to a clustered file server. In order to keep the custom fields and properties of the saved VMs, deploy those VMs to a host and save them to a new VMM 2012 library.
VMM Self-Service Portal: This is not supported since VMM 2012 SP1. It is recommended that you install System Center App Controller instead.

How it works...
There are two methods to upgrade to VMM 2012 from VMM 2008 R2 SP1: an in-place upgrade and an upgrade to another server. Before starting, review the initial steps and the VMM 2012 prerequisites, and perform a full backup of the VMM database. Uninstall VMM 2008 R2 SP1 (retaining the data) and restore the VMM database to another SQL Server running a supported version. During the installation, point to that database in order to have it upgraded. After the upgrade is finished, upgrade the host agents. VMM will be rolled back automatically in the event of a failure during the upgrade process and reverted to its original installation/configuration.

There's more...
The names of the VMM services have changed in VMM 2012. If you have any applications or scripts that refer to these service names, update them accordingly as shown in the following table (a quick Get-Service check follows the table):

VMM version                | VMM service display name                     | Service name
2008 R2 SP1                | Virtual Machine Manager                      | vmmservice
2008 R2 SP1                | Virtual Machine Manager Agent                | vmmagent
2012 / 2012 SP1 / 2012 R2  | System Center Virtual Machine Manager        | scvmmservice
2012 / 2012 SP1 / 2012 R2  | System Center Virtual Machine Manager Agent  | scvmmagent
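The service names in the table can be verified directly on the VMM management server. This is a minimal sketch; which set of names returns results depends on whether the upgrade has been performed yet:

# New names used by VMM 2012 / 2012 SP1 / 2012 R2
Get-Service -Name scvmmservice, scvmmagent -ErrorAction SilentlyContinue | Select-Object Name, DisplayName, Status
# Old names used by VMM 2008 R2 SP1
Get-Service -Name vmmservice, vmmagent -ErrorAction SilentlyContinue | Select-Object Name, DisplayName, Status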
See also
To move the file-based resources (for example, ISO images, scripts, and VHD/VHDX files), refer to http://technet.microsoft.com/en-us/library/hh406929
To move the virtual machine templates, refer to Exporting and Importing Service Templates in VMM at http://go.microsoft.com/fwlink/p/?LinkID=212431