
How-To Tutorials - Virtualization

115 Articles

Creating a VM using VirtualBox - Ubuntu Linux

Packt
15 Apr 2010
12 min read
Packt are due to launch a new Open Source brand, under which future VirtualBox titles will be published. For more information on that launch, look here. In this two-part article by Alfonso V. Romero, author of VirtualBox 3.1: Beginner's Guide, you will:

  • Create your first virtual machine in VirtualBox, using Ubuntu Linux
  • Learn about your virtual machine's basic configuration
  • Download and install Ubuntu Linux on your virtual machine
  • Learn how to start and stop your virtual machine

So let's get on with it...

Getting started

In this article, you'll need to download the Ubuntu Linux Desktop Live CD. It's a fairly big download (approximately 700 MB), so I'd recommend starting it as soon as possible. With a 4 Mbps Internet connection, you'd need approximately one hour to download a copy of Ubuntu Linux Desktop. That's why I decided to change the order of some sections in this article. After all, action is what we're looking for, right?

Downloading the Ubuntu Linux Live CD

After finishing the exercise in this section, you can jump straight into the next section while waiting for your Ubuntu Live CD to download. That way, you'll spend less time waiting, and your virtual machine will be ready for some action!

Time for action – downloading the Ubuntu Desktop Live CD

In the following exercise, I'll show you how to download the Ubuntu Linux Desktop Edition from the official Ubuntu website.

1. Open your web browser, and type http://www.ubuntu.com in the address bar. The Ubuntu home page will appear. Click on the Ubuntu Desktop Download link to continue.
2. The Download Ubuntu page will show up next. Ubuntu's most recent version will be selected by default (Ubuntu Desktop 9.10, at the time of this writing). Select a location near you, and click on the Begin download button to continue. The Ubuntu download page serves the 32-bit version automatically. If you want the 64-bit version of Ubuntu 9.10, you'll need to click on the Alternative download options link below the Download locations list box. The exercises in this article use the 32-bit version.
3. You'll be taken to the download page, and after a few seconds your download will start automatically. Select the Save File option in your browser, and click on OK to continue.
4. Now you just have to wait until the download process finishes.

What just happened?

I think the exercise pretty much explains itself, so I just want to add that you can also order a free Ubuntu CD or buy one from Ubuntu's website. Just click on the Get Ubuntu link on the front page and follow the instructions. I ordered mine while writing this article to see how long it takes to arrive at my front door. I hope it arrives before I finish the book!

Have a go hero – doing more with the thing

You can try downloading other Ubuntu versions, such as Ubuntu 8.04 (Hardy Heron). Just click on the Alternative download options link below the Download locations list box, and explore the other options available for download.

Creating your Ubuntu Linux VM

Now that you've installed VirtualBox, it's time to learn how to work with it. I've used other virtualization products such as VMware besides VirtualBox, and in my humble opinion, the user interface in VirtualBox is a delight to work with.

Time for action – creating a virtual machine

At last you have the chance to use Windows and Linux side by side! This is one of the best features VirtualBox has to offer when you want the best of both worlds!
1. Open VirtualBox, and click on the New button (or press Ctrl+N) to create a new virtual machine. The Welcome to the New Virtual Machine Wizard! dialog will show up. Click on Next to continue.
2. Type UbuntuVB in the Name field, select Linux as the Operating System and Ubuntu as the Version in the VM Name and OS Type dialog. Click on Next to continue.
3. You can leave the default 384 MB value in the Memory dialog or choose a greater amount of RAM, depending on your hardware resources. Click on Next to continue.
4. Leave the default values in the Virtual Hard Disk dialog, and click on Next twice to enter the Create New Virtual Disk wizard. Leave the default Dynamically Expanding Storage option in the Hard Disk Storage Type dialog, and click on Next to continue.
5. Leave the default values chosen by VirtualBox for your Ubuntu Linux machine in the Virtual Disk Location and Size dialog (UbuntuVB and 8.00 GB), and click on Next to continue.
6. A Summary dialog will appear, showing all the parameters you chose for your new virtual hard disk. Click on Finish to exit the Create New Virtual Hard Disk wizard.
7. Another Summary dialog will appear to show the parameters you chose for your new virtual machine. Click on Finish to exit the wizard and return to the VirtualBox main window. Your new UbuntuVB virtual machine will appear on the VirtualBox main screen, showing all its parameters in the Details tab.

What just happened?

Now your Ubuntu Linux virtual machine is ready to run! In this exercise, you created a virtual machine with all the parameters required for a typical Ubuntu Linux distribution. The first parameter to configure was the base memory, or RAM. As you saw in step 3, the recommended size for a typical Ubuntu Linux installation is 384 MB.

Exercise caution when selecting the RAM size for your virtual machine. The golden rule of thumb dictates that if you have less than 1 GB of RAM, you shouldn't assign more than half of your physical RAM to a virtual machine, because you'll get into trouble: your host PC (the one that runs VirtualBox) will start to behave erratically and could crash! With 2 GB, for example, you can easily assign 1 GB to your virtual machine and 1 GB to your physical PC without any problems. Or you could even have three virtual machines with 512 MB of RAM each and leave 512 MB for your host PC. The best way to find the right combination of RAM is to experiment yourself and to know the minimum RAM requirements for your host and guest operating systems.

Don't assume that assigning lots of RAM to your virtual machine will increase its performance. If, for example, you have 2 GB of RAM on your host and you assign 1 GB to an Ubuntu virtual machine, it's very unlikely you'll see a bigger performance gain than if you had assigned only 512 MB to the same virtual machine. On the contrary, your host PC may work better if you assign only 512 MB to the Ubuntu virtual machine, because it can use some of the extra RAM for disk caching instead of resorting to the physical hard disk. However, it all depends on the guest operating system you plan to use and what applications you need to run.
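If you'd rather script the process than click through the wizard, VirtualBox ships with the VBoxManage command-line tool. Here is a minimal sketch that mirrors the wizard defaults above; it assumes VBoxManage is on your PATH, and the exact syntax may vary between VirtualBox releases (check VBoxManage --help):

# Create and register a VM with the Ubuntu OS type
VBoxManage createvm --name UbuntuVB --ostype Ubuntu --register

# Assign the recommended 384 MB of base memory
VBoxManage modifyvm UbuntuVB --memory 384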
If you look closely at the Base Memory Size setting in the Memory dialog when creating a virtual machine, you'll notice three memory areas below the slider control: the green area indicates the range of memory you can safely assign to your virtual machine; the yellow area indicates a risky range that you can choose, but with no guarantee that your virtual machine and your host will run without problems; and the red area indicates the range your virtual machine can't use. It's wise to stick with the default values when creating a new virtual machine. Later on, if you need to run a memory-intensive application, you can add more RAM through the Settings button, as we'll see in the following section.

Another setting to consider (besides memory) when creating a virtual machine is the virtual hard drive. Basically, a virtual hard disk is represented as a special file on your host computer's physical hard disk, commonly known as a disk image file. Your host computer sees it as a large file on its filesystem, but your virtual machine sees it as a real hard disk connected to it. Virtual hard drives are completely independent from each other, so there is no risk of accidental overwriting, and they can't grow larger than the free space available on your real computer's physical hard drive. This is the most common way to handle virtual storage in VirtualBox; later on, we'll see more details about disk image files and the different formats available.

VirtualBox assigns a default value based on the guest operating system you plan to install in your virtual machine. For Ubuntu Linux, the default value is 8 GB. That's enough space to experiment with Ubuntu and learn to use it, but if you really want to do some serious work (desktop publishing or movie production, for example), you should consider assigning your virtual machine more of your hard disk space. You can even get two hard drives on your physical machine, and assign one to your host system and the other to your virtual machine! Or you can create a new virtual hard disk image and add it as if it were a second hard drive.

Before going to the next section, I'd like to talk about the two virtual hard disk storage types available in VirtualBox: dynamically expanding storage and fixed-size storage. The Hard Disk Storage Type dialog you saw in step 4 of the previous exercise contains a brief description of both types. At first, the dynamically expanding option might seem more attractive because you don't see the space reduction on your hard drive immediately. When using dynamically expanding storage, VirtualBox needs to grow the disk image continuously, which can mean a slight decrease in speed compared to a fixed-size disk, but most of the time this is unnoticeable. In any case, once the virtual hard disk is fully expanded, the differences between the two types of storage disappear. Most people agree that a dynamically expanding disk is a better choice than a fixed one, since it doesn't take up unnecessary space on your host's hard disk until needed. Personally, when experimenting with a new virtual machine, I use the dynamically expanding option, but when doing real work, I like to set apart my virtual machine's hard disk space from the beginning, so I choose the fixed-size storage option in those cases.
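The two storage types map directly onto VBoxManage options as well. A short sketch, with the same caveat that option names may differ slightly between VirtualBox releases (sizes are given in MB):

# Dynamically expanding 8 GB disk image (the wizard's default)
VBoxManage createhd --filename UbuntuVB.vdi --size 8192

# Fixed-size variant: all 8 GB are allocated up front, so creation takes longer
VBoxManage createhd --filename UbuntuVB-fixed.vdi --size 8192 --variant Fixed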
Have a go hero – experimenting with memory and hard disk storage types

When creating a virtual machine, you can specify the amount of RAM to assign to it instead of using the default values suggested by VirtualBox. Create another virtual machine named UbuntuVB2, and try to assign it all of the available RAM. You won't be able to continue until you select a lower value, because the Next button will be grayed out, which means you're in the red memory range. Now move the slider back until the Next button is active again; you'll probably be in the yellow memory range. See if your virtual machine can start with that amount of memory and if you can use both your host PC and your VM without any problems. If you encounter difficulties, keep moving the slider back until all problems disappear.

Once you're done experimenting with the memory setting, recreate the UbuntuVB2 virtual machine with the same settings as the one you created in the previous exercise, but this time use a fixed-size hard drive. Just take into account that since VirtualBox must prepare all the storage space at once, this process may take a long time depending on the storage size you selected and the performance of your physical hard disk. Then go and try out different storage sizes with both types of disks: dynamically expanding and fixed-size.

Configuring basic settings for your Ubuntu Linux VM

All right, you created your Ubuntu virtual machine and downloaded a copy of the Ubuntu Desktop Live CD. Can you start installing Ubuntu now? I know you'll hate me, but nope, you can't. We need to tell your virtual machine where to boot the Live CD from, as if we were using a real PC. Follow me, and I'll show you the basic configuration settings for your VM, so you can start the Ubuntu installation ASAP!

Time for action – basic configuration for your VM

In this exercise you'll learn how to adjust some settings for your virtual machine, so you can install Ubuntu Linux on it.

1. Open VirtualBox, select your UbuntuVB virtual machine, and click on the Settings button. The UbuntuVB – Settings dialog will appear, showing all the settings in the General tab.
2. Click on the Storage category in the list in the left panel. Then select the Empty slot located just below the UbuntuVB.vdi hard disk image, under the IDE Controller element inside the Storage Tree panel, and click on the Invoke Virtual Media Manager button.
3. The Virtual Media Manager dialog will appear next. Click on the Add button to add the Ubuntu Linux Live CD ISO image.
4. The Select a CD/DVD-ROM disk image file dialog will show up next. Navigate to the directory where you downloaded the Ubuntu Desktop ISO image, select it, and click on the Open button to continue.
5. The Ubuntu Desktop ISO image will appear selected in the CD/DVD Images tab of the Virtual Media Manager dialog. Click on the Select button to attach the Ubuntu ISO image to your virtual machine's CD/DVD drive.
6. The Ubuntu ISO image file will appear selected in the ISO Image File setting of the UbuntuVB – Settings dialog. Click on OK to continue. Now you're ready to start your virtual machine and install Ubuntu!
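The same attachment can be scripted with VBoxManage. A minimal sketch, assuming the ISO filename below and the storagectl/storageattach syntax introduced around VirtualBox 3.1 (adjust names and paths to your setup):

# Add an IDE controller, attach the virtual disk, and mount the Ubuntu ISO
VBoxManage storagectl UbuntuVB --name "IDE Controller" --add ide
VBoxManage storageattach UbuntuVB --storagectl "IDE Controller" --port 0 --device 0 --type hdd --medium UbuntuVB.vdi
VBoxManage storageattach UbuntuVB --storagectl "IDE Controller" --port 1 --device 0 --type dvddrive --medium ubuntu-9.10-desktop-i386.iso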


Extending Oracle VM Management

Packt
08 Oct 2009
3 min read
The following topics were covered in the first part of this article series (Oracle VM Management):

  • Getting started with the Oracle VM Manager
  • Managing Servers and Server Pools

Let's continue from where we left off in the previous part of the article.

Managing VM Servers and Repositories

There must be at least one physical server in the Server Pool that we have created. There are many things you can do with the VM Servers in the Server Pool, such as changing the configuration, role, or function of a server, restarting it, shutting it down, monitoring its performance, or even deleting it.

Server Pools are elastic and can adapt flexibly to increases or decreases in workload demand. It is possible to expand the pool with additional Oracle VM Servers, and also to transfer workloads or VMs to the VM Servers most capable of handling them by dedicating available resources such as CPU, RAM, storage, and network capacity to the VMs. There is also the possibility of adding more Utility Servers to strengthen the capacity of the Server Pool, letting the Server Pool Master handle the workload by assigning an available server to carry out each task. There can only be one Server Pool Master.

However, there are basic tasks to perform before we can add extra servers to the resource pool, such as identifying them by their IP address and checking whether they are available to serve as an Oracle VM Server or Server Pool Master. We will also need the Oracle VM Agent password to add them to the server farm. Let's move on and start managing the servers. In this section, we will cover the following:

  • How to add a Server
  • Editing Server information
  • Restarting, shutting down, and deleting Servers

How to add a Server

In order to add Utility Servers or Oracle VM Servers to the Oracle VM environment, we will need to carry out the following actions:

1. Click on the Add Server link on the Server page.
2. Search for and select a Server Pool, and then click Next.
3. Enter the necessary information for the Oracle VM parameters.
4. Confirm the information (after testing the connection, obviously) and you are done. Ensure that the Oracle VM Servers are unique while registering, in order to avoid any duplication of IP accounts.

Editing Server information

In order to update information on an existing Oracle VM Server, click on Edit. We can alternatively click on the General Information tab. To monitor the performance of the Oracle VM Server, we can click on the Monitor tab, where we get real-time access to CPU, memory, and storage usage.


Creating Custom Reports and Notifications for vSphere

Packt
24 Mar 2015
34 min read
In this article by Philip Sellers, the author of PowerCLI Cookbook, you will cover the following topics:

  • Getting alerts from a vSphere environment
  • Basics of formatting output from PowerShell objects
  • Sending output to CSV and HTML
  • Reporting VM objects created during a predefined time period from VI Events object
  • Setting custom properties to add useful context to your virtual machines
  • Using PowerShell native capabilities to schedule scripts

(For more resources related to this topic, see here.)

This article is all about leveraging the information available to you in PowerCLI. As much as any other topic, figuring out how to tap into the data that PowerCLI offers is as important as understanding the cmdlets and syntax of the language. However, once you obtain your data, you will need to alter its formatting and how it's returned in order to use it. PowerShell, and by extension PowerCLI, offers a broad set of ways to control the formatting and display of information returned by its cmdlets and data objects. You will explore all of these topics in this article.

Getting alerts from a vSphere environment

Discovering the data available to you is the most difficult thing you will learn and adopt in PowerCLI after learning the initial cmdlets and syntax. There is a large amount of data available to you through PowerCLI, but there are techniques to extract that data in a way you can use. The Get-Member cmdlet is a great tool for discovering the properties you can work with. Sometimes, just listing the data returned by a cmdlet is enough; however, when a property contains other objects, Get-Member can provide context, for example letting you know that the Alarm property is a Managed Object Reference (MoRef) data type. As your returned objects have properties that contain other objects, you can have multiple layers of data available to expose using PowerShell dot notation ($variable.property.property). The ExtensionData property found on most objects holds a lot of data and objects related to the primary data.

Sometimes, the data found in a property is an object identifier that doesn't mean much to an administrator but represents an object in vSphere. In these cases, the Get-View cmdlet can take that identifier and return human-readable data. This topic will walk you through methods of accessing data and converting it to usable, human-readable form wherever needed, so that you can leverage it in scripts.

To explore these methods, we will take a look at vSphere's built-in alert system. While PowerCLI has native cmdlets to report on the defined alarm states and actions, it doesn't have a native cmdlet to retrieve the triggered alarms on a particular object. To do this, you must get the datacenter, VMhost, VM, and other objects directly and look at data from the ExtensionData property.

Getting ready

To begin this topic, you will need a PowerCLI window and an active connection to vCenter. You should also check the vSphere Web Client or the vSphere Windows Client to see whether you have any active alarms and to know what to expect. If you do not have any active VM alarms, you can simulate an alarm condition using a utility such as HeavyLoad. For more information on generating an alarm, see the There's more... section of this topic.

How to do it…

In order to access data and convert it to usable, human-readable data, perform the following steps:

1. The first step is to retrieve all of the VMs on the system. A simple Get-VM cmdlet will return all VMs on the vCenter you're connected to.
2. Within the VM object returned by Get-VM, one of the properties is ExtensionData. This property is an object that contains many additional properties and objects. One of those properties is TriggeredAlarmState:

Get-VM | Where {$_.ExtensionData.TriggeredAlarmState -ne $null}

3. To dig into TriggeredAlarmState further, take the output of the previous cmdlet and store it in a variable. This will allow you to enumerate the properties without having to wait for the Get-VM cmdlet to run. Add a Select -First 1 cmdlet to the command string so that only a single object is returned. This will help you look inside without having to deal with multiple VMs in the variable:

$alarms = Get-VM | Where {$_.ExtensionData.TriggeredAlarmState -ne $null} | Select -First 1

4. Now that you have extracted an alarm, how do you get useful data about what type of alarm it is and which vSphere object has a problem? In this case, you have VM objects, since you used Get-VM to find the alarms. To see what is in the TriggeredAlarmState property, output the contents of TriggeredAlarmState and pipe it to Get-Member or its shortcut, GM:

$alarms.ExtensionData.TriggeredAlarmState | GM

5. List the data in the $alarms variable without the Get-Member cmdlet appended and view the data in a real alarm. The data returned tells you the time when the alarm was triggered, the OverallStatus property (the severity of the alarm), and whether the alarm has been acknowledged by an administrator, who acknowledged it, and at what time. You will see that the Entity property contains a reference to a virtual machine. You can use the Get-View cmdlet on a reference to a VM, in this case the Entity property, to return the virtual machine name and other properties. You will also see that Alarm is referred to in a similar way, and you can extract usable information from it using Get-View as well:

Get-View $alarms.ExtensionData.TriggeredAlarmState.Entity
Get-View $alarms.ExtensionData.TriggeredAlarmState.Alarm

6. You can see how the output from these two views differs. The Entity view provides the name of the VM. You don't really need this data, since the top-level object contains the VM name, but it's good to understand how to use Get-View with an entity. On the other hand, the data returned by the Alarm view does not show the name or type of the alarm, but it does include an Info property. Since this is the most likely property to hold additional information, you should list its contents. To do so, enclose the Get-View cmdlet in parentheses and then use dot notation to access the Info property:

(Get-View $alarms.ExtensionData.TriggeredAlarmState.Alarm).Info

In the output from the Info property, you can see the type of alarm; in this example it is a Virtual Machine CPU usage alarm. Your alarm can be different, but it should appear similar to this.

7. After retrieving PowerShell objects that contain the data you need, the easiest way to return the data is to use calculated expressions. Since the Get-VM cmdlet was the source for all lookup data, you will need to use this object with the calculated expressions to display the data. To do this, append a Select statement after the Get-VM and Where statements. Notice that you use the same Get-View statement, except that you change your variable to $_, which is the current object being passed into Select:

Get-VM | Where {$_.ExtensionData.TriggeredAlarmState -ne $null} | Select Name, @{N="AlarmName";E={(Get-View $_.ExtensionData.TriggeredAlarmState.Alarm).Info.Name}}, @{N="AlarmDescription";E={(Get-View $_.ExtensionData.TriggeredAlarmState.Alarm).Info.Description}}, @{N="TimeTriggered";E={$_.ExtensionData.TriggeredAlarmState.Time}}, @{N="AlarmOverallStatus";E={$_.ExtensionData.TriggeredAlarmState.OverallStatus}}

How it works…

When the data you really need is several levels below the top-level properties of a data object, you need to use calculated expressions to return it at the top level. There are other techniques where you build your own object with only the data you want returned, but in a large environment with thousands of objects in vSphere, the method in this topic will execute faster than looping through many objects to build a custom object. Calculated expressions are extremely powerful, since nearly anything can be done with expressions.

More than that, you explored techniques to discover the data you want. Data exploration can provide you with incredible new capabilities. The point is that you need to know where the data is and how to pull it back to the top level.

There's more…

It is likely that your test environment has no alarms, and in that case it might be up to you to create an alarm situation. One of the easiest to control and create is heavy CPU load, using a CPU load-testing tool. JAM Software makes a stress-testing tool named HeavyLoad. This utility can be loaded into any Windows VM on your test systems and can consume all of the CPU that the VM is configured with. To be safe, configure the VM with a single vCPU, and the utility will consume all of the available CPU. Once you install the utility, go to the Test Options menu, uncheck the Stress GPU option, and ensure that Stress CPU and Allocate Memory are checked. The utility also has shortcut buttons on the menu bar that let you set these options. Click on the Start button (which looks like a Play button) and the utility begins to stress the VM. For users who wish to run the same test on Linux, StressLinux is a great option. StressLinux is a minimal distribution designed to create high load on an operating system.

See also

You can read more about the HeavyLoad utility on the JAM Software page at http://www.jam-software.com/heavyload/
You can read more about StressLinux at http://www.stresslinux.org/sl/
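If you plan to run this report regularly, it can be handy to wrap the final pipeline in a small function. A minimal sketch; the function name Get-TriggeredAlarmReport is my own invention, not a PowerCLI cmdlet:

function Get-TriggeredAlarmReport {
    # Report every VM that currently carries triggered alarm state
    Get-VM | Where-Object { $_.ExtensionData.TriggeredAlarmState -ne $null } |
        Select-Object Name,
            @{N="AlarmName";E={(Get-View $_.ExtensionData.TriggeredAlarmState.Alarm).Info.Name}},
            @{N="TimeTriggered";E={$_.ExtensionData.TriggeredAlarmState.Time}},
            @{N="OverallStatus";E={$_.ExtensionData.TriggeredAlarmState.OverallStatus}}
}

# Usage: view on screen now, or pipe into the export cmdlets covered later in this article
Get-TriggeredAlarmReport | Format-Table -AutoSize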
How to do it…

In order to format the output from PowerShell objects, perform the following steps:

1. Run Add-PSSnapIn VMware.VimAutomation.Core in the PowerShell ISE to initialize a PowerCLI session and bring in the VMware cmdlets. Connect to your vCenter server.
2. Start with a simple object from a Get-VM cmdlet. The default output is in a table format. If you pipe the object to Format-Wide, it will change the default output into a multicolumn listing with a single property, just like running a dir /w command at the Windows Command Prompt. You can also use FW, an alias for Format-Wide:

Get-VM | Format-Wide
Get-VM | FW

3. If you take the same object and pipe it to Format-Table, or its alias FT, you will receive the same output as the default output for Get-VM:

Get-VM
Get-VM | Format-Table

4. However, as soon as you begin to select a different order of properties, the default formatting disappears. Select the same four properties and watch the formatting change:

Get-VM | Select Name, PowerState, NumCPU, MemoryGB | FT

5. To restore formatting to table output, you have a few choices. You can change the formatting of the data in the object using a Select statement and calculated expressions, or you can pass formatting through the Format-Table cmdlet. While setting formatting in the Select statement changes the underlying data, using Format-Table doesn't change the data, only its display. The formatting looks essentially like a calculated expression in a Select statement: you provide Label, Expression, and formatting commands:

Get-VM | Select * | FT Name, PowerState, NumCPU, @{Label="MemoryGB"; Expression={$_.MemoryGB}; FormatString="N2"; Alignment="left"}

6. If you have data in a number data type, you can convert it into a string using the ToString() method on the object. You can try this method on NumCPU:

Get-VM | Select * | FT Name, PowerState, @{Label="Num CPUs"; Expression={($_.NumCpu).ToString()}; Alignment="left"}, @{Label="MemoryGB"; Expression={$_.MemoryGB}; FormatString="N2"; Alignment="left"}

7. The other method is to format with the -f operator, which comes from .NET composite formatting. The structure of a format string is {<index>[,<alignment>][:<formatString>]}. The index selects which of the values passed to the operator is transformed. The alignment is a numeric value: a positive number right-aligns within that many characters, and a negative number left-aligns. The formatString part defines the format to apply. In this example, let's take a datastore and compute the percentage of free disk space. The format for percent is p:

Get-Datastore | Select Name, @{N="FreePercent";E={"{0:p}" -f ($_.FreeSpaceGB / $_.CapacityGB)}}

8. To make the FreePercent column 15 characters wide, change the format string to {0,15:p}:

Get-Datastore | Select Name, @{N="FreePercent";E={"{0,15:p}" -f ($_.FreeSpaceGB / $_.CapacityGB)}}
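The difference between formatting in a Select statement and in Format-Table is easy to demonstrate. A small sketch of both approaches (FreePercent is the calculated property defined above):

# Select rewrites the property: FreePercent is now a string, not a number
$ds = Get-Datastore | Select Name, @{N="FreePercent";E={"{0:p}" -f ($_.FreeSpaceGB / $_.CapacityGB)}}
($ds | Select -First 1).FreePercent.GetType().Name   # returns String; no further numeric formatting is possible

# Format-Table only changes the display; the underlying object keeps its numeric data
Get-Datastore | FT Name, @{Label="FreePercent"; Expression={$_.FreeSpaceGB / $_.CapacityGB}; FormatString="p"}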
How it works…

With the Format-Table, Format-List, and Format-Wide cmdlets, you can change the display of data coming from a PowerCLI object. These cmdlets apply basic transformations without changing the data in the object. This is important to note, because once the data is changed, it can prevent you from making further changes. For instance, in the percentage example, after transforming the FreePercent property it is stored as a string and no longer as a number, which means that you can't reformat it again. Applying a similar transformation from the Format-Table cmdlet would not alter your data. This doesn't matter when you're writing a one-liner, but in a more complex script or routine, where you might need to not only output the data but also reuse it, changing the data in the object is a big deal.

There's more…

This topic only begins to tap the full potential of PowerShell's native -f format operator. There are hundreds of blog posts about this topic, with use cases and examples of how to produce the formatting that you are looking for. The following link gives you more details about the operator and the formatting strings that you can use in your own code.

See also

For more information, refer to the PowerShell -f format operator page available at http://ss64.com/ps/syntax-f-operator.html

Sending output to CSV and HTML

Output on the screen is great, but there are many times when you need to share your results with other people. When sharing information, you want a format that is easy to view and interpret. You might also want a format that is easy to manipulate and change. Comma Separated Values (CSV) files allow recipients to take the output you generate and use it easily within spreadsheet software. This makes it simple to compare the results from vSphere against internal tracking databases or other systems to find differences. It can also be useful for comparing against service contracts for physical hosts, as an example.

HTML is a great choice for displaying information for reading, but not for manipulation. Since e-mails can be in HTML format, converting the output from PowerCLI (or PowerShell) into an e-mail is an easy way to assemble a report for other areas of the business. What's even better about these cmdlets is their ease of use. If you have a data object in PowerCLI, all you need to do is pipe it into the ConvertTo-CSV or ConvertTo-HTML cmdlets and you instantly get the formatted data. You might not be satisfied with the generated HTML alone, but like any other HTML, you can transform its look and formatting with CSS by adding a header.

In this topic, you will examine the conversion cmdlets with a simple set of Get- cmdlets. You will also take a look at trimming results using Select statements and formatting HTML results with CSS. This topic will pull a list of virtual machines and their basic properties to send to a manager, who can reconcile it against internal records or system monitoring. It will export a CSV file that will be attached to the e-mail, and you will use HTML to format a list in the body of the e-mail.

Getting ready

To begin this topic, you will need to use the PowerShell ISE.

How to do it…

In order to examine the conversion cmdlets using Get- cmdlets, trim results using Select statements, and format HTML results with CSS, perform the following steps:

1. Open the PowerShell ISE and run Add-PSSnapIn VMware.VimAutomation.Core to initialize a PowerCLI session within the ISE.
2. Again, you will use the Get-VM cmdlet as the base for this topic. The fields that we care about are the name of the VM, the number of CPUs, the amount of memory, and the description:

$VMs = Get-VM | Select Name, NumCPU, MemoryGB, Description

3. In addition to the top-level data, you also want to provide the IP address, hostname, and the operating system.
These are all available from the ExtensionData.Guest property:

$VMs = Get-VM | Select Name, NumCPU, MemoryGB, Description, @{N="Hostname";E={$_.ExtensionData.Guest.HostName}}, @{N="IP";E={$_.ExtensionData.Guest.IPAddress}}, @{N="OS";E={$_.ExtensionData.Guest.GuestFullName}}

4. The next step is to take this data and format it to be sent as an HTML e-mail. Converting the information to HTML is easy: pipe the variable you created into ConvertTo-HTML and store the result in a new variable. You will need to reuse the original data to convert it to a CSV file to attach:

$HTMLBody = $VMs | ConvertTo-HTML

5. If you were to output the contents of $HTMLBody, you would see that it is very plain, inheriting the defaults of the browser or e-mail program used to display it. To dress this up, define some basic CSS to add style for the <body>, <table>, <tr>, <td>, and <th> tags. You can add this by running the ConvertTo-HTML cmdlet again with the -PreContent parameter:

$css = "<style> body { font-family: Verdana, sans-serif; font-size: 14px; color: #666; background: #FFF; } table { width:100%; border-collapse:collapse; } table td, table th { border:1px solid #333; padding: 4px; } table th { text-align:left; padding: 4px; background-color:#BBB; color:#FFF; } </style>"
$HTMLBody = $VMs | ConvertTo-HTML -PreContent $css

6. It might also be nice to add the date and time generated to the end of the report. You can use the -PostContent parameter for this:

$HTMLBody = $VMs | ConvertTo-HTML -PreContent $css -PostContent "<div><strong>Generated:</strong> $(Get-Date)</div>"

7. Now you have the HTML body of your message. To take the same data from $VMs and save it to a CSV file, you will need a writable directory, and a good choice is your My Documents folder. You can obtain it using an environment lookup:

$tempdir = [environment]::getfolderpath("mydocuments")

8. Now that you have a temp directory, you can perform your export. Pipe $VMs to Export-CSV and specify the path and filename:

$VMs | Export-CSV $tempdir\VM_Inventory.csv

9. At this point, you are ready to assemble an e-mail and send it along with your attachment. Most of the cmdlets are straightforward. You set up a $msg variable that is a MailMessage object, create an Attachment object populated with your temporary filename, and then create an SMTP client with the server name:

$msg = New-Object Net.Mail.MailMessage
$attachment = New-Object Net.Mail.Attachment("$tempdir\VM_Inventory.csv")
$smtpServer = New-Object Net.Mail.SmtpClient("hostname")

10. Set the From, To, and Subject parameters of the message. All of these are set with dot notation on the $msg variable (the addresses here are placeholders):

$msg.From = "[email protected]"
$msg.To.Add("[email protected]")
$msg.Subject = "Weekly VM Report"

11. Set the body you created earlier, $HTMLBody, but run it through Out-String to convert any other data types to a pure string for e-mailing. This prevents an error where System.String[] appears in the output instead of your content:

$msg.Body = $HTMLBody | Out-String

12. Take the attachment and add it to the message:

$msg.Attachments.Add($attachment)

13. Set the message to HTML format; otherwise, the HTML will be sent as plain text and not displayed as an HTML message:

$msg.IsBodyHtml = $true

14. Finally, you are ready to send the message using the $smtpServer variable that contains the mail server object.
Pass the $msg variable to the server object using the Send method, and it transmits the message via SMTP to the mail server:

$smtpServer.Send($msg)

15. Don't forget to clean up the temporary CSV file you generated. To do this, use the PowerShell Remove-Item cmdlet, which removes the file from the filesystem. Add -Confirm:$false to suppress any prompts:

Remove-Item $tempdir\VM_Inventory.csv -Confirm:$false

How it works…

Most of this topic relies on native PowerShell and less on the PowerCLI portions of the language. This is the beauty of PowerCLI: since it is an extension of PowerShell, you lose none of the functions of PowerShell, a very powerful set of commands in its own right.

The ConvertTo-HTML cmdlet is very easy to use. It requires no parameters to produce HTML, but the HTML it produces isn't the most legible if you display it. However, a bit of CSS goes a long way toward improving the look of the output. Add some colors and style to the table and it becomes a really easy and quick way to format a mail message to be sent to a manager on a weekly basis. The Export-CSV cmdlet lets you take the data returned by a cmdlet and convert it into an editable format. You can place the file on a file share, or e-mail it along, as you did in this topic.

This topic took you step by step through the process of creating a mail message, formatting it in HTML, and making sure that it's relayed as an HTML message. You also looked at how to attach a file. To send a mail, you define a mail server as an object and store it in a variable for reuse. You create a message object, store it in a variable, and set all of the appropriate configuration on the message. For an attachment, you create a third object and define a file to be attached. That is set as a property on the message object, and then, finally, the message object is sent using the server object.

There's more…

ConvertTo-HTML is just one of four conversion cmdlets in PowerShell. ConvertTo-XML converts data objects into XML, and ConvertTo-JSON converts a data object into JSON, a format widely used by web applications. ConvertTo-CSV is identical to Export-CSV except that it doesn't save the content immediately to a file. If you have a use case to manipulate the CSV before saving it, such as stripping the double quotes or making other alterations to the contents, you can use ConvertTo-CSV and then save it to a file at a later point in your script.
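Assembled end to end, the whole recipe fits in one short script. A sketch, with placeholder server and address names (mail.example.com and the example.com addresses are assumptions, as is the reduced property list):

# Gather the inventory with guest details pulled up to the top level
$VMs = Get-VM | Select Name, NumCPU, MemoryGB, @{N="IP";E={$_.ExtensionData.Guest.IPAddress}}

# Build the HTML body and the CSV attachment
$css = "<style>table{border-collapse:collapse} td,th{border:1px solid #333;padding:4px}</style>"
$HTMLBody = $VMs | ConvertTo-HTML -PreContent $css | Out-String
$csv = Join-Path ([environment]::getfolderpath("mydocuments")) "VM_Inventory.csv"
$VMs | Export-CSV $csv -NoTypeInformation

# Compose and send the message, then clean up
$msg = New-Object Net.Mail.MailMessage("[email protected]", "[email protected]", "Weekly VM Report", $HTMLBody)
$msg.IsBodyHtml = $true
$msg.Attachments.Add((New-Object Net.Mail.Attachment($csv)))
(New-Object Net.Mail.SmtpClient("mail.example.com")).Send($msg)
Remove-Item $csv -Confirm:$false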
Reporting VM objects created during a predefined time period from VI Events object

An important auditing tool in your environment can be a report of when virtual machines were created, cloned, or deleted. Unlike snapshots, which store a created date on the snapshot, virtual machines don't have such a property associated with them. Instead, you have to rely on the events log in vSphere to tell you when virtual machines were created. PowerCLI has the Get-VIEvent cmdlet, which by default retrieves the last 1,000 events on the vCenter. The cmdlet can accept a parameter to include more than the last 1,000 events, and it also allows you to specify a start date, which lets you search within the past week or the past month. At a high level, this topic works the same in both PowerCLI and the vSphere SDK for Perl (VI Perl): both rely on getting the vSphere events and selecting the specific events that match your criteria.

Even though you are looking for VM creation events in this topic, you will see that the code can be easily adapted to look for many other types of events.

Getting ready

To begin this topic, you will need a PowerCLI window and an active connection to a vCenter server.

How to do it…

In order to report VM objects created during a predefined time period from the VI Events object, perform the following steps:

1. You will use the Get-VIEvent cmdlet to retrieve the VM creation events for this topic. To begin, get a list of the last 50 events from the vCenter host using the -MaxSamples parameter:

Get-VIEvent -MaxSamples 50

2. If you pipe the output from the preceding cmdlet to Get-Member, you will see that this cmdlet can return many different object types. The object type itself isn't exposed as a searchable property, but every object includes a GetType() method that returns its type, and inside the type there is a Name property. Create a calculated expression using GetType(), group on it, and you will get a usable list of the events you can search for. This list is also good for tracking the number of events your systems have encountered or generated:

Get-VIEvent -MaxSamples 2000 | Select @{N="Type";E={$_.GetType().Name}} | Group Type

3. In the grouped output, you will see VMClonedEvent, VmRemovedEvent, and VmCreatedEvent listed. All of these have to do with creating or removing virtual machines in vSphere. Since you are looking for created events, VMClonedEvent and VmCreatedEvent are the two needed for this script. Write a Where statement to return only these events. To do this, use a regular expression containing both event names together with the -match PowerShell comparison operator:

Get-VIEvent -MaxSamples 2000 | Where {$_.GetType().Name -match "(VmCreatedEvent|VmClonedEvent)"}

4. Next, select just the properties that you want in your output. To do this, add a Select statement, reusing the calculated expression from step 2. The VM name lives in the Vm property (a VMware.Vim.VmEventArgument object), so you can create a calculated expression to return it. To round out the output, include the FullFormattedMessage, CreatedTime, and UserName properties:

Get-VIEvent -MaxSamples 2000 | Where {$_.GetType().Name -match "(VmCreatedEvent|VmClonedEvent)"} | Select @{N="Type";E={$_.GetType().Name}}, @{N="VMName";E={$_.Vm.Name}}, FullFormattedMessage, CreatedTime, UserName

5. The last thing to do is go back and add a time frame to the Get-VIEvent cmdlet. You can do this by specifying the -Start parameter along with (Get-Date).AddMonths(-1) to return the last month's events:

Get-VIEvent -Start (Get-Date).AddMonths(-1) -MaxSamples 2000 | Where {$_.GetType().Name -match "(VmCreatedEvent|VmClonedEvent)"} | Select @{N="Type";E={$_.GetType().Name}}, @{N="VMName";E={$_.Vm.Name}}, FullFormattedMessage, CreatedTime, UserName
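The same pattern adapts to any other event type by changing only the match expression. For example, a sketch that reports VM removals from the last month instead:

# Deletion audit: who removed which VM, and when
Get-VIEvent -Start (Get-Date).AddMonths(-1) -MaxSamples 2000 |
    Where-Object { $_.GetType().Name -eq "VmRemovedEvent" } |
    Select-Object @{N="VMName";E={$_.Vm.Name}}, CreatedTime, UserName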
How it works…

The Get-VIEvent cmdlet drives the majority of this topic, but you have only scratched the surface of the information you can unearth with it. As the grouped output showed, there are many different types of events that can be reported, queried, and acted upon from the vCenter server. Once you discover which events you are looking for, it's a matter of scoping down the results with a Where statement. Last, you use calculated expressions to pull data that sits several levels deep in the returned data object.

One of the primary techniques employed here is a regular expression used to search for the types of events you were interested in: VmCreatedEvent and VmClonedEvent. By combining a regular expression with the -match operator, you were able to use a quick and very understandable bit of code to find more than one type of object you needed to return.

There's more…

Regular Expressions (RegEx) are a big topic on their own. These searches can match any pattern you can establish or, as in this topic, a set of defined values that you are searching for. RegEx is beyond the scope of this article, but it can be a big help anytime you have a pattern you need to search for and match or, perhaps more importantly, replace. You can use the -replace operator instead of -match not only to find things that match your pattern, but also to change them.

See also

For more information on Regular Expressions, refer to http://ss64.com/ps/syntax-regex.html
The PowerShell.com page Text and Regular Expressions is available at http://powershell.com/cs/blogs/ebookv2/archive/2012/03/20/chapter-13-text-and-regular-expressions.aspx

Setting custom properties to add useful context to your virtual machines

Building on the use case for the Get-VIEvent cmdlet, Alan Renouf of VMware's PowerCLI team has a useful script posted on his personal blog (refer to the See also section) that pulls the created date and the user who created a virtual machine and populates them into a custom attribute. This is a great use for a custom attribute on virtual machines, making useful information available that is not normally visible. This process needs to run often to pick up details for newly created virtual machines. Rather than looking at a specific VM and trying to find its creation date retroactively, as Alan's script does, in this topic you will take a different approach, building on the previous topic and populating the information from the creation events found. Maintenance in this form is easier: find the creation events for the last week, run the script weekly, and update the VMs with the data in the object, rather than looking for VMs with missing data and searching through all of the events.

This topic assumes that you are using a Windows system joined to AD on the same domain as your vCenter. It also assumes that you have loaded the Remote Server Administration Tools for Windows so that the Active Directory PowerShell modules are available. This is a separate download for Windows 7. The Active Directory Module for PowerShell can be enabled on Windows 7, Windows 8, Windows Server 2008, and Windows Server 2012 in the Programs and Features control panel under Turn Windows features on or off.

Getting ready

To begin this script, you will need the PowerShell ISE.

How to do it…

In order to set custom properties to add useful context to your virtual machines, perform the following steps:

1. Open the PowerShell ISE and run Add-PSSnapIn VMware.VimAutomation.Core to initialize a PowerCLI session within the ISE.
2. The first step is to create custom attributes in vCenter for CreatedBy and CreateDate:

New-CustomAttribute -TargetType VirtualMachine -Name CreatedBy
New-CustomAttribute -TargetType VirtualMachine -Name CreateDate

3. Before you begin scripting, run ImportSystemModules to bring in the Active Directory cmdlets that you will use later to look up the username and resolve it to a display name:

ImportSystemModules

4. Next, locate and pull out all of the creation events with the same code as in the Reporting VM objects created during a predefined time period from VI Events object topic. Assign the events to a variable for processing in a loop; note that the period changes to one week (7 days) instead of one month:

$Events = Get-VIEvent -Start (Get-Date).AddDays(-7) -MaxSamples 25000 | Where {$_.GetType().Name -match "(VmCreatedEvent|VmClonedEvent)"}

5. Begin a ForEach loop to pull the data and populate it into the custom attributes:

ForEach ($Event in $Events) {

6. The first thing to do in the loop is look up the VM referenced in the event's Vm parameter by name, using Get-VM:

$VM = Get-VM -Name $Event.Vm.Name

7. Next, take the CreatedTime parameter on the event and set it as a custom attribute on the VM using the Set-Annotation cmdlet:

$VM | Set-Annotation -CustomAttribute "CreateDate" -Value $Event.CreatedTime

8. Next, use the Username parameter to look up the display name of the user account that created the VM, using the Active Directory cmdlets. For these cmdlets to be available, your client system or server needs the Microsoft Remote Server Administration Tools (RSAT) installed. The data coming from $Event.Username is in DOMAIN\username format. You need just the username to perform a lookup with Get-ADUser, so split on the backslash and keep only the second item in the resulting array. After the lookup, the display name you want is in the Name property, which you can retrieve with dot notation:

$User = (($Event.UserName.Split("\"))[1])
$DisplayName = (Get-ADUser $User).Name

9. Set the display name as a custom attribute on the VM, again using the Set-Annotation cmdlet:

$VM | Set-Annotation -CustomAttribute "CreatedBy" -Value $DisplayName

10. Finally, close the ForEach loop:

} <# End ForEach #>

How it works…

This topic works by leveraging the Get-VIEvent cmdlet to search for events in the log from the last number of days. In larger environments, you might need to expand the -MaxSamples value well beyond the number in this example; there can be thousands of events per day. The script looks through the log, and the Where statement returns only the creation events. Once you have the object with all of the creation events, you can loop through it and pull out the username of the person who created each virtual machine and the time it was created. Then, you just need to populate the data into the custom attributes created.
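Assembled from the steps above into one runnable block, the loop looks like the following sketch (it assumes the CreatedBy and CreateDate attributes already exist and that RSAT is installed):

# Find creation and clone events from the last 7 days
$Events = Get-VIEvent -Start (Get-Date).AddDays(-7) -MaxSamples 25000 |
    Where-Object { $_.GetType().Name -match "(VmCreatedEvent|VmClonedEvent)" }

ForEach ($Event in $Events) {
    $VM = Get-VM -Name $Event.Vm.Name
    $VM | Set-Annotation -CustomAttribute "CreateDate" -Value $Event.CreatedTime
    # UserName arrives as DOMAIN\username; keep only the username for the AD lookup
    $User = ($Event.UserName.Split("\"))[1]
    $DisplayName = (Get-ADUser $User).Name
    $VM | Set-Annotation -CustomAttribute "CreatedBy" -Value $DisplayName
}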
There's more…

Combine this script with the next topic and you have a great solution for scheduling this routine to run on a daily basis. Running it daily would certainly cut down on the number of events you need to process to find and update the virtual machines that have been created. You should absolutely go and read Alan Renouf's blog post, on which this topic is based. The primary difference between this topic and the one Alan presents is the use of native Windows Active Directory PowerShell lookups here instead of the Quest Active Directory PowerShell cmdlets.

See also

Virtu-Al.net: Who created that VM? is available at http://www.virtu-al.net/2010/02/23/who-created-that-vm/

Using PowerShell native capabilities to schedule scripts

There is potentially a better and easier way to schedule your processes to run from PowerShell and PowerCLI, known as Scheduled Jobs. Scheduled Jobs were introduced in PowerShell 3.0 and distributed as part of the Windows Management Framework 3.0 and higher. While Scheduled Tasks can execute any Windows batch file or executable, Scheduled Jobs are specific to PowerShell and are used to generate and create background jobs that run once or on a specified schedule. Scheduled Jobs appear in the Windows Task Scheduler and can be managed with the scheduled job cmdlets of PowerShell; the one limitation is that the scheduled job cmdlets cannot manage ordinary scheduled tasks. These jobs are stored in the Microsoft\Windows\PowerShell\ScheduledJobs path of the Windows Task Scheduler, and you can see and edit them through the management console in Windows after creation.

What's even greater about Scheduled Jobs in PowerShell is that you are not forced to create a .ps1 file for every new job you need to run. If a PowerCLI one-liner provides all of the functionality you need, you can simply include it in the job creation cmdlet without ever saving it anywhere.

Getting ready

To begin this topic, you will need a PowerCLI window with an active connection to a vCenter server.

How to do it…

In order to schedule scripts using the native capabilities of PowerShell, perform the following steps:

1. If you are running PowerCLI on a system older than Windows 8 or Windows Server 2012, there's a chance that you are running PowerShell 2.0, and you will need to upgrade in order to use this. To check, inspect $PSVersionTable.PSVersion to see which version is installed on your system. If it is less than version 3.0, upgrade before continuing this topic.
2. Bring back a script you have already written: the script to find and remove snapshots over 30 days old:

Get-Snapshot -VM * | Where {$_.Created -LT (Get-Date).AddDays(-30)} | Remove-Snapshot -Confirm:$false

3. To schedule a new job, the first thing you need to think about is what triggers your job to run. To define a new trigger, use the New-JobTrigger cmdlet:

$WeeklySundayAt6AM = New-JobTrigger -Weekly -At "6:00 AM" -DaysOfWeek Sunday -WeeksInterval 1

4. Like scheduled tasks, a scheduled job can have options, including whether to wake the system to run:

$Options = New-ScheduledJobOption -WakeToRun -StartIfIdle -MultipleInstancePolicy Queue

5. Next, you will use the Register-ScheduledJob cmdlet. This cmdlet accepts a parameter named ScriptBlock, and this is where you specify the script that you have written. This method works best with one-liners, or scripts that execute as a single line of piped cmdlets. Since this is PowerCLI and not just PowerShell, you will need to add the VMware cmdlets and connect to vCenter at the beginning of the script block.
You also need to specify the -Trigger and -ScheduledJobOption parameters defined in the previous two steps:

Register-ScheduledJob -Name "Cleanup 30 Day Snapshots" -ScriptBlock { Add-PSSnapIn VMware.VimAutomation.Core; Connect-VIServer server; Get-Snapshot -VM * | Where {$_.Created -LT (Get-Date).AddDays(-30)} | Remove-Snapshot -Confirm:$false } -Trigger $WeeklySundayAt6AM -ScheduledJobOption $Options

6. You are not restricted to running a script block. If you have a routine in a .ps1 file, you can easily run it from a scheduled job too. For illustration, if you have a .ps1 file stored in C:\Scripts named 30DaySnaps.ps1, you can use the following cmdlet to register a job:

Register-ScheduledJob -Name "Cleanup 30 Day Snapshots" -FilePath C:\Scripts\30DaySnaps.ps1 -Trigger $WeeklySundayAt6AM -ScheduledJobOption $Options

7. Rather than defining the PowerShell inline in the job, a more maintainable method can be to write a module and then call its function from the scheduled job. One other requirement is that Single Sign-On should be configured so that Connect-VIServer works correctly in the script:

Register-ScheduledJob -Name "Cleanup 30 Day Snapshots" -ScriptBlock { Add-PSSnapIn VMware.VimAutomation.Core; Connect-VIServer server; Import-Module 30DaySnaps; Remove-30DaySnaps -VM * } -Trigger $WeeklySundayAt6AM -ScheduledJobOption $Options

How it works…

This topic leverages the scheduled jobs framework developed specifically for running PowerShell as scheduled tasks. It doesn't require you to configure all of the extra settings you have seen in previous examples of scheduled tasks; these are PowerShell-native cmdlets that know how to implement PowerShell on a schedule. One thing to keep in mind is that these jobs begin with a normal PowerShell session, one that knows nothing about PowerCLI by default. You will need to include Add-PSSnapIn VMware.VimAutomation.Core in each script block or .ps1 file that you use with a scheduled job.

There's more…

There is a full library of cmdlets to implement and maintain scheduled jobs. Set-ScheduledJob allows you to change the settings of a registered scheduled job on a Windows system. You can disable and enable scheduled jobs using the Disable-ScheduledJob and Enable-ScheduledJob cmdlets. This allows you to pause the execution of a job during maintenance, or for other reasons, without needing to remove and re-create the job. This is especially helpful since the script blocks live inside the job and are not saved in a separate .ps1 file. You can also configure remote scheduled jobs on other systems using the Invoke-Command PowerShell cmdlet; this concept is shown in examples on Microsoft TechNet in the documentation for the Register-ScheduledJob cmdlet.

In addition to scheduling new jobs, you can remove jobs using the Unregister-ScheduledJob cmdlet. This cmdlet requires one of three identifying properties to unschedule a job: you can pass -Name with a string, -ID with the number identifying the job, or an object reference to the scheduled job with -InputObject. You can combine this with the Get-ScheduledJob cmdlet to find a job and pass the object by pipeline.

See also

To read more about the Microsoft TechNet PSScheduledJob cmdlets, refer to http://technet.microsoft.com/en-us/library/hh849778.aspx

Summary

This article was all about leveraging information and data from PowerCLI, as well as how to format and display that information.
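Managing the job afterward uses the same cmdlet family. A short sketch of the typical lifecycle:

# Inspect the registered job and its trigger
Get-ScheduledJob -Name "Cleanup 30 Day Snapshots" | Get-JobTrigger

# Pause it during maintenance, then re-enable it
Disable-ScheduledJob -Name "Cleanup 30 Day Snapshots"
Enable-ScheduledJob -Name "Cleanup 30 Day Snapshots"

# Remove it entirely via the pipeline
Get-ScheduledJob -Name "Cleanup 30 Day Snapshots" | Unregister-ScheduledJob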
Resources for Article: Further resources on this subject: Introduction to vSphere Distributed switches [article] Creating an Image Profile by cloning an existing one [article] VMware View 5 Desktop Virtualization [article]

Backups in the VMware View Infrastructure

Packt
17 Sep 2014
18 min read
In this article by Chuck Mills and Ryan Cartwright, authors of the book VMware Horizon 6 Desktop Virtualization Solutions, we will study the backup options available in VMware View. The article also provides guidance on scheduling appropriate backups of a Horizon View environment.

(For more resources related to this topic, see here.)

While a single point of failure should not exist in the VMware View environment, it is still important to ensure regular backups are taken for a quick recovery when failures occur. Also, if a setting becomes corrupted or is changed, a backup could be used to restore to a previous point in time. The backup of the VMware View environment should be performed on a regular basis, in line with an organization's existing backup methodology. A VMware View environment contains both files and databases. The main backup points of a VMware View environment are as follows:

- VMware View Connection Server: ADAM database
- VMware View Security Server
- VMware View Composer Database
- Remote Desktop Service host servers
- Remote Desktop Service host templates and virtual machines
- Virtual desktop templates and parent VMs
- Virtual desktops: linked clones (stateless) and full clones (stateful)
- ThinApp repository
- Persona Management
- VMware vCenter
- Restoring the VMware View environment
- Business Continuity and Disaster Recovery

With a backup of all of the preceding components, the VMware View Server infrastructure can be recovered during a time of failure. To maximize the chances of success in a recovery environment, it is advised to take backups of the View ADAM database, View Composer, and vCenter database at the same time to avoid discrepancies. Backups can be scheduled and automated or can be manually executed; ideally, scheduled backups will be used to ensure that they are performed and completed regularly.

Proper design dictates that there should always be two or more View Connection Servers. As all View Connection Servers in the same replica pool contain the same configuration data, it is only necessary to back up one View Connection Server. This backup is typically configured for the first View Connection Server installed in standard mode in an environment.

VMware View Connection Server - ADAM Database backup

View Connection Server stores the View Connection Server configuration data in the View LDAP repository. View Composer stores the configuration data for linked clone desktops in the View Composer database. When you use View Administrator to perform backups, the Connection Server backs up the View LDAP configuration data and the View Composer database. Both sets of backup files will be stored in the same location. The LDAP data is exported in LDAP Data Interchange Format (LDIF). If you have multiple View Connection Servers in a replicated group, you only need to export data from one of the instances, as all replicated instances contain the same configuration data.

It is not good practice to rely on replicated instances of View Connection Server as your backup mechanism. When the Connection Server synchronizes data across the instances of Connection Server, any data lost on one instance might be lost in all the members of the group.

If the View Connection Server uses multiple vCenter Server instances and multiple View Composer services, then the View Connection Server will back up all the View Composer databases associated with the vCenter Server instances. View Connection Server backups are configured from the VMware View Admin console.
The backups dump the configuration files and the database information to a location on the View Connection Server. Then, the data must be backed up through normal mechanisms, such as a backup agent and a scheduled job. The procedure for a View Connection Server backup is as follows:

1. Schedule VMware View backup runs and exports to C:\View_Backup.
2. Use your third-party backup solution on the View Connection Server and have it back up the System State, Program Files, and C:\View_Backup folders that were created in step 1.

From within the View Admin console, there are three primary options that must be configured to back up the View Connection Server settings:

- Automatic backup frequency: This is the frequency at which backups are automatically taken. The recommendation is every day: as most server backups are performed daily, if the automatic View Connection Server backup is taken before the full backup of the Windows server, it will be included in the nightly backup. Adjust this as necessary.
- Backup time: This displays the time based on the automatic backup frequency (every day produces the 12 midnight time).
- Maximum number of backups: This is the maximum number of backups that can be stored on the View Connection Server; once the maximum number has been reached, backups will be rotated out based on age, with the oldest backup being replaced by the newest backup. The recommendation is 30, which will ensure that approximately one month of backups is retained on the server. Adjust this as necessary.
- Folder location: This is the location on the View Connection Server where the backups will be stored. Ensure that the third-party backup solution is backing up this location.

The following screenshot shows the Backup tab:

Performing a manual backup of the View database

Use the following steps to perform a manual backup of your View database:

1. Log in to the View Administrator console.
2. Expand the Catalog option under Inventory (on the left-hand side of the console).
3. Select the first pool and right-click on it.
4. Select Disable Provisioning, as shown in the following screenshot:
5. Continue to disable provisioning for each of the pools. This will assure that no new information will be added to the ADAM database.

After you disable provisioning for all the pools, there are two ways to perform the backup: the View Administrator console, or running a command using the command prompt.

The View Administrator console

Follow these steps to perform a backup:

1. Log in to the View Administrator console.
2. Expand View Configuration found under Inventory.
3. Select Servers, which displays all the servers found in your environment.
4. Select the Connection Servers tab.
5. Right-click on one of the Connection Servers and choose Backup Now, as shown in the following screenshot.
6. After the backup process is complete, enable provisioning to the pools.

Using the command prompt

You can export the ADAM database by executing a built-in export tool in the command prompt. Perform the following steps:

1. Connect directly to the View Connection Server with a remote desktop utility such as RDP.
2. Open a command prompt and use the cd command to navigate to C:\Program Files\VMware\VMware View\Server\tools\bin.
3. Execute the vdmexport.exe command and use the -f option to specify a location and filename, as shown in the following screenshot (for this example, C:\View_Backup is the location and vdmBackup.ldf is the filename):
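Because the export is a simple command-line call, it is also easy to wrap it for the scheduled job mentioned in step 1 of the backup procedure. A minimal PowerShell sketch, assuming the default install path and the C:\View_Backup folder used above:

$bin = 'C:\Program Files\VMware\VMware View\Server\tools\bin'
$out = 'C:\View_Backup\vdmBackup_{0:yyyyMMdd}.ldf' -f (Get-Date)
& "$bin\vdmexport.exe" -f $out

Time-stamping the filename keeps a rolling history of exports in the folder that the third-party backup solution already protects.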
Once a backup has been either automatically run or manually executed, there will be two types of files saved in the backup location:

- LDF files: These are the LDIF exports from the VMware View Connection Server ADAM database and store the configuration settings of the VMware View environment
- SVI files: These are the backups of the VMware View Composer database

The backup process of the View Connection Server is fairly straightforward. While the process is easy, it should not be overlooked.

Security Server considerations

Surprisingly, there is no option to back up the VMware View Security Server via the VMware View Admin console. For View Connection Servers, backup is configured by selecting the server, selecting Edit, and then clicking on Backup. Highlighting the View Security Server provides no such functionality. Instead, the security server should be backed up via normal third-party mechanisms. The installation directory is of primary concern, which is C:\Program Files\VMware\VMware View\Server by default. The .config file is in the ...\sslgateway\conf directory, and it includes the following settings:

- pcoipClientIPAddress: This is the public address used by the Security Server
- pcoipClientUDPPort: This is the port used for UDP traffic (the default is 4172)

In addition, the settings file is located in this directory, which includes settings such as the following:

- maxConnections: This is the maximum number of concurrent connections the View Security Server can have at one time (the default is 2000)
- serverID: This is the hostname used by the security server

In addition, custom certificates and logfiles are stored within the installation directory of the VMware View Security Server. Therefore, it is important to back up the data regularly if the logfile data is to be maintained (and is not being ingested into a larger enterprise logfile solution).
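Since there is no built-in backup for the Security Server, a scheduled archive of the installation directory is a reasonable stand-in where no backup agent is available. A minimal sketch, assuming PowerShell 5.0 or later for Compress-Archive and a D:\Backups destination chosen for illustration:

$src = 'C:\Program Files\VMware\VMware View\Server'
$dst = 'D:\Backups\SecurityServer_{0:yyyyMMdd}.zip' -f (Get-Date)
Compress-Archive -Path "$src\*" -DestinationPath $dst -Force

This captures the .config and settings files, custom certificates, and logfiles described above in a single archive.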
The View Composer database

The View Composer database used for linked clones is backed up using the following steps:

1. Log in to the View Administrator console.
2. Expand the Catalog option under Inventory (left-hand side of the console).
3. Select the first pool and right-click on it.
4. Select Disable Provisioning.
5. Connect directly to the server where the View Composer was installed, using a remote desktop utility such as RDP.
6. Stop the View Composer service, as shown in the following screenshot. This will prevent provisioning requests that would change the Composer database.
7. After the service is stopped, use the standard practice for backing up databases in the current environment.
8. Restart the Composer service after the backup completes.

Remote Desktop Service host servers

VMware View 6 uses virtual machines to deliver hosted applications and desktops. In some cases, tuning and optimization, or other customer-specific configurations to the environment or applications, may be built on the Remote Desktop Service (RDS) host. Use the Windows Server Backup tool or the current backup software deployed in your environment.

RDS Server host templates and virtual machines

The virtual machine templates and virtual machines are an important part of the Horizon View infrastructure and need protection in the event that the system needs to be recovered. Back up the RDS host templates when changes are made and the testing/validation is completed. The production RDS host machines should be backed up if they contain user data or any other elements that require protection at frequent intervals. Third-party backup solutions are used in this case.

Virtual desktop templates and parent VMs

Horizon View uses virtual machine templates to create the desktops in pools for full virtual machines, and uses parent VMs to create the desktops in a linked clone desktop pool. These virtual machine templates and the parent VMs are another important part of the View infrastructure that needs protection. These backups are a crucial part of being able to quickly restore the desktop pools and the RDS hosts in the event of data loss. While frequent changes occur for standard virtual machines, the virtual machine templates and parent VMs only need backing up after new changes have been made to the template and parent VM images. These backups should be readily available for rapid redeployment when required.

For environments that use full cloning as the provisioning technique for the vDesktops, the gold template should be backed up regularly. The gold template is the master vDesktop that all other vDesktops are cloned from. The VMware KB article, Backing up and restoring virtual machine templates using VMware APIs, covers the steps to both back up and restore a template. In short, most backup solutions will require that the gold template is converted from a template to a regular virtual machine, and it can then be backed up. You can find more information at http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2009395.

Backing up the parent VM can be tricky, as it is a virtual machine, often with many different point-in-time snapshots. The most common technique is to collapse the virtual machine snapshot tree at a given point-in-time snapshot, and then back up or copy the newly created virtual machine to a second datastore. By storing the parent VM on a redundant storage solution, it is quite unlikely that the parent VM will be lost. What's more likely is that a point-in-time snapshot of the parent VM may be created while it's in a nonfunctional or less-than-ideal state.

Virtual desktops

There are three types of virtual desktops in a Horizon View environment, which are as follows:

- Linked clone desktops
- Stateful desktops
- Stateless desktops

Linked clone desktops

Virtual desktops that are created by View Composer using the linked clone technology present special challenges with backup and restoration. In many cases, a linked clone desktop will also be considered a stateless desktop. The dynamic nature of a linked clone desktop and the underlying structure of the virtual machine itself mean the linked clone desktops are not a good candidate for backup and restoration. However, the same qualities that impede the use of a standard backup solution provide an advantage for rapid reprovisioning of virtual desktops. When the underlying infrastructure for things such as the delivery of applications and user data, along with the parent VMs, is restored, linked clone desktop pools can be recreated and made available within a short amount of time, thereby lessening the impact of an outage or data loss.

Stateful desktops

In the stateful desktop pool scenario, all of the virtual desktops retain user data when the user logs back in to the virtual desktop.
So, in this case, backing up the virtual machines with third-party tools, like any other virtual machine in vSphere, is considered the optimal method for protection and recovery.

Stateless desktops

With the stateless desktop architecture, the virtual desktops do not retain the desktop state when the user logs back in to the virtual desktop. By their nature, stateless desktops do not directly contain any data that requires a backup. All the user data in a stateless desktop is stored on a file share. The user data includes any files the user creates, changes, or copies within the virtual infrastructure, along with the user persona data. Therefore, because no user data is stored within the virtual desktop, there is no need to back up the desktop. File shares should be included in the standard backup strategy, and all user data and persona information will be included in the existing daily backups.

The ThinApp repository

The ThinApp repository is similar in nature to the user data on the stateless desktops in that it should reside on a redundant file share that is backed up regularly. If the ThinApp packages are configured to preserve each user's sandbox, the ThinApp repository should likely be backed up nightly.

Persona Management

With the View Persona Management feature, the user's remote profile is dynamically downloaded after the user logs in to a virtual desktop. A secure, centralized repository can be configured in which Horizon View will store user profiles. The standard practice is to back up the network shares on which View Persona Management stores the profile repository. View Persona Management will ensure that user profiles are backed up to the remote profile share, eliminating the need for additional tools to back up user data on the desktops. Therefore, backup software to protect the user profile on the View desktop is unnecessary.

VMware vCenter

Most established IT departments are using backup tools from the storage or backup vendor to protect the datastores where the VMs are stored. This will make the recovery of the base vSphere environment faster and easier. The central piece of vCenter is the vCenter database. If there is a total loss of the database, you will lose all of your vSphere configuration information, including the configuration specific to View (for example, users, folders, and many more). Another important item to understand is that even if you rebuild your vCenter using the same folder and resource pool names, your View environment will not reconnect and use the new vCenter. The reason is that each object in vSphere has what is called a Managed Object Reference (MoRef), and these are stored in the vSphere database. View uses the MoRef information to talk to vCenter. As View and vSphere rely on each other, making a backup of your View environment without backing up your vSphere environment doesn't make sense.

Restoring the VMware View environment

If your environment has multiple Connection Servers, the best thing to do would be to delete all the servers but one, and then use the following steps to restore the ADAM database:

1. Connect directly to the server where the View Connection Server is located using a remote desktop utility such as RDP.
2. Stop the View Connection Server service, as shown in the following screenshot:
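The screenshot is not reproduced here; the stop can also be scripted from an elevated PowerShell prompt. A short sketch, assuming wsbroker is the service name used by the Connection Server (verify the name on your own installation with Get-Service before relying on it):

Stop-Service -Name wsbroker

Start the service again only after the import in the following steps completes.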
3. Locate the backup (or exported) ADAM database file that has the .ldf extension.
4. The first step of the import is to decrypt the file by opening a command prompt and using the cd command to navigate to C:\Program Files\VMware\VMware View\Server\tools\bin. Use the following command:

vdmimport -f View_Backup\vdmBackup.ldf -d > View_Backup\vdmDecrypt.ldf

5. You will be prompted to enter the password from the account you used to create the backup file.
6. Now use the vdmimport -f [decrypted file name] command (from the preceding example, the filename will be vdmDecrypt.ldf).
7. After the ADAM database is updated, you can restart the View Connection Server service.
8. Replace the deleted Connection Servers by running the Connection Server installation and using the Replica option.

To restore the View Composer database, connect to the server where Composer is installed. Stop the View Composer service and use your standard procedure for restoring a database. After the restore, start the View Composer service.

While this provides the steps to restore the main components of the Connection Server, the steps to perform a complete View Connection Server restore can be found in the VMware KB article, Performing an end-to-end backup and restore for VMware View Manager, at http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&externalId=1008046.

Reconciliation after recovery

One of the main factors to consider when performing a restore in a Horizon View infrastructure is the possibility that the Connection Server environment could be out of sync with the current View state, and a reconciliation is required. After restoring the Connection Server ADAM database, there may be missing desktops shown in the Connection Server Admin user interface if the following actions were executed after the backup but before a restore:

- The administrator deleted pools or desktops
- The desktop pool was recomposed, which resulted in the removal of the unassigned desktops

Missing desktops or pools can be manually removed from the Connection Server Admin UI. Some of the automated desktops may become disassociated from their pools due to the creation of a pool between the time of the backup and the restore. View administrators may be able to make them usable by cloning the linked clone desktop to a full clone desktop using vCenter Server. These would be created as individual desktops in the Connection Server, which could then be assigned to a specific user.

Business Continuity and Disaster Recovery

It's important to ensure that the virtual desktops, along with the application delivery infrastructure, are included and prioritized in the Business Continuity and Disaster Recovery plan. It's also important to ensure that the recovery procedures are tested and validated on a regular cycle, and that procedures and mechanisms are in place to ensure critical data (images, software media, data backup, and so on) is always stored and ready in an alternate location. This will ensure an efficient and timely recovery.

It would be ideal to have a disaster recovery plan and business continuity plan that recovers the essential services to an alternate "standby" data center. This will allow the data to be backed up and available offsite to the alternate facility for an additional measure of protection. The alternate data center could have "hot" standby capacity for the virtual desktops and application delivery infrastructure.
This site would then address 50 percent capacity in the event of a disaster and also 50 percent additional capacity in the event of a business continuity event that prevents users from accessing the main facility. The additional capacity will also provide a rollback option if there were failed updates to the main data center. Operational procedures should ensure the desktop and server images are available to the alternate facility when changes are made to the main VMware View system. Desktop and application pools should also be updated in the alternate data center whenever maintenance procedures are executed and validated in the main data center.

Summary

As expected, it is important to back up the fundamental components of a VMware View solution. While a resilient design should mitigate most types of failure, there are still occasions when a backup may be needed to bring an environment back up to an operational level. This article covered the major components of View and provided some of the basic options for creating backups of those components. The Connection Server and Composer database, along with vCenter, were explained. There was a good overview of the options used to protect the different types of virtual desktops. The ThinApp repository and Persona Management were also explained. The article also covered the basic recovery options and where to find information on the complete View recovery procedures.

Resources for Article:

Further resources on this subject: Introduction to Veeam® Backup & Replication for VMware [article] Design, Install, and Configure [article] VMware vCenter Operations Manager Essentials - Introduction to vCenter Operations Manager [article]

How to ace managing the Endpoint Operations Management Agent with vROps

Vijin Boricha
30 May 2018
10 min read
In this tutorial, you will learn the functionalities of the Endpoint Operations Management Agent and how you can effectively manage it. You can install the Endpoint Operations Management Agent from a tar.gz or .zip archive, or from an operating system-specific installer for Windows or for Linux-like systems that support RPM. This is an excerpt from Mastering vRealize Operations Manager - Second Edition, written by Spas Kaloferov, Scott Norris, and Christopher Slater. The agent can come bundled with JRE or without JRE.

Installing the Agent

Before you start, you have to make sure you have vRealize Operations user credentials to ensure that you have enough permissions to deploy the agent. Let's see how we can install the agent on both Linux and Windows machines.

Manually installing the Agent on a Linux Endpoint

Perform the following steps to install the Agent on a Linux machine:

1. Download the appropriate distributable package from https://my.vmware.com and copy it over to the machine. The Linux .rpm file should be installed with the root credentials.
2. Log in to the machine and run the following command to install the agent:

[root]# rpm -Uvh <DistributableFileName>

In this example, we are installing on CentOS. Only when we start the service will it generate a token and ask for the server's details.

3. Start the Endpoint Operations Management Agent service by running:

service epops-agent start

The previous command will prompt you to enter the following information about the vRealize Operations instance:

- vRealize Operations server hostname or IP: If the server is on the same machine as the agent, you can enter the localhost. If a firewall is blocking traffic from the agent to the server, allow the traffic to that address through the firewall.
- Default port: Specify the SSL port on the vRealize Operations server to which the agent must connect. The default port is 443.
- Certificate to trust: You can configure Endpoint Operations Management agents to trust the root or intermediate certificate to avoid having to reconfigure all agents if the certificate on the analytics nodes and remote collectors is modified.
- vRealize Operations credentials: Enter the name of a vRealize Operations user with agent manager permissions.

The agent token is generated by the Endpoint Operations Management Agent. It is only created the first time the endpoint is registered to the server. The name of the token is random and unique, and is based on the name and IP of the machine. If the agent is reinstalled on the endpoint, the same token is used, unless it is deleted manually.

4. Provide the necessary information, as shown in the following screenshot:
5. Go into the vRealize Operations UI and navigate to Administration | Configuration | Endpoint Operations, and select the Agents tab:

After it is deployed, the Endpoint Operations Management Agent starts sending monitoring data to vRealize Operations, where the data is collected by the Endpoint Operations Management adapter. You can also verify the collection status of the Endpoint Operations Management Agent for your virtual machine on the Inventory Explorer page. If you encounter issues during the installation and configuration, you can check the /opt/vmware/epops-agent/log/agent.log log file for more information. We will examine this log file in more detail later in this book.
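Before leaving the shell, it is worth confirming that the agent registered cleanly. A quick check, using the service and log paths shown above:

service epops-agent status
tail -n 20 /opt/vmware/epops-agent/log/agent.log

If registration succeeded, the end of the log should show the agent connecting to the vRealize Operations server; anything else is the first clue when the Agents tab stays empty.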
Manually installing the agent on a Windows Endpoint

On a Windows machine, the Endpoint Operations Management Agent is installed as a service. The following steps outline how to install the Agent via the .zip archive bundle, but you can also install it via an executable file. Perform the following steps to install the Agent on a Windows machine:

1. Download the agent .zip file. Make sure the file version matches your Microsoft Windows OS.
2. Open Command Prompt and navigate to the bin folder within the extracted folder.
3. Run the following command to install the agent:

ep-agent.bat install

4. After the install is complete, start the agent by executing:

ep-agent.bat start

If this is the first time you are installing the agent, a setup process will be initiated. If you have specified configuration values in the agent properties file, those will be used. Upon starting the service, it will prompt you for the same values as it did when we installed it on a Linux machine. Enter the following information about the vRealize Operations instance:

- vRealize Operations server hostname or IP
- Default port
- Certificate to trust
- vRealize Operations credentials

If the agent is reinstalled on the endpoint, the same token is used, unless it is deleted manually. After it is deployed, the Endpoint Operations Management Agent starts sending monitoring data to vRealize Operations, where the data is collected by the Endpoint Operations Management adapter. You can verify the collection status of the Endpoint Operations Management Agent for your virtual machine on the Inventory Explorer page:

Because the agent is collecting data, the collection status of the agent is green. Alternatively, you can also verify the collection status of the Endpoint Operations Management Agent by navigating to Administration | Configuration | Endpoint Operations, and selecting the Agents tab.

Automated agent installation using vRealize Automation

In a typical environment, the Endpoint Operations Management Agent will not be installed on many or all virtual machines or physical servers. Usually, only those servers holding critical applications or services will be monitored using the agent, and for those, manually installing and updating the Endpoint Operations Management Agent can be done with reasonable administrative effort.

If we have an environment where we need to install the agent on many VMs, and we are using some kind of provisioning and automation engine, like Microsoft System Center Configuration Manager or VMware vRealize Automation, which has application-provisioning capabilities, it is not recommended to install the agent in the VM template that will be cloned. If for whatever reason you have to do it, do not start the Endpoint Operations Management Agent, or remove the Endpoint Operations Management token and data before the cloning takes place. If the Agent is started before the cloning, a client token is created and all clones will be the same object in vRealize Operations.

In the following example, we will show how to install the Agent via vRealize Automation as part of the provisioning process. I will not go into too much detail on how to create blueprints in vRealize Automation, but I will give you the basic steps to create a software component that will install the agent and add it to a blueprint.

Perform the following steps to create a software component that will install the Endpoint Operations Management Agent and add it to a vRealize Automation Blueprint:

1. Open the vRealize Automation portal page and navigate to Design, the Software Components tab, and click New to create a new software component.
2. On the General tab, fill in the information.
3. Click Properties and click New. Add the following six properties, as shown in the following table. All six are of type String, and all are set to Encrypted: No, Overridable: Yes, Required: No, and Computed: No:

- RPMNAME: The name of the Agent RPM package
- SERVERIP: The IP of the vRealize Operations server/cluster
- SERVERLOGIN: Username with enough permission in vRealize Operations
- SERVERPWORD: Password for the user
- SRVCRTTHUMB: The thumbprint of the vRealize Operations certificate
- SSLPORT: The port on which to communicate with vRealize Operations

These values will be used to configure the Agent once it is installed. The following screenshot illustrates some example values:

4. Click Actions and configure the Install, Configure, and Start actions, as shown in the following table. All three are Bash scripts with Reboot unchecked:

Install:

rpm -i http://<FQDNOfTheServerWhereTheRPMFileIsLocated>/$RPMNAME

Configure:

sed -i "s/#agent.setup.serverIP=localhost/agent.setup.serverIP=$SERVERIP/" /opt/vmware/epops-agent/conf/agent.properties
sed -i "s/#agent.setup.serverSSLPort=443/agent.setup.serverSSLPort=$SSLPORT/" /opt/vmware/epops-agent/conf/agent.properties
sed -i "s/#agent.setup.serverLogin=username/agent.setup.serverLogin=$SERVERLOGIN/" /opt/vmware/epops-agent/conf/agent.properties
sed -i "s/#agent.setup.serverPword=password/agent.setup.serverPword=$SERVERPWORD/" /opt/vmware/epops-agent/conf/agent.properties
sed -i "s/#agent.setup.serverCertificateThumbprint=/agent.setup.serverCertificateThumbprint=$SRVCRTTHUMB/" /opt/vmware/epops-agent/conf/agent.properties

Start:

/sbin/service epops-agent start
/sbin/chkconfig epops-agent on

The following screenshot illustrates some example values:

5. Click Ready to Complete and click Finish.
6. Edit the blueprint where you want the agent to be installed.
7. From Categories, select Software Components.
8. Select the Endpoint Operations Management software component and drag and drop it over the machine type (virtual machine) where you want the object to be installed:
9. Click Save to save the blueprint.

This completes the necessary steps to automate the Endpoint Operations Management Agent installation as a software component in vRealize Automation. With every deployment of the blueprint, after cloning, the Agent will be installed and uniquely identified to vRealize Operations.

Reinstalling the agent

When reinstalling the agent, various elements are affected, such as:

- Already-collected metrics
- Identification tokens that enable a reinstalled agent to report on the previously discovered objects on the server

The following locations contain agent-related data which remains when the agent is uninstalled:

- The data folder containing the keystore
- The epops-token platform token file, which is created before agent registration

Data continuity will not be affected when the data folder is deleted. This is not the case with the epops-token file. If you delete the token file, the agent will not be synchronized with the previously discovered objects upon a new installation.
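The same agent.setup keys that the sed commands above uncomment can also be pre-seeded by hand in conf/agent.properties before the service is started for the first time, which skips the interactive prompts entirely. A sketch with placeholder values (the host name, user, and password are illustrative assumptions):

agent.setup.serverIP=vrops.example.local
agent.setup.serverSSLPort=443
agent.setup.serverLogin=epops-svc
agent.setup.serverPword=<password>
agent.setup.serverCertificateThumbprint=<thumbprint>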
Reinstalling the agent on a Linux Endpoint

Perform the following steps to reinstall the agent on a Linux machine:

1. If you are removing the agent completely and are not going to reinstall it, delete the data directory by running:

$ rm -rf /opt/epops-agent/data

2. If you are completely removing the client and are not going to reinstall or upgrade it, delete the epops-token file:

$ rm /etc/vmware/epops-token

3. Run the following command to uninstall the Agent:

$ yum remove <AgentFullName>

4. If you are completely removing the client and are not going to reinstall it, make sure to delete the agent object from the Inventory Explorer in vRealize Operations.

You can now install the client again, the same way as we did earlier in this book.

Reinstalling the Agent on a Windows Endpoint

Perform the following steps to reinstall the Agent on a Windows machine:

1. From the CLI, change the directory to the agent bin directory:

cd C:\epops-agent\bin

2. Stop the agent by running:

C:\epops-agent\bin> ep-agent.bat stop

3. Remove the agent service by running:

C:\epops-agent\bin> ep-agent.bat remove

4. If you are completely removing the client and are not going to reinstall it, delete the data directory by running:

C:\epops-agent> rd /s data

5. If you are completely removing the client and are not going to reinstall it, delete the epops-token file using the CLI or Windows Explorer:

C:\epops-agent> del "C:\ProgramData\VMware\EP Ops agent\epops-token"

6. Uninstall the agent from the Control Panel.
7. If you are completely removing the client and are not going to reinstall it, make sure to delete the agent object from the Inventory Explorer in vRealize Operations.

You can now install the client again, the same way as we did earlier in this book.

You've become familiar with the Endpoint Operations Management Agent and what it can do. We also discussed how to install and reinstall it on both Linux and Windows operating systems. To know more about vSphere and vRealize Automation workload placement, check out our book Mastering vRealize Operations Manager - Second Edition.

VMware vSphere storage, datastores, snapshots
What to expect from vSphere 6.7
KVM Networking with libvirt

Installation of Oracle VM VirtualBox on Linux

Packt
09 Dec 2013
4 min read
(For more resources related to this topic, see here.)

Basic requirements

The following are the basic requirements for VirtualBox:

- Processor: Any recent AMD/Intel processor is fine to run VirtualBox.
- Memory: This is dependent on the size of the RAM that is required for the host OS, plus the amount of RAM needed by the guest OS. In my test environment, I am using 4 GB RAM and going to run Oracle Enterprise Linux 6.2 as the guest OS.
- Hard disk space: VirtualBox doesn't need much space, but you need to plan out how much the host OS needs and how much you need for the guest OS.
- Host OS: You need to make sure that the host OS is certified to run VirtualBox.

Host OS

At the time of writing this article, VirtualBox runs on the following host operating systems:

Windows

The following Microsoft Windows operating systems are compatible to run as host OS for VirtualBox:

- Windows XP, all service packs (32-bit)
- Windows Server 2003 (32-bit)
- Windows Vista (32-bit and 64-bit)
- Windows Server 2008 (32-bit and 64-bit)
- Windows 7 (32-bit and 64-bit)
- Windows 8 (32-bit and 64-bit)
- Windows Server 2012 (64-bit)

Mac OS X

The following Mac OS X operating systems are compatible to run as host OS for VirtualBox:

- 10.6 (Snow Leopard, 32-bit and 64-bit)
- 10.7 (Lion, 32-bit and 64-bit)
- 10.8 (Mountain Lion, 64-bit)

Linux

The following Linux operating systems are compatible to run as host OS for VirtualBox:

- Debian GNU/Linux 5.0 (lenny) and 6.0 (squeeze)
- Oracle Enterprise Linux 4 and 5, Oracle Linux 6
- Red Hat Enterprise Linux 4, 5, and 6
- Fedora Core 4 to 17
- Gentoo Linux
- openSUSE 11.0, 11.1, 11.2, 11.3, 11.4, 12.1, and 12.2
- Mandriva 2010 and 2011

Solaris

Both 32-bit and 64-bit versions of Solaris are supported, with some limitations. Please refer to www.virtualbox.org for more information. The following host OS are supported:

- Solaris 11, including Solaris 11 Express
- Solaris 10 (update 8 and higher)

Guest OS

You need to make sure that the guest OS is certified to run on VirtualBox. VirtualBox supports the following guest operating systems:

- Windows 3.x
- Windows NT 4.0
- Windows 2000
- Windows XP
- Windows Server 2003
- Windows Vista
- Windows 7
- DOS
- Linux (2.4, 2.6, 3.0, 3.1, 3.2, 3.3, 3.4, 3.5, 3.6, and 3.7)
- Solaris
- OpenSolaris
- OpenBSD

Installation

I have downloaded and used the VirtualBox repository to install VirtualBox. Please ensure the desktop or laptop you are installing VirtualBox on is connected to the Internet; however, this is not mandatory. You can install VirtualBox using the RPM command, which generally adds the complexity of finding and installing the dependency packages, or you can have a central yum server where you can add the VirtualBox repository. In my test lab, my laptop is connected to the Internet and I have used the wget command to download and add virtualbox.repository.

Installing dependency packages

Please ensure you install the packages seen in the following screenshot to make VirtualBox work:

Installing VirtualBox 4.2.16

Once the preceding dependencies are installed, we are ready to install VirtualBox 4.2.16. Use the command seen in the following screenshot to install VirtualBox:
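The screenshot with the install command is not reproduced here. On a Red Hat-family host with the VirtualBox repository added, the install would look something like the following; the versioned package name follows Oracle's naming for the 4.2 series and is an assumption:

# yum install VirtualBox-4.2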
Rebuilding kernel modules

The vboxdrv module is a special kernel module that helps to allocate the physical memory and to gain control of the processor for the guest system execution. Without this kernel module, you can still use the VirtualBox manager to configure the virtual machines, but they will not start. When you install VirtualBox, this module gets installed on the system by default. But to keep up with future kernel updates, I suggest that you install Dynamic Kernel Module Support (DKMS). In most cases, this module installation is straightforward. You can use yum, apt-get, and so on, depending on the Linux variant you are using, but ensure that the GNU compiler (GCC), GNU make (make), and the packages containing header files for your kernel are installed prior to installing DKMS. Also, ensure that all system updates are installed.

Once the kernel of your Linux host is updated and DKMS is not installed, the kernel module needs to be reinstalled by executing the rebuild command (shown in the original as a screenshot) as root. This command not only rebuilds the kernel modules but also automatically creates the vboxusers group and the VirtualBox user.

If you use Microsoft Windows on your desktop, then download the .exe file, install it, and start it from the desktop shortcut or from the program.

Start VirtualBox

Use the command seen in the following screenshot in Linux to run VirtualBox:

If everything is fine, then the Oracle VM VirtualBox Manager screen appears, as seen in the following screenshot:

Update VirtualBox

You can update VirtualBox with the help of the command seen in the following screenshot:

Remove VirtualBox

To remove VirtualBox, use the command seen in the following screenshot:

Summary

In this article, we have covered the installation, update, and removal of Oracle VM VirtualBox in the Linux environment.

Resources for Article:

Further resources on this subject: Oracle VM Management [Article] Extending Oracle VM Management [Article] Troubleshooting and Gotchas in Oracle VM Manager 2.1.2 [Article]

Designing and Building a vRealize Automation 6.2 Infrastructure

Packt
29 Sep 2015
16 min read
In this article by J. Powell, the author of the book Mastering vRealize Automation 6.2, we put together a design and build of vRealize Automation 6.2 from POC to production. With the knowledge gained from this article, you should feel comfortable installing and configuring vRA. In this article, we will be covering the following topics:

- Proving the technology
- Proof of Concept
- Proof of Technology
- Pilot
- Designing the vRealize Automation architecture

(For more resources related to this topic, see here.)

Proving the technology

In this section, we are going to discuss how to approach a vRealize Automation 6.2 project. This is a necessary component in order to assure a successful project, and it is specifically necessary when we discuss vRA, due to the sheer number of moving parts that comprise the software. We are going to focus on the end users, whether they are individuals or business units, such as your company's software development department. These are the people that will be using vRA to provide the speed and agility necessary to deliver results that drive the business and make money. If we take this approach and treat our co-workers as customers, we can give them what they need to perform their jobs, as opposed to what we perceive they need from an IT perspective.

Designing our vRA deployment around the user and business requirements first gives us a better plan to implement the backend infrastructure as well as the service offerings within the vRA web portal. This allows us to build a business case for vRealize Automation and will help determine which of the three editions will make sense to meet these needs. Once we have our business case created, validated, and approved, we can start testing vRealize Automation. There are three common phases to a testing cycle:

- Proof of Concept
- Proof of Technology
- Pilot implementation

We will cover these phases in the following sections and explore whether you need them for your vRealize Automation 6.2 deployment.

Proof of Concept

A POC is typically an abbreviated version of what you hope to achieve during production. It is normally spun up in a lab, using old hardware, with a limited number of test users. Once your POC is set up, one of two things happens. First, nothing happens or it gets decommissioned. After all, it's just the IT department getting their hands dirty with new technology. This also happens when there is not a clear business driver, which provides a reason to have the technology in a production environment. The second thing that could happen is that the technology is proven, and it moves into a pilot phase. Of course, this is completely up to you. Perhaps a demonstration of the technology will be enough, or testing some limited outcomes in VMware's HOL for vRealize Automation 6.2 will do the trick. Due to the number of components and features within vRA, it is strongly recommended that you create a POC, documenting the process along the way. This will give you a strong base if you take the project from POC to production.

Proof of Technology

The object of a POT project is to determine whether the proposed solution or technology will integrate into your existing IT landscape and add value. This is the stage where it is important to document any technical issues you encounter in your individual environment. There is no need to involve pilot users in this process, as it is specifically to validate the technical merits of the software.
Pilot implementation

A pilot is a small-scale and targeted roll out of the technology in a production environment. Its scope is limited, typically by a number of users and systems. This is to allow testing, so as to make sure the technology works as expected and designed. It also limits the business risk. A pilot deployment in terms of vRA is also a way to gain feedback from the users who will ultimately use it on a regular basis. vRealize Automation 6.2 is a product that empowers the end users to provision everything as a service. How the users feel about the layout of the web portal, the user experience, and the automated feedback from the system directly impacts how well the product will work in a full-blown production scenario. This also gives you time to make any necessary modifications to the vRA environment before providing access to additional users.

When designing the pilot infrastructure, you should use the same hardware that is used during production. This includes ESXi hosts, storage, fiber or Internet Small Computer System Interface (iSCSI) connectivity, and vCenter versions. This will take into account any variances between platforms and configurations that could affect performance. Even at this stage, design, attention to detail, and following VMware best practices are key. Often, pilot programs get rolled straight into production. Adhering to these concepts will put you on the right path to a successful deployment. To get a better understanding, let's look at some of the design elements that should be considered:

- Size of the deployment: A small deployment will support 10,000 managed machines, 500 catalog items, and 10 concurrent deployments.
- Concurrent provisioning: Only two concurrent provisions per endpoint are allowed by default. You may want to increase this limit to suit your requirements.
- Hardware sizing: This refers to the number of servers, the CPU, and the memory.
- Scale: This refers to whether there will be multiple Identity and vRealize Automation vApps.
- Storage: This refers to pools of storage from Storage Area Network (SAN) or Network Attached Storage (NAS), and tiers of storage for performance requirements.
- Network: This refers to LANs, load balancing, internal versus external access to web portals, and IP pools for use with the infrastructure provisioned through vRA.
- Firewall: This refers to knowing what ports need to be opened between the various components that make up vRA, as well as the other endpoints that may fall under vRA's purview.
- Portal layout: This refers to the items you want to provide to the end user and the manner in which you categorize them for future growth.
- IT Business Management Suite Standard Edition: If you are going to implement this product, it can scale up to 20,000 VMs across four vCenter servers.
- Certificates: Appliances can be self-signed, but it is recommended to use an internal Certificate Authority for vRA components and an externally signed certificate on the vRA web portal if it is going to be exposed to the public Internet.

VMware has published a Technical White Paper that covers all the details and considerations when deploying vRA. You can download the paper by visiting http://www.vmware.com/files/pdf/products/vCloud/VMware-vCloud-Automation-Center-61-Reference-Architecture.pdf. VMware provides the following general recommendation when deploying vRealize Automation: keep all vRA components in the same time zone with their clocks synced.
If you plan on using VMware IT Business Management Suite Standard Edition, deploy it in the same LAN as vCenter. You can deploy Worker DEMs and proxy agents over the WAN, but all other components should not go over the WAN, so as to prevent performance degradation. Here is a diagram of the pilot process:

Designing the vRealize Automation architecture

We have discussed the components that comprise vRealize Automation, as well as some key design elements. Now, let's see some of the scenarios at a high level. Keep in mind that vRA is designed to manage tens of thousands of VMs in an infrastructure. Depending on your environment, you may never exceed the limitations of what VMware considers to be a small deployment. The following diagram displays the minimum footprint needed for a small deployment architecture:

A medium deployment can support up to 30,000 managed machines, 1,000 catalog items, and 50 concurrent deployments. The following diagram shows you the minimum required footprint for a medium deployment:

Large deployments support 50,000 managed machines, 2,500 catalog items, and 100 concurrent deployments. The following diagram shows you the minimum required footprint for a large deployment:

Design considerations

Now that we understand the design elements for a small, medium, and large infrastructure, let's explore the components of vRA and build an example design, based on the small infrastructure requirements from VMware. Since there are so many options and components, we have broken them down into easily digestible components.

Naming conventions

It is important to give some thought to naming conventions for the different aspects of the vRA web portal. Your company has probably set a naming convention for servers and environments, and we will have to make sure items provisioned from vRA adhere to those standards. It is important to name the different components of vRealize Automation in a method that makes sense for what your end goal may be regarding what vRA will do. This is necessary because it is not easy (and in some cases not possible) to rename the elements of the vRA web portal once you have implemented them.

Compute resources

Compute resources in terms of vRA refers to an object that represents a host, host cluster, virtual data center, or a public Cloud region, such as Amazon, where machines and applications can be provisioned. For example, compute resources can refer to vCenter, Hyper-V, or Amazon AWS. This list grows with each subsequent release of vRA.

Business and Fabric groups

A Business group in the vRA web portal is a set of services and resources assigned to a set of users. Quite simply, it is a way to align a business department or unit with the resources it needs. For example, you may have a Business group named Software Developers, and you would want them to be able to provision SQL 2012 and 2014 on Windows 2012 R2 servers. Fabric groups enable IT administrators to provide resources from your infrastructure. You can add users or groups to the Fabric group in order to manage the infrastructure resources you have assigned. For example, if you have a software development cluster in vCenter, you could create a Fabric group that contains the users responsible for the management of this cluster to oversee the cluster resources.

Endpoints and credentials

Endpoints can represent anything from vCenter, to storage, physical servers, and public Cloud offerings, such as Amazon AWS.
The platform address is defined with the endpoint (in terms of being accessed through a web browser), along with the credentials needed to manage them.

Reservations

Reservations refer to how we provide a portion of our total infrastructure that is to be used for consumption by end users. It is a key design element in the vRealize Automation 6.2 infrastructure design. Each reservation created will need to define the disk, memory, networking, and priority. The lower the number, the higher the priority. This is to resolve conflicts in case there are multiple matching reservations. If the priorities of the multiple reservations are equal, vRA will choose a reservation in a round-robin style order.

In the preceding diagram, on the far right-hand side, we can see that we have Shared Infrastructure composed of Private Physical and Private Virtual space, as well as a portion of a Public Cloud offering. By creating different reservations, we can assure that there is enough infrastructure for the business, while providing a dedicated portion of the total infrastructure to our end users.

Reservation policies

A reservation policy is a set of reservations that you can select from a blueprint to restrict provisioning only to specific reservations. Reservation policies are then attached to a reservation. An example of using reservation policies is to create different storage policies. You can create separate Bronze, Silver, and Gold policies to reflect the type of disk available on our SAN (such as SATA, SAS, and SSD).

Network profiles

By default, vRA will assign an IP address from a DHCP server to all the machines it provisions. However, most production environments do not use DHCP for their servers. A network profile will need to be created to allocate and assign static IPs to these servers. Network profile options consist of external, private, NAT (short for Network Address Translation), and routed. For the scope of our examples, we will focus on the external option.

Compute resources

Compute resources are tied in with Fabric groups, endpoints, storage reservation policies, and cost profiles. You must have these elements created before you can configure compute resources, although some components, such as storage and cost profiles, are optional. An example of a compute resource is a vCenter cluster. It is created automatically when you add an endpoint to the vRA web portal.

Blueprints

Blueprints are instruction sets to build virtual, physical, and Cloud-based machines, as well as vApps. Blueprints define a machine or a set of application properties, the way it is provisioned, and its policy and management settings. For an end user, a blueprint is listed as an item in the Service Catalog tab. The user can request the item, and vRA will use the blueprint to provision the user's request. Blueprints also provide a way to prompt the user making the request for additional items, such as more compute resources, application or machine names, as well as network information. Of course, this can be automated as well, and will probably be the preferred method in your environment. Blueprints also contain workflow logic. vRealize Automation contains built-in workflows for cloning snapshots, Kickstart, ISO, SCCM, and WIM deployments. You can define a minimum and maximum for CPU, memory, and storage. This will give end users the option to customize their machines to match their individual needs.
It is a best practice to define the minimum for servers with very low resources, such as 1 vCPU and 512 MB of memory. It is easy to hot-add these resources if the end user needs more compute after an initial request. However, if you set the minimum resources too high in the blueprint, you cannot lower the value; you will have to create a new blueprint. You can also define customized properties in the blueprints. For example, if you want to provide a VM with a defined MAC address or without a virtual CD-ROM attached, you can do so. VMware has published a detailed guide to the Custom Properties and their values. You can find it at http://pubs.vmware.com/vra-62/topic/com.vmware.ICbase/PDF/vrealize-automation-62-custom-properties.pdf. Custom Properties are case sensitive. It is recommended to test Custom Properties individually until you are comfortable using them. For example, a blueprint referencing an ISO workflow would fail if you have a Custom Property to remove the CD-ROM.

Users and groups

Users and groups are defined in the Administration section of the vRA web portal. This is where we would assign vRA-specific roles to groups. It is worth mentioning that when you log in to the vRA web portal and click on users, the list is blank. This is because of the sheer number of users that could potentially be allowed to access the portal, which would slow the load time. In our examples, we will focus on users and groups from our Identity Appliance that ties in to Active Directory.

Catalog management

Catalog management consists of services, catalog items, actions, and entitlements. We will discuss them in more detail in the following sections.

Services

Services are another key design element and are defined by the vRA administrators to help group subsets of your environment. For example, you may have services defined for applications, where you would list items such as SQL and Oracle databases. You could also create a service called OperatingSystems, where you would group catalog items such as Linux and Windows. You can make these services active or inactive, and also define maintenance windows when catalog items under this category would be unavailable for provisioning.

Catalog items

Catalog items are essentially links back to blueprints. These items are tied back to a service that you previously defined and help shape the Service Catalog tab that the end user will use to provision machines and applications. You will also entitle users to use the catalog item.

Entitlements

As mentioned previously, entitlements are how we link business users and groups to services, catalog items, and actions.

Actions

Actions are a list of operations that give a user the ability to perform certain tasks with services and catalog items. There are over 30 out-of-the-box action items that come with vRA. These include creating and destroying VMs, changing the lease time, as well as adding additional compute resources. You also have the option of creating custom actions as well.

Approval policies

Approval policies are the sets of rules that govern the use of catalog items. They can be used in the pre or post configuration life cycle of an item. Let's say, as an example, we have a Red Hat Linux VM that a user can provision. We have set the minimum vCPU to 1, but have defined a maximum of 4. We would want to notify the user's manager and the IT team when a request to provision the VM exceeds the minimum vCPU we have defined.
We could create an approval policy to perform a pre-check to see if the user is requesting more than one vCPU. If the threshold is exceeded, an e-mail will be sent out requesting approval for the additional vCPU resources. Until the request is approved, the VM will not be provisioned.

Advanced services

Advanced services is an area of the vRA web portal where we can tie in customized workflows from vRealize Orchestrator. For example, we may need to check for a file in the VM's operating system once it has been deployed, to make sure that an application has been deployed successfully or that a baseline compliance is in order. We can present vRealize Orchestrator workflows for end users to leverage in almost the same manner as we do IaaS items.

Summary

In this article, we covered the design and build principles of vRealize Automation 6.2. We discussed how to prove the technology by performing due diligence checks with the business users and creating a case to implement a POC. We detailed considerations when rolling out vRA in a pilot program and showed you how to gauge its success. Lastly, we detailed the components that comprise the design and build of vRealize Automation, while introducing additional elements.

Resources for Article:

Further resources on this subject: vROps – Introduction and Architecture [article] An Overview of Horizon View Architecture and its Components [article] Master Virtual Desktop Image Creation [article]

Content Switching using Citrix Security

Packt
10 Apr 2013
8 min read
(For more resources related to this topic, see here.)

Getting ready

We will start with the packet flow of NetScaler and where content switching comes into play. The following diagram is self-explanatory (it is not the entire packet flow to the receiver's endpoint; the focus here is only up to CS and LB):

The content switching vserver can be used for the HTTP/HTTPS/TCP and UDP protocols, and it can direct traffic only to another vserver, not to a backend service directly. The content switching vserver doesn't need an LB vserver to be bound to it for its status to be UP. Even with nothing bound to the CS vserver, the status would show UP (this comes in handy when you want to blackhole unwanted traffic). Hence, it is always recommended to check whether the load balancing vservers that are bound to the content switching vserver are up and running. If you want to avoid the preceding condition, the following CLI command will help you achieve it (by default, the value is disabled):

root@NetScaler> add cs vserver <name> <serviceType> (<IPAddress>) [-stateupdate ( ENABLED | DISABLED )]

Content switching can be done based on the following client attributes:

Mobile user/PC
Images/videos
Dynamic/static content
Client with/without cookies
Geographical locations
Per VLAN

Similarly, server-side differentiation can also be made based on the following attributes:

Server speed and capacity
Source/destination port
Source/destination IP
SSL/HTTP

Citrix also has an additional feature (starting from NetScaler version 9.3) that dynamically selects the load balancing vserver based on any criteria or condition provided in the CS action/policy:

> add cs action <name> -targetLBVserver <string-expression>
> add cs policy <policyName> -rule <RULEValue> -action <actionName>

The policy is then bound to the CS vserver. CS vservers can be configured to process URLs in a case-sensitive manner. By default, this option is ON:

> set cs vserver CSVserver -caseSensitive ON

The load balancing vserver bound to the CS vserver need not have any IP address configured unless it is used for direct access as well.

How to do it...

We shall focus on a few case studies that we commonly come across, and that can be solved with the help of content switching:

Case 1: Customer ABC accesses an online shopping portal and gets redirected to a secure connection at the payment gateway. For this scenario, an HTTP LB vserver is used and is bound to the CS vserver, which is on HTTPS: The configuration in the preceding screenshot shows that a CS policy as well as a responder policy is bound to the CS vserver named testVserver. The CS policy works on directing the traffic to the target LB vserver (if there are no CS policies bound at all, traffic goes to the default LB vserver; this default LB vserver should be configured on the CS vserver). The responder policy, if bound to the CS vserver, works on HTTP requests before any CS policy is matched. The configuration is verified by using show cs vserver <vserver name>. A packet capture taken on NetScaler will clearly show the redirect from HTTP to HTTPS as <HTTP 302>. If there is any traffic that doesn't match any of the bound CS policies, then it uses the default policy. If there is no default policy, the user will get an HTTP/1.1 Service Unavailable error message.
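A minimal CLI sketch of a Case 1 style setup follows. All of the vserver, policy, and action names, as well as the VIP address and the /checkout path, are hypothetical placeholders rather than objects from the configuration shown above:

> add lb vserver lb_payment_http HTTP 0.0.0.0 0
> add cs vserver cs_shop SSL 192.0.2.25 443
> add cs action act_to_payment -targetLBVserver lb_payment_http
> add cs policy pol_payment -rule "HTTP.REQ.URL.PATH.STARTSWITH(\"/checkout\")" -action act_to_payment
> bind cs vserver cs_shop -policyName pol_payment -priority 100
> bind cs vserver cs_shop -lbvserver lb_payment_http

A redirect responder of the kind that produces the <HTTP 302> seen in the capture could then be bound on the plain HTTP side:

> add responder action act_https_redirect redirect "\"https://\" + HTTP.REQ.HOSTNAME + HTTP.REQ.URL"
> add responder policy pol_https_redirect HTTP.REQ.IS_VALID act_https_redirect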
Case 2: The customer Star Networks has a single web application that contains two domains, namely www.starnetworks.com and www.starnetworks.com.edu, and has a content switching setup that works fine when accessing www.starnetworks.com, but throws an error when accessing www.starnetworks.com.edu. This happens because the preceding domains are not the same; they are different, and the certificate bound to the CS vserver is for www.starnetworks.com only. To resolve this issue, we can bind multiple certificates to the CS vserver with the Server Name Indication (SNI) option enabled. The SNI option can be enabled in the SSL Parameters tab (this pops up only if the SSL protocol is chosen while creating the vserver). The CLI command to enable SNI, run once per certificate, is as follows:

> bind ssl vserver star_cs_vserver -certkeyName <certkey_for_first_domain> -SNICert
> bind ssl vserver star_cs_vserver -certkeyName <certkey_for_second_domain> -SNICert

For each domain added, NetScaler will establish a secure channel between itself and the client. With this solution, you can avoid configuring multiple CS vservers.

Case 3: A customer has a large pool of IP subnets that needs categorizing, and it would be a next-to-impossible task to configure that number of content switching policies. How does he go about deploying this scenario? The solution is as follows: A database file should be created that includes the IP address ranges and their locations:

> shell
# cd /var/netscaler/locdb
# vi test.db

Run the following command to apply the changes made to the database file:

> add locationfile /var/netscaler/locdb/test.db

Bind the CS policy with an expression stating, for example, the following:

CLIENT.IP.SRC.MATCHES_LOCATION("star.*.*.*.*.*")
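To make Case 3 concrete, a static location database is a plain text file of comma-separated address ranges followed by location qualifiers. The entries below are a hypothetical sketch (addresses and qualifier values are invented for illustration), showing how the first qualifier, star, would be matched by the expression above:

# Format: <start-IP>,<end-IP>,<qualifier1>,...,<qualifier6>
10.10.0.0,10.10.255.255,star,dc1,rack1,*,*
10.20.0.0,10.20.255.255,star,dc2,rack4,*,*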
How it works...

The way NetScaler works in all three preceding scenarios is that it analyzes the incoming traffic directed to the CS VIP and parses through the bound CS policies, if any. If a match is found, it goes to the target LB vserver. If there are any other policies bound (for example, a responder policy or a rewrite policy), then the responder policy gets executed even before the CS policy is executed (since responder policies are usually applied to the HTTP requests). However, rewrite policies can be bound either at the CS or LB level, depending on whether the request or response needs to be modified. To recap what we have seen in the case studies mentioned before, the first case helps us do a simple redirect from HTTP to HTTPS using a responder policy bound at the CS level. The second case shows us how multiple certificates with the SNI option are used to solve domain differences that would otherwise cause issues. The final case study shows us the basic but handy setting to map IP address ranges to target load balancing vservers. An important thing to note: there are scenarios where the vserver and the services that are bound to it may be on different ports altogether (for example, the HTTP LB VIP would be listening on port 80, but the services would be on port 8080). In such cases, the redirectPortRewrite feature should be enabled.

There's more...

This section concentrates on tidbits and troubleshooting techniques:

Tips and troubleshooting

We can start by checking the output of show cs vserver and show lb vserver, to see if the services bound to them are up and running:

root@ns> show cs vserver cs_star_vserver
1) cs_star_vserver (IP_ADDRESS_HERE:80) - HTTP Type: CONTENT
State: UP
Client Idle Timeout: 180 sec
Down state flush: ENABLED
Port Rewrite: DISABLED
Default: lb_vserver Content Precedence: RULE
Vserver IP and Port insertion: OFF
Case Sensitivity: OFF

If there are responder and rewrite policies, we can check whether the number of hits on each policy is incrementing or not. Packet captures (using Wireshark) on the server, on NetScaler, and in some cases on the client will show us the packet flow in depth.

The Down state flush feature of NetScaler is useful for admins planning their downtimes in advance. This feature is enabled, by default, at the vserver and service level. When the feature is enabled, connections that are already open and established will be terminated and the users will have to retry their connections; only the requests that are already being processed will be honored. When the feature is disabled, open and established connections are honored, and no new connections will be accepted at that time. If enabled at the vserver level, and if the state of the vserver is DOWN, the vserver will flush both the linked client-facing and server-facing connections; otherwise, it will terminate only the client-facing connections. At the service level, if the service is marked as DOWN, only the server-facing connections will be flushed.

There is another option on the Advanced tab of the CS/LB vserver to direct excess traffic to a backup vserver. In cases where the backup vserver also overflows, there is an option to use the redirect URL, which is also found on the Advanced tab of the CS/LB vserver.

Summary

This article has explained the implementation of content switching using Citrix Security.

Resources for Article:

Further resources on this subject: Managing Citrix Policies [Article] Getting Started with XenApp 6 [Article] Getting Started with the Citrix Access Gateway Product Family [Article]

Storage Configurations

Packt
07 Sep 2015
21 min read
In this article by Wasim Ahmed, author of the book Proxmox Cookbook, we will cover topics such as local storage, shared storage, and Ceph storage, and a recipe that shows you how to configure the Ceph RBD storage. (For more resources related to this topic, see here.)

A storage is where the virtual disk images of virtual machines reside. There are many different types of storage systems, with different features, performance characteristics, and use case scenarios. Whether it is a local storage configured with directly attached disks or a shared storage with hundreds of disks, the main responsibility of a storage is to hold virtual disk images, templates, backups, and so on. Proxmox supports different types of storage, such as NFS, Ceph, GlusterFS, and ZFS. Different storage types can hold different types of data. For example, a local storage can hold any type of data, such as disk images, ISO/container templates, and backup files. A Ceph storage, on the other hand, can only hold a .raw format disk image. In order to provide the right type of storage for the right scenario, it is vital to have a proper understanding of the different types of storage. The full details of each storage type are beyond the scope of this article, but we will look at how to connect them to Proxmox and maintain a storage system for VMs. Storage can be configured into two main categories:

Local storage
Shared storage

Local storage

Any storage that resides in the node itself by using directly attached disks is known as a local storage. This type of storage has no redundancy other than a RAID controller that manages an array. If the node itself fails, the storage becomes completely inaccessible. The live migration of a VM is impossible when VMs are stored on a local storage because, during migration, the virtual disk of the VM has to be copied entirely to another node. A VM can only be live-migrated when there are several Proxmox nodes in a cluster and the virtual disk is stored on a shared storage accessed by all the nodes in the cluster.

Shared storage

A shared storage is one that is available to all the nodes in a cluster through some form of network media. In a virtual environment with shared storage, the actual virtual disk of the VM may be stored on a shared storage, while the VM actually runs on a Proxmox host node. With shared storage, the live migration of a VM becomes possible without powering down the VM. Multiple Proxmox nodes can share one shared storage, and VMs can be moved around freely, since their virtual disks remain on the shared storage. Usually, a few dedicated nodes are used to configure a shared storage with their own resources, apart from sharing the resources of a Proxmox node, which could be used to host VMs. In recent releases, Proxmox has added some new storage plugins that allow users to take advantage of some great storage systems and integrate them with the Proxmox environment. Most of the storage configuration can be performed through the Proxmox GUI.
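As a point of reference, Proxmox keeps its storage definitions in /etc/pve/storage.cfg. The fragment below is a minimal sketch of a local directory storage plus an NFS shared storage; the storage IDs, server address, and export path are invented for illustration:

# /etc/pve/storage.cfg (illustrative entries)
dir: local
        path /var/lib/vz
        content images,iso,vztmpl,backup

nfs: nfs-shared
        server 192.168.1.50
        export /export/proxmox
        path /mnt/pve/nfs-shared
        content images,iso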
Ceph storage

Ceph is a powerful distributed storage system, which provides RADOS Block Device (RBD) object storage, the Ceph filesystem (CephFS), and Ceph Object Storage. Ceph is built with a very high level of reliability, scalability, and performance in mind. A Ceph cluster can be expanded to several petabytes without compromising data integrity, and can be configured using commodity hardware. Any data written to the storage gets replicated across the Ceph cluster. Ceph was originally designed with big data in mind. Unlike other types of storage, the bigger a Ceph cluster becomes, the higher the performance. However, it can also be used in small environments just as easily for data redundancy. Lower performance can be mitigated by using SSDs to store Ceph journals; refer to the OSD Journal subsection in this section for information on journals. The built-in self-healing features of Ceph provide unprecedented resilience without a single point of failure. In a multinode Ceph cluster, the storage can tolerate not just hard drive failure, but also an entire node failure, without losing data. Currently, only an RBD block device is supported in Proxmox. Ceph comprises a few components that are crucial for you to understand in order to configure and operate the storage. The following components are what Ceph is made of:

Monitor daemon (MON)
Object Storage Daemon (OSD)
OSD Journal
Metadata Server (MDS)
Controlled Replication Under Scalable Hashing map (CRUSH map)
Placement Group (PG)
Pool

MON

Monitor daemons form quorums for a Ceph distributed cluster. There must be a minimum of three monitor daemons configured on separate nodes for each cluster. Monitor daemons can also be configured as virtual machines instead of using physical nodes. Monitors require a very small amount of resources to function, so the allocated resources can be very small. A monitor can be set up through the Proxmox GUI after the initial cluster creation.

OSD

Object Storage Daemons (OSDs) are responsible for the storage and retrieval of actual cluster data. Usually, each physical storage device, such as an HDD or SSD, is configured as a single OSD. Although several OSDs can be configured on a single physical disk, it is not recommended for any production environment at all. Each OSD requires a journal device where data first gets written and later gets transferred to the actual OSD. By storing journals on fast-performing SSDs, we can increase the Ceph I/O performance significantly. Thanks to the Ceph architecture, as more and more OSDs are added into the cluster, the I/O performance also increases. An SSD journal works very well on small clusters with about eight OSDs per node. OSDs can be set up through the Proxmox GUI after the initial MON creation.

OSD Journal

Every single piece of data that is destined for a Ceph OSD first gets written to a journal. A journal allows OSD daemons to write smaller chunks, giving the actual drives more time to commit writes. In simpler terms, all data gets written to a journal first, and the journal filesystem then sends the data to the actual drive for permanent writes. So, if the journal is kept on a fast-performing drive, such as an SSD, incoming data will be written at a much higher speed, while behind the scenes the slower-performing SATA drives can commit the writes at a slower speed. Journals on SSDs can really improve the performance of a Ceph cluster, especially if the cluster is small, with only a few terabytes of data. It should also be noted that a journal failure will take down all the OSDs whose journals are kept on that journal drive. In some environments, it may be necessary to mirror two SSDs in a RAID 1 and use the array for journaling. In a large environment with more than 12 OSDs per node, performance can actually be gained by collocating the journal on the same OSD drive instead of using SSDs for journals.
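When creating an OSD from the CLI, the journal can be pointed at a separate SSD device. A minimal sketch follows, assuming /dev/sdb is the data disk and /dev/sdf is the SSD reserved for journals; both device names are examples, and the exact option syntax should be verified with pveceph help createosd on your version:

# pveceph createosd /dev/sdb -journal_dev /dev/sdf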
MDS

The Metadata Server (MDS) daemon is responsible for providing the Ceph filesystem (CephFS) in a Ceph distributed storage system. MDS can be configured on separate nodes, or coexist with already configured monitor nodes or virtual machines. Although CephFS has come a long way, it is still not fully recommended for use in a production environment. It is worth mentioning here that there are many virtual environments actively running MDS and CephFS without any issues. Currently, it is not recommended to configure more than two MDSs in a Ceph cluster. CephFS is not currently supported by a Proxmox storage plugin. However, it can be configured as a local mount and then connected to a Proxmox cluster through the Directory storage. MDS cannot be set up through the Proxmox GUI as of version 3.4.

CRUSH map

A CRUSH map is the heart of the Ceph distributed storage. The algorithm for storing and retrieving user data in Ceph clusters is laid out in the CRUSH map. CRUSH allows a Ceph client to directly access an OSD. This eliminates a single point of failure and any physical limitations of scalability, since there are no centralized servers or controllers to manage data in and out. Throughout Ceph clusters, CRUSH maintains a map of all MONs and OSDs. CRUSH determines how data should be chunked and replicated among OSDs spread across several local nodes, or even nodes located remotely. A default CRUSH map is created on a freshly installed Ceph cluster. This can be further customized based on user requirements. For smaller Ceph clusters, this map should work just fine. However, when Ceph is deployed with very big data in mind, this map should be customized. A customized map will allow better control of a massive Ceph cluster. To operate Ceph clusters of any size successfully, a clear understanding of the CRUSH map is mandatory. For more details on the Ceph CRUSH map, visit http://ceph.com/docs/master/rados/operations/crush-map/ and http://cephnotes.ksperis.com/blog/2015/02/02/crushmap-example-of-a-hierarchical-cluster-map. As of Proxmox VE 3.4, we cannot customize the CRUSH map through the Proxmox GUI. It can only be viewed through the GUI and edited through a CLI.

PG

In a Ceph storage, data objects are aggregated in groups determined by CRUSH algorithms. This is known as a Placement Group (PG), since CRUSH places each group in various OSDs depending on the replication level set in the CRUSH map and the number of OSDs and nodes. By tracking a group of objects instead of the objects themselves, a massive amount of hardware resources can be saved. It would be impossible to track millions of individual objects in a cluster. The following diagram shows how objects are aggregated in groups and how PGs relate to OSDs:

To balance available hardware resources, it is necessary to assign the right number of PGs. The number of PGs should vary depending on the number of OSDs in a cluster. The following is a table of PG suggestions made by Ceph developers:

Number of OSDs        Number of PGs
Less than 5           128
Between 5 and 10      512
Between 10 and 50     4096

Selecting the proper number of PGs is crucial, since each PG consumes node resources. Too many PGs for the wrong number of OSDs will actually penalize the resource usage of an OSD node, while very few assigned PGs in a large cluster will put data at risk. A rule of thumb is to start with the lowest number of PGs possible, then increase them as the number of OSDs increases. For details on Placement Groups, visit http://ceph.com/docs/master/rados/operations/placement-groups/.
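The commonly cited rule of thumb behind these suggestions targets roughly 100 PGs per OSD; a minimal sketch of the calculation (the OSD and replica counts below are example figures):

# total_pgs = (num_osds * 100) / replica_count, rounded up to a power of two
# Example: 6 OSDs with 2 replicas -> (6 * 100) / 2 = 300 -> round up to 512 PGs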
There's a great PG calculator created by Ceph developers to calculate the recommended number of PGs for various sizes of Ceph clusters at http://ceph.com/pgcalc/.

Pools

Pools in Ceph are like partitions on a hard drive. We can create multiple pools on a Ceph cluster to separate stored data. For example, a pool named accounting can hold all the accounting department data, while another pool can store the human resources data of a company. When creating a pool, assigning the number of PGs is necessary. During the initial Ceph configuration, three default pools are created. They are data, metadata, and rbd. Deleting a pool will delete all stored objects permanently. For details on Ceph and its components, visit http://ceph.com/docs/master/.

The following diagram shows a basic Proxmox+Ceph cluster:

The preceding diagram shows four Proxmox nodes, three Monitor nodes, three OSD nodes, and two MDS nodes comprising a Proxmox+Ceph cluster. Note that Ceph is on a different network than the Proxmox public network. Depending on the replication count that is set, each incoming data object needs to be written more than once. This causes high bandwidth usage. By separating Ceph onto a dedicated network, we can ensure that the Ceph network can fully utilize the bandwidth. On advanced clusters, a third network is created only between Ceph nodes for cluster replication, thus improving network performance even further. As of Proxmox VE 3.4, the same node can be used for both Proxmox and Ceph. This provides a great way to manage all the nodes from the same Proxmox GUI. It is not advisable to put Proxmox VMs on a node that is also configured as Ceph. During day-to-day operations, Ceph nodes do not consume large amounts of resources, such as CPU or memory. However, when Ceph goes into rebalancing mode due to an OSD or node failure, a large amount of data replication occurs, which takes up a lot of resources. Performance will degrade significantly if resources are shared by both VMs and Ceph. Ceph RBD storage can only store .raw virtual disk image files. Ceph itself does not come with a GUI to manage it, so having the option to manage Ceph nodes through the Proxmox GUI makes administrative tasks much easier. Refer to the Monitoring the Ceph storage subsection under the How to do it... section of the Connecting the Ceph RBD storage recipe later in this article to learn how to install a great read-only GUI to monitor Ceph clusters.

Connecting the Ceph RBD storage

In this recipe, we are going to see how to configure a Ceph block storage with a Proxmox cluster.

Getting ready

The initial Ceph configuration on a Proxmox cluster must be accomplished through a CLI. After the Ceph installation, the initial configuration, and the creation of one monitor, all other tasks can be accomplished through the Proxmox GUI.

How to do it...

We will now see how to configure the Ceph block storage with Proxmox.

Installing Ceph on Proxmox

Ceph is not installed by default. Prior to configuring a Proxmox node for the Ceph role, Ceph needs to be installed and the initial configuration must be created through a CLI. The following steps need to be performed on all Proxmox nodes that will be part of the Ceph cluster:

Log in to each node through SSH or a console.
Configure a second network interface to create a separate Ceph network with a different subnet.
Reboot the nodes to initialize the network configuration.
Using the following command, install the Ceph package on each node:

# pveceph install -version giant

Initializing the Ceph configuration

Before Ceph is usable, we have to create the initial Ceph configuration file on one Proxmox+Ceph node. The following steps need to be performed only on one Proxmox node that will be part of the Ceph cluster:

Log in to the node using SSH or a console.
Run the following command to create the initial Ceph configuration:
# pveceph init -network <ceph_subnet>/CIDR
Run the following command to create the first monitor:
# pveceph createmon

Configuring Ceph through the Proxmox GUI

After the initial Ceph configuration and the creation of the first monitor, we can continue with further Ceph configuration through the Proxmox GUI, or simply run the Ceph Monitor creation command on the other nodes. The following steps show how to create Ceph Monitors and OSDs from the Proxmox GUI:

Log in to the Proxmox GUI as root or with any other administrative privilege.
Select the node where the initial monitor was created in the previous steps, and then click on Ceph from the tabbed menu. The following screenshot shows a Ceph cluster as it appears after the initial Ceph configuration:

Since no OSDs have been created yet, it is normal for a new Ceph cluster to show a PGs stuck and unclean error.

Click on Disks on the bottom tabbed menu under Ceph to display the disks attached to the node, as shown in the following screenshot:
Select an available attached disk, then click on the Create: OSD button to open the OSD dialog box, as shown in the following screenshot:
Click on the Journal Disk drop-down menu to select a different device, or collocate the journal on the same OSD by keeping the default.
Click on Create to finish the OSD creation. Create additional OSDs on the Ceph nodes as needed. The following screenshot shows a Proxmox node with three OSDs configured:

By default, Proxmox creates OSDs with an ext3 partition. However, it may sometimes be necessary to create OSDs with a different partition type, due to a requirement or for performance improvement. Enter the following command format through the CLI to create an OSD with a different partition type:

# pveceph createosd -fstype ext4 /dev/sdX

The following steps show how to create Monitors through the Proxmox GUI:

Click on Monitor from the tabbed menu under the Ceph feature. The following screenshot shows the Monitor status with the initial Ceph Monitor we created earlier in this recipe:
Click on Create to open the Monitor dialog box.
Select a Proxmox node from the drop-down menu.
Click on the Create button to start the monitor creation process. Create a total of three Ceph monitors to establish a Ceph quorum. The following screenshot shows the Ceph status with three monitors and OSDs added:

Note that even with three OSDs added, the PGs are still stuck with errors. This is because, by default, the Ceph CRUSH map is set up for two replicas, and so far we have only created OSDs on one node. For successful replication, we need to add some OSDs on the second node so that data objects can be replicated twice. Follow the steps described earlier to create three additional OSDs on the second node. After creating three more OSDs, the Ceph status should look like the following screenshot:

Managing Ceph pools

It is possible to perform basic tasks, such as creating and removing Ceph pools, through the Proxmox GUI. Besides these, we can check the list, status, number of PGs, and usage of the Ceph pools.
The following steps show how to check, create, and remove Ceph pools through the Proxmox GUI:

Click on the Pools tabbed menu under Ceph in the Proxmox GUI. The following screenshot shows the status of the default rbd pool, which has replica 1, 256 PGs, and 0% usage:
Click on Create to open the pool creation dialog box.
Fill in the required information, such as the name of the pool, the replica size, and the number of PGs. Unless the CRUSH map has been fully customized, the ruleset should be left at the default value 0.
Click on OK to create the pool.
To remove a pool, select the pool and click on Remove. Remember that once a Ceph pool is removed, all the data stored in the pool is deleted permanently.

To increase the number of PGs, run the following commands through the CLI:

# ceph osd pool set <pool_name> pg_num <value>
# ceph osd pool set <pool_name> pgp_num <value>

It is only possible to increase the PG value. Once increased, the PG value can never be decreased.

Connecting RBD to Proxmox

Once a Ceph cluster is fully configured, we can proceed to attach it to the Proxmox cluster. During the initial configuration file creation, Ceph also creates an authentication keyring in the /etc/ceph/ceph.client.admin.keyring directory path. This keyring needs to be copied and renamed to match the name of the storage ID to be created in Proxmox. Run the following commands to create a directory and copy the keyring:

# mkdir /etc/pve/priv/ceph
# cd /etc/ceph/
# cp ceph.client.admin.keyring /etc/pve/priv/ceph/<storage>.keyring

For our storage, we are naming it rbd.keyring. After the keyring is copied, we can attach the Ceph RBD storage to Proxmox using the GUI:

Click on Datacenter, then click on Storage from the tabbed menu.
Click on the Add drop-down menu and select the RBD storage plugin.
Enter the information as described in the following table:

Item           Type of value                                                    Entered value
ID             The name of the storage.                                         rbd
Pool           The name of the Ceph pool.                                       rbd
Monitor Host   The IP addresses and port numbers of the Ceph MONs. We can       172.16.0.71:6789;172.16.0.72:6789;172.16.0.73:6789
               enter multiple MON hosts for redundancy.
User name      The default Ceph administrator.                                  admin
Nodes          The Proxmox nodes that will be able to use the storage.          All
Enable         The checkbox for enabling/disabling the storage.                 Enabled

Click on Add to attach the RBD storage. The following screenshot shows the RBD storage under Summary:
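The same attachment can be sketched from the command line with the pvesm storage manager. This is a hedged example; the option names mirror the storage.cfg keys shown later in this recipe, but verify them against pvesm help add on your Proxmox version:

# pvesm add rbd rbd --monhost "172.16.0.71:6789;172.16.0.72:6789;172.16.0.73:6789" --pool rbd --username admin --content images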
Monitoring the Ceph storage

Ceph itself does not come with any GUI to manage or monitor the cluster. We can view the cluster status and perform various Ceph-related tasks through the Proxmox GUI. There is also third-party software that provides a Ceph-only GUI to manage and monitor the cluster. Some of these provide management features, while others provide read-only features for Ceph monitoring. Ceph Dash is one such piece of software, providing an appealing read-only GUI to monitor the entire Ceph cluster without logging on to the Proxmox GUI. Ceph Dash is freely available through GitHub. There are other heavyweight Ceph GUI dashboards, such as Kraken and Calamari. In this section, we are only going to see how to set up the Ceph Dash cluster monitoring GUI. The following steps can be used to download and start Ceph Dash to monitor a Ceph cluster using any browser:

Log in to any Proxmox node that is also a Ceph MON.
Run the following commands to download and start the dashboard:
# mkdir /home/tools
# cd /home/tools
# apt-get install git
# git clone https://github.com/Crapworks/ceph-dash
# cd /home/tools/ceph-dash
# ./ceph_dash.py

Ceph Dash will now start listening on port 5000 of the node. If the node is behind a firewall, open port 5000 or use port forwarding in the firewall. Open any browser and enter <node_ip>:5000 to open the dashboard. The following screenshot shows the dashboard of the Ceph cluster we have created:

We can also monitor the status of the Ceph cluster through a CLI using the following commands:

To check the Ceph status:
# ceph -s
To view OSDs on different nodes:
# ceph osd tree
To display real-time Ceph logs:
# ceph -w
To display a list of Ceph pools:
# rados lspools
To change the number of replicas of a pool:
# ceph osd pool set <pool_name> size <value>

Besides the preceding commands, there are many more CLI commands to manage Ceph and perform advanced tasks. The official Ceph documentation has a wealth of information and how-to guides, along with the CLI commands to perform them. The documentation can be found at http://ceph.com/docs/master/.

How it works…

At this point, we have successfully integrated a Ceph cluster with a Proxmox cluster, comprising six OSDs, three MONs, and three nodes. By viewing the Ceph Status page, we can get a lot of information about the Ceph cluster at a quick glance. From the previous figure, we can see that there are 256 PGs in the cluster and the total cluster storage space is 1.47 TB. A healthy cluster will have the PG status active+clean. Based on the nature of an issue, the PGs can have various states, such as active+unclean, inactive+degraded, active+stale, and so on. To learn details about all the states, visit http://ceph.com/docs/master/rados/operations/pg-states/.

By configuring a second network interface, we can separate the Ceph network from the main network. The # pveceph init command creates a Ceph configuration file in the /etc/pve/ceph.conf directory path. A newly configured Ceph configuration file looks similar to the following screenshot:

Since the ceph.conf configuration file is stored in pmxcfs, any changes made to it are immediately replicated to all the Proxmox nodes in the cluster. As of Proxmox VE 3.4, Ceph RBD can only store the .raw image format. No templates, containers, or backup files can be stored on the RBD block storage. Here is the content of the storage configuration file after adding the Ceph RBD storage:

rbd: rbd
     monhost 172.16.0.71:6789;172.16.0.72:6789;172.16.0.73:6789
     pool rbd
     content images
     username admin

If a situation dictates an IP address change for any node, we can simply edit this content in the configuration file to manually change the IP addresses of the Ceph MON nodes.

See also

To learn about Ceph in greater detail, visit http://ceph.com/docs/master/ for the official Ceph documentation. Also, visit https://indico.cern.ch/event/214784/session/6/contribution/68/material/slides/0.pdf to find out why Ceph is being used at CERN to store the massive data generated by the Large Hadron Collider (LHC).

Summary

In this article, we came across different configurations for a variety of storage categories and got hands-on practice with the various stages of configuring the Ceph RBD storage.

Resources for Article:

Further resources on this subject: Deploying New Hosts with vCenter [article] Let's Get Started with Active Directory [article] Basic Concepts of Proxmox Virtual Environment [article]

Creating Horizon Desktop Pools

Packt
16 May 2016
17 min read
A Horizon desktop pool is a collection of desktops that users select when they log in using the Horizon client. A pool can be created based on a subset of users, such as finance, but this is not explicitly required unless you will be deploying multiple virtual desktop master images. The pool can be thought of as a central point of desktop management within Horizon; from it you create, manage, and entitle access to Horizon desktops. This article by Jason Ventresco, author of the book Implementing VMware Horizon View 6.X, will discuss how to create a desktop pool using the Horizon Administrator console, an important administrative task. (For more resources related to this topic, see here.)

Creating a Horizon desktop pool

This section will provide an example of how to create two different Horizon dedicated assignment desktop pools, one based on Horizon Composer linked clones and another based on full clones. Horizon Instant Clone pools only support floating assignment, so they have fewer options compared to the other types of desktop pools. Also discussed will be how to use the Horizon Administrator console and the vSphere client to monitor the provisioning process. The examples provided for full clone and linked clone pools create dedicated assignment pools, although floating assignment pools may be created as well. The options will be slightly different for each, so refer to the information provided in the Horizon documentation (https://www.vmware.com/support/pubs/view_pubs.html) to understand what each setting means. Additionally, the Horizon Administrator console often explains each setting within the desktop pool configuration screens.

Creating a pool using Horizon Composer linked clones

The following steps outline how to use the Horizon Administrator console to create a dedicated assignment desktop pool using Horizon Composer linked clones. As discussed previously, it is assumed that you already have a virtual desktop master image that you have created a snapshot of. During each stage of the pool creation process, a description of many of the settings is displayed on the right-hand side of the Add Desktop Pool window. In addition, a question mark appears next to some of the settings; click on it to read important information about the specified setting.

Log on to the Horizon Administrator console using an AD account that has administrative permissions within Horizon.
Open the Catalog | Desktop Pools window within the console.
Click on the Add… button in the Desktop Pools window to open the Add Desktop Pool window.
In the Desktop Pool Definition | Type window, select the Automated Desktop Pool radio button as shown in the following screenshot, and then click on Next >:
In the Desktop Pool Definition | User Assignment window, select the Dedicated radio button and check the Enable automatic assignment checkbox as shown in the following screenshot, and then click on Next >:
In the Desktop Pool Definition | vCenter Server window, select the View Composer linked clones radio button, highlight the vCenter server as shown in the following screenshot, and then click on Next >:
In the Setting | Desktop Pool Identification window, populate the pool ID: as shown in the following screenshot; optionally, configure the Display Name: field. When finished, click on Next >:
In the Setting | Desktop Pool Settings window, configure the various settings for the desktop pool. These settings can also be adjusted later if desired.
When finished, click on Next >:
In the Setting | Provisioning Settings window, configure the various provisioning options for the desktop pool, which include the desktop naming format, the number of desktops, and the number of desktops that should remain available during Horizon Composer maintenance operations. When finished, click on Next >:

When creating a desktop naming pattern, use {n} to instruct Horizon to insert a unique number in the desktop name. For example, using Win10x64{n} as shown in the preceding screenshot will name the first desktop Win10x641, the next Win10x642, and so on.

In the Setting | View Composer Disks window, configure the settings for your optional linked clone disks. By default, both a Persistent Disk for user data and a non-persistent disk for Disposable File Redirection are created. When finished, click on Next >:
In the Setting | Storage Optimization window, we configure whether or not our desktop storage is provided by VMware Virtual SAN, and if not, whether or not to separate our Horizon desktop replica disks from the individual desktop OS disks. In our example, we have checked the Use VMware Virtual SAN radio button, as that is what our destination vSphere cluster is using. When finished, click on Next >:

As all-flash storage arrays and all-flash or flash-dependent Software Defined Storage (SDS) platforms become more common, there is less of a need to place the shared linked clone replica disks on separate, faster datastores than the individual desktop OS disks.

In the Setting | vCenter Settings window, we will need to configure six different options that include selecting the parent virtual machine, which snapshot of that virtual machine to use, what vCenter folder to place the desktops in, what vSphere cluster and resource pool to deploy the desktops to, and what datastores to use. Click on the Browse… button next to the Parent VM: field to begin the process and open the Select Parent VM window:
In the Select Parent VM window, highlight the virtual desktop master image that you wish to deploy desktops from, as shown in the following screenshot. Click on OK when the image is selected to return to the previous window:

The virtual machine will only appear if a snapshot has been created.

In the Setting | vCenter Settings window, click on the Browse… button next to the Snapshot: field to open the Select default image window. Select the desired snapshot, as shown in the following screenshot, and click on OK to return to the previous window:
In the Setting | vCenter Settings window, click on the Browse… button next to the VM folder location: field to open the VM Folder Location window, as shown in the following screenshot. Select the folder within vCenter where you want the desktop virtual machines to be placed, and click on OK to return to the previous window:
In the Setting | vCenter Settings window, click on the Browse… button next to the Host or cluster: field to open the Host or Cluster window, as shown in the following screenshot. Select the cluster or individual ESXi server within vCenter where you want the desktop virtual machines to be created, and click on OK to return to the previous window:
In the Setting | vCenter Settings window, click on the Browse… button next to the Resource pool: field to open the Resource Pool window, as shown in the following screenshot. If you intend to place the desktops within a resource pool, you would select that here; if not, select the same cluster or ESXi server you chose in the previous step.
Once finished, click on OK to return to the previous window:
In the Setting | vCenter Settings window, click on the Browse… button next to the Datastores: field to open the Select Linked Clone Datastores window, as shown in the following screenshot. Select the datastore or datastores where you want the desktops to be created, and click on OK to return to the previous window:

If you were using storage other than VMware Virtual SAN, and had opted to use separate datastores for your OS and replica disks in step 11, you would have had to select unique datastores for each here instead of just one. Additionally, you would have had the option to configure the storage overcommit level.

The Setting | vCenter Settings window should now have all options selected, enabling the Next > button. When finished, click on Next >:
In the Setting | Advanced Storage Options window, if desired, select and configure the Use View Storage Accelerator and Other Options checkboxes to enable those features. In our example, we have enabled both the Use View Storage Accelerator and Reclaim VM disk space options, and configured Blackout Times to ensure that these operations do not occur between 8 A.M. (08:00) and 5 P.M. (17:00) on weekdays. When finished, click on Next >:

The Use native NFS snapshots (VAAI) feature enables Horizon to leverage features of a supported NFS storage array to offload the creation of linked clone desktops. If you are using an external array with your Horizon ESXi servers, consult the product documentation to understand whether it supports this feature. Since we are using VMware Virtual SAN, this and the other options under Other Options are greyed out, as these settings are not needed. Additionally, if View Storage Accelerator is not enabled in the vCenter Server settings, the option to use it would be greyed out here.

In the Setting | Guest Customization window, select the Domain: where the desktops will be created, the AD container: where the computer accounts will be placed, whether to Use QuickPrep or Use a customization specification (Sysprep), and any other options as required. When finished, click on Next >:
In the Setting | Ready to Complete window, verify that the settings we selected are correct, using the < Back button if needed to go back and make changes. If all the settings are correct, click on Finish to initiate the creation of the desktop pool. The Horizon desktop pool and virtual desktops will now be created.

Creating a pool using Horizon Instant Clones

The process used to create an Instant Clone desktop pool is similar to that used to create a linked clone pool. As discussed previously, it is assumed that you already have a virtual desktop master image that has the Instant Clone option enabled in the Horizon agent, and that you have taken a snapshot of that master image. A master image can have either the Horizon Composer (linked clone) option or the Instant Clone option enabled in the Horizon agent, but not both. To get around this restriction, you can configure one snapshot of the master image with the View Composer option installed, and a second with the Instant Clone option installed. The following steps outline the process used to create the Instant Clone desktop pool. Screenshots are included only when the step differs significantly from the same step in the Creating a pool using Horizon Composer linked clones section.

Log on to the Horizon Administrator console using an AD account that has administrative permissions within Horizon.
Open the Catalog | Desktop Pools window within the console.
Click on the Add… button in the Desktop Pools window to open the Add Desktop Pool window.
In the Desktop Pool Definition | Type window, select the Automated Desktop Pool radio button, and then click on Next >.
In the Desktop Pool Definition | User Assignment window, select the Floating radio button (mandatory for Instant Clone desktops), and then click on Next >.
In the Desktop Pool Definition | vCenter Server window, select the Instant Clones radio button as shown in the following screenshot, highlight the vCenter server, and then click on Next >:

If Instant Clones is greyed out here, it is usually because you did not select Floating in the previous step.

In the Setting | Desktop Pool Identification window, populate the pool ID:, and then click on Next >. Optionally, configure the Display Name: field.
In the Setting | Desktop Pool Settings window, configure the various settings for the desktop pool. These settings can also be adjusted later if desired. When finished, click on Next >.
In the Setting | Provisioning Settings window, configure the various provisioning options for the desktop pool, which include the desktop naming format, the number of desktops, and the number of desktops that should remain available during maintenance operations. When finished, click on Next >.

Instant Clones are required to always be powered on, so some options available to linked clones will be greyed out here.

In the Setting | Storage Optimization window, we configure whether or not our desktop storage is provided by VMware Virtual SAN, and if not, whether or not to separate our Horizon desktop replica disks from the individual desktop OS disks. When finished, click on Next >.
In the Setting | vCenter Settings window, we will need to configure six different options that include selecting the parent virtual machine, which snapshot of that virtual machine to use, what vCenter folder to place the desktops in, what vSphere cluster and resource pool to deploy the desktops to, and what datastores to use. Click on the Browse… button next to the Parent VM: field to begin the process and open the Select Parent VM window.
In the Select Parent VM window, highlight the virtual desktop master image that you wish to deploy desktops from. Click on OK when the image is selected to return to the previous window.
In the Setting | vCenter Settings window, click on the Browse… button next to the Snapshot: field to open the Select default image window. Select the desired snapshot, and click on OK to return to the previous window.
In the Setting | vCenter Settings window, click on the Browse… button next to the VM folder location: field to open the VM Folder Location window. Select the folder within vCenter where you want the desktop virtual machines to be placed, and click on OK to return to the previous window.
In the Setting | vCenter Settings window, click on the Browse… button next to the Host or cluster: field to open the Host or Cluster window. Select the cluster or individual ESXi server within vCenter where you want the desktop virtual machines to be created, and click on OK to return to the previous window.
In the Setting | vCenter Settings window, click on the Browse… button next to the Resource pool: field to open the Resource Pool window.
If you intend to place the desktops within a resource pool, you would select that here; if not, select the same cluster or ESXi server you chose in the previous step. Once finished, click on OK to return to the previous window.
In the Setting | vCenter Settings window, click on the Browse… button next to the Datastores: field to open the Select Instant Clone Datastores window. Select the datastore or datastores where you want the desktops to be created, and click on OK to return to the previous window.
The Setting | vCenter Settings window should now have all options selected, enabling the Next > button. When finished, click on Next >.
In the Setting | Guest Customization window, select the Domain: where the desktops will be created, the AD container: where the computer accounts will be placed, and any other options as required. When finished, click on Next >.

Instant Clones only support ClonePrep for customization, so there are fewer options here than seen when deploying a linked clone desktop pool.

In the Setting | Ready to Complete window, verify that the settings we selected are correct, using the < Back button if needed to go back and make changes. If all the settings are correct, click on Finish to initiate the creation of the desktop pool. The Horizon desktop pool and Instant Clone virtual desktops will now be created.

Creating a pool using full clones

The process used to create a full clone desktop pool is similar to that used to create a linked clone pool. As discussed previously, it is assumed that you already have a virtual desktop master image that you have converted to a vSphere template. In addition, if you wish for Horizon to perform the virtual machine customization, you will need to create a Customization Specification using the vCenter Customization Specifications Manager. The Customization Specification is used by the Windows Sysprep utility to complete the guest customization process. Visit the VMware vSphere virtual machine administration guide (http://pubs.vmware.com/vsphere-60/index.jsp) for instructions on how to create a Customization Specification. The following steps outline the process used to create the full clone desktop pool. Screenshots are included only when the step differs significantly from the same step in the Creating a pool using Horizon Composer linked clones section.

Log on to the Horizon Administrator console using an AD account that has administrative permissions within Horizon.
Open the Catalog | Desktop Pools window within the console.
Click on the Add… button in the Desktop Pools window to open the Add Desktop Pool window.
In the Desktop Pool Definition | Type window, select the Automated Pool radio button and then click on Next.
In the Desktop Pool Definition | User Assignment window, select the Dedicated radio button, check the Enable automatic assignment checkbox, and then click on Next.
In the Desktop Pool Definition | vCenter Server window, click the Full virtual machines radio button, highlight the desired vCenter server, and then click on Next.
In the Setting | Desktop Pool Identification window, populate the pool ID: and Display Name: fields and then click on Next.
In the Setting | Desktop Pool Settings window, configure the various settings for the desktop pool. These settings can also be adjusted later if desired. When finished, click on Next >.
In the Setting | Provisioning Settings window, configure the various provisioning options for the desktop pool, which include the desktop naming format and the number of desktops.
When finished, click on Next >.
In the Setting | Storage Optimization window, we configure whether or not our desktop storage is provided by VMware Virtual SAN. When finished, click on Next >.
In the Setting | vCenter Settings window, we will need to configure settings that select the virtual machine template, the vSphere folder to place the desktops in, the ESXi server or cluster to deploy the desktops to, and the datastores to use. Other than the Template: setting described in the next step, each of these settings is identical to those seen when creating a Horizon Composer linked clone pool. Click on the Browse… button next to each of the settings in turn and select the appropriate options.
To configure the Template: setting, select the vSphere template that you created from your virtual desktop master image as shown in the following screenshot, and then click OK to return to the previous window:

A template will only appear if one is present within vCenter.

Once all the settings in the Setting | vCenter Settings window have been configured, click on Next >.
In the Setting | Advanced Storage Options window, if desired, select and configure the Use View Storage Accelerator radio button and configure Blackout Times. When finished, click on Next >.
In the Setting | Guest Customization window, select either the None | Customization will be done manually or the Use this customization specification radio button, and if applicable, select a customization specification. When finished, click on Next >. In the following screenshot, we have selected the Win10x64-HorizonFC customization specification that we previously created within vCenter:

Manual customization is typically used when the template has been configured to run Sysprep automatically upon startup, without requiring any interaction from either Horizon or VMware vSphere.

In the Setting | Ready to Complete window, verify that the settings we selected are correct, using the < Back button if needed to go back and make changes. If all the settings are correct, click on Finish to initiate the creation of the desktop pool. The desktop pool and virtual desktops will now be created.

Summary

In this article, we learned about Horizon desktop pools. In addition to learning how to create three different types of desktop pools, we were introduced to a number of key concepts that are part of the pool creation process.

Resources for Article:

Further resources on this subject: Essentials of VMware vSphere [article] Cloning and Snapshots in VMware Workstation [article] An Introduction to VMware Horizon Mirage [article]

Design, Install, and Configure

Packt
18 Mar 2014
4 min read
(For more resources related to this topic, see here.)

In this article, we will cover the following key subjects:

Horizon Workspace Architecture Overview
Designing a solution
Sizing guidelines
vApp deployment
Step-by-step configuration
Installing certificates
Setting up Kerberos Single Sign-On (SSO)

Reading this article will provide you with an introduction to the solution, and also provide you with useful reference points throughout the book that will help you install, configure, and manage a Horizon Workspace deployment. A few things are out of scope for this article, such as setting up vSphere, configuring HA, and using certificates. We will assume that the core infrastructure is already in place. We start by looking at the solution architecture and then at how to size a deployment, based on best practice and suited to meet the requirements of your end users. Next, we will cover the preparation steps in vCenter and then deploy the Horizon Workspace vApp. There are then two steps to installation and configuration. First, we will guide you through the initial command-line-based setup and then, finally, the web-based Setup Wizard. Each section is described in easy-to-follow steps, and shown in detail using actual screenshots of our lab deployment. So let's get started with the architecture overview.

The Horizon Workspace architecture

The following diagram shows a more detailed view of how the architecture fits together:

The Horizon Workspace sizing guide

We have already discussed that Horizon Workspace is made up of five virtual appliances. However, for production environments, you will need to deploy multiple instances to provide high availability, offer load balancing, and support the number of users in your environment. For a Proof of Concept (POC) or pilot deployment, this is of less importance.

Sizing the Horizon Workspace virtual appliances

The following diagram shows the maximum number of users that each appliance can accommodate. Using these maximum values, you can calculate the number of appliances that you need to deploy. For example, if you had 6,000 users in your environment, you would need to deploy a single connector-va appliance, three gateway-va appliances, one service-va appliance, seven data-va appliances, and a single configurator-va appliance. Please note that data-va should be sized using N+1; the first data-va appliance should never contain any user data. For high availability, you may want to use two connector-va appliances and two service-va appliances.

Sizing for Preview services

If you plan to use a Microsoft Preview Server, it needs to be sized based on the requirements shown in the following diagram:

If we use our previous example of 6,000 users, then to use Microsoft Preview, you would require a total of six Microsoft Preview Servers.

The Horizon Workspace database

You have a few options for the database. For a POC or pilot environment, you can use the internal database functionality. In a production deployment, you would use an external database, either VMware PostgreSQL or Oracle 11g. This allows you to have a highly available and protected database. The VMware recommendation is PostgreSQL, and the following diagram details the sizing information for the Horizon Workspace database:

External access and load balancing considerations

In a production environment, high availability, redundancy, and external access are core requirements. This needs planning and configuration.
External access and load balancing considerations

In a production environment, high availability, redundancy, and external access are core requirements, and they need planning and configuration. For a POC or pilot deployment, this is usually not of high importance, but it is something to be aware of.

To achieve high availability and redundancy, a load balancer is required in front of the gateway-va and the connector-va appliances that are used for Kerberos (Windows authentication). If external access is required, then typically you will also need a load balancer in the Demilitarized Zone (DMZ); this is detailed in the diagram at the beginning of this article. It is not supported to place gateway-va appliances in the DMZ.

For more detailed information about load balancing, please visit the following guide: https://communities.vmware.com/docs/DOC-24577

Summary

In this article, we had an overview of the Horizon Workspace architecture and made sure that all the prerequisites are in place before we deploy the Horizon Workspace vApp. This article covers the basic sizing, configuration, and installation of Horizon Workspace 1.5.

Resources for Article:

Further resources on this subject:
An Introduction to VMware Horizon Mirage [Article]
Windows 8 with VMware View [Article]
Cloning and Snapshots in VMware Workstation [Article]


Chaos engineering comes to Kubernetes thanks to Gremlin

Richard Gall
18 Nov 2019
2 min read
Kubernetes causes problems. Just last week Cindy Sridharan wrote on Twitter that while Docker "succeeded... because it was a great developer tool," Kubernetes "decided to be all things tech and not much by way of UX. It was and remains a hostile piece of software to learn, run, operate, maintain."

https://twitter.com/copyconstruct/status/1194701905248673792?s=20

That's just a drop in the ocean - you don't have to look hard to find more hot takes, jokes, and memes about how complicated working with Kubernetes can feel. Despite all this, it's certainly here to stay. That makes chaos engineering platform Gremlin's announcement that it will offer native support for Kubernetes particularly welcome.

Citing container orchestration research done by Datadog in the press release, which indicates the rapid rate of Kubernetes adoption, Gremlin is hoping that it can provide some additional support for users who might be concerned about the platform's complexity.

From last year: Gremlin makes chaos engineering with Docker easier with new container discovery feature

Gremlin CTO Matt Fornaciari said "our goal is to provide SRE and DevOps teams that are building and deploying modern applications with the tools and processes necessary to understand how their systems handle failure, before that failure has the chance to impact customers and business."

The new feature is designed to help engineers do exactly that by allowing them "to automate the process of identifying Kubernetes primitives such as nodes and pods," and to select and attack traffic from different services within Kubernetes.

The other important element to all this is that Gremlin wants to make things as straightforward as possible for engineering teams. With a neat and easy-to-use UI, it would seem that, to return to Sridharan's words, the team is eager to make sure their product is "a great developer tool."

The tool has already been tried and tested in the wild. Simon Govier, Expedia's Director of Program Management, described how performing chaos experiments on Kubernetes with Gremlin "significantly reduces the amount of time it takes to do fault injection and increases our systems' resilience to failure."

Learn more on the Gremlin website.


Hyper-V Basics

Packt
06 Feb 2015
10 min read
This article by Vinith Menon, the author of Microsoft Hyper-V PowerShell Automation, delves into the basics of Hyper-V, right from installing Hyper-V to resizing virtual hard disks. The Hyper-V PowerShell module includes several significant features that extend its use, improve its usability, and allow you to control and manage your Hyper-V environment with more granular control. Various organizations have moved on from Hyper-V (V2) to Hyper-V (V3). In Hyper-V (V2), the Hyper-V management shell was not built in and the PowerShell module had to be installed manually. In Hyper-V (V3), Microsoft has provided an exhaustive set of cmdlets that can be used to manage and automate all configuration activities of the Hyper-V environment. The cmdlets are executed across the network using Windows Remote Management.

In this article, we will cover:

- The basics of setting up a Hyper-V environment using PowerShell
- The fundamental concepts of Hyper-V management with the Hyper-V management shell
- The updated features in Hyper-V

(For more resources related to this topic, see here.)

Here is a list of the new features introduced in Hyper-V in Windows Server 2012 R2. We will be going in depth through the important changes that have come into the Hyper-V PowerShell module with the following features and functions:

- Shared virtual hard disk
- Resizing the live virtual hard disk

Installing and configuring your Hyper-V environment

Installing and configuring Hyper-V using PowerShell

Before you proceed with the installation and configuration of Hyper-V, there are some prerequisites that need to be taken care of:

- The user account that is used to install the Hyper-V role should have administrative privileges on the computer
- There should be enough RAM on the server to run newly created virtual machines

Once the prerequisites have been taken care of, let's start with installing the Hyper-V role:

1. Open a PowerShell prompt in Run as Administrator mode.
2. Type the following into the PowerShell prompt to install the Hyper-V role along with the management tools; once the installation is complete, the Hyper-V server will reboot and the Hyper-V role will be successfully installed:

Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart

3. Once the server boots up, verify the installation of Hyper-V using the Get-WindowsFeature cmdlet:

Get-WindowsFeature -Name hyper*

You will be able to see that the Hyper-V role, the Hyper-V PowerShell management shell, and the GUI management tools are successfully installed.

Fundamental concepts of Hyper-V management with the Hyper-V management shell

In this section, we will look at some of the fundamental concepts of Hyper-V management with the Hyper-V management shell. Once you get the Hyper-V role installed as per the steps illustrated in the previous section, a PowerShell module to manage your Hyper-V environment is also installed. Now, perform the following steps:

1. Open a PowerShell prompt in Run as Administrator mode.
2. PowerShell uses cmdlets that are built using a verb-noun naming system (for more details, refer to Learning Windows PowerShell Names at http://technet.microsoft.com/en-us/library/dd315315.aspx). Type the following command into the PowerShell prompt to get a list of all the cmdlets in the Hyper-V PowerShell module:

Get-Command -Module Hyper-V

Hyper-V in Windows Server 2012 R2 ships with about 178 cmdlets.
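With that many cmdlets, it helps to map the module before diving in. The following exploration one-liners are a small sketch using only the cmdlets shown above plus the standard Group-Object; the VMSwitch filter is just an illustration:

# Which verbs dominate the Hyper-V module?
Get-Command -Module Hyper-V | Group-Object Verb | Sort-Object Count -Descending | Select-Object Count, Name

# List only the cmdlets that deal with virtual switches
Get-Command -Module Hyper-V -Noun VMSwitch*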
These cmdlets allow a Hyper-V administrator to handle everything from very simple, basic tasks to advanced ones such as setting up a Hyper-V replica for virtual machine disaster recovery. To get the count of all the available Hyper-V cmdlets, you can type the following command in PowerShell:

Get-Command -Module Hyper-V | Measure-Object

The Hyper-V PowerShell cmdlets follow a very simple approach and are very user friendly; the cmdlet name itself communicates its functionality to the Hyper-V administrator. The following screenshot shows the output of the Get-Command command:

For example, in the following screenshot, the Remove-VMSwitch cmdlet itself says that it's used to delete a previously created virtual machine switch:

If the administrator is still not sure about the task that can be performed by a cmdlet, he or she can get help with detailed examples using the Get-Help cmdlet. To get help on a cmdlet, type the cmdlet name in the prescribed format. To make sure that the latest version of the help files is installed on the server, run the Update-Help cmdlet before executing the following cmdlet:

Get-Help <Hyper-V cmdlet> -Full

The following screenshot is an example of the Get-Help cmdlet:

Shared virtual hard disks

This new and improved feature in Windows Server 2012 R2 allows an administrator to share a virtual hard disk file (the .vhdx file format) between multiple virtual machines. These .vhdx files can be used as shared storage for a failover cluster created between virtual machines (also known as guest clustering). A shared virtual hard disk allows you to create data disks and witness disks using .vhdx files, with some advantages:

- Shared disks are ideal for SQL database files and file servers
- Shared disks can be run on generation 1 and generation 2 virtual machines

This new feature allows you to save on storage costs and use .vhdx files for guest clustering, enabling easier deployment compared to virtual Fibre Channel or Internet Small Computer System Interface (iSCSI), which are complicated and require storage configuration changes such as zoning and Logical Unit Number (LUN) masking. In Windows Server 2012 R2, virtual SCSI disks (both shared and unshared virtual hard disk files) show up as virtual SAS disks when you add a SCSI hard disk to a virtual machine. Shared virtual hard disk (.vhdx) files can be placed on Cluster Shared Volumes (CSV) or on a Scale-Out File Server cluster.

Let's look at the ways you can automate and manage your shared .vhdx guest clustering configuration using PowerShell. In the following example, we will demonstrate how you can create a two-node file server cluster using the shared VHDX feature, setting up a testing environment within which we can start learning these new features.

We will start by creating two virtual machines, each with a 50 GB OS drive containing a sysprepped image of Windows Server 2012 R2. Each virtual machine will have 4 GB RAM and four virtual CPUs. D:\vhd\base_1.vhdx and D:\vhd\base_2.vhdx are already existing VHDX files with a sysprepped image of Windows Server 2012 R2. The following code is used to create the two virtual machines:

New-VM -Name "Fileserver_VM1" -MemoryStartupBytes 4GB -NewVHDPath D:\vhd\base_1.vhdx -NewVHDSizeBytes 50GB
New-VM -Name "Fileserver_VM2" -MemoryStartupBytes 4GB -NewVHDPath D:\vhd\base_2.vhdx -NewVHDSizeBytes 50GB
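One detail worth noting: New-VM creates a virtual machine with a single virtual CPU by default, so the four vCPUs planned above still need to be set. A short sketch using the standard Hyper-V cmdlets, run while the VMs are still powered off:

Set-VMProcessor -VMName Fileserver_VM1 -Count 4
Set-VMProcessor -VMName Fileserver_VM2 -Count 4

# Verify name, state, vCPU count, and startup memory before moving on
Get-VM Fileserver_VM1, Fileserver_VM2 | Select-Object Name, State, ProcessorCount, MemoryStartup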
Next, we will install the file server role and configure a failover cluster on both virtual machines using PowerShell. You need to enable PowerShell remoting on both file servers and also have them joined to a domain. The following is the code:

Install-WindowsFeature -ComputerName Fileserver_VM1 File-Services, FS-FileServer, Failover-Clustering
Install-WindowsFeature -ComputerName Fileserver_VM1 RSAT-Clustering -IncludeAllSubFeature
Install-WindowsFeature -ComputerName Fileserver_VM2 File-Services, FS-FileServer, Failover-Clustering
Install-WindowsFeature -ComputerName Fileserver_VM2 RSAT-Clustering -IncludeAllSubFeature

Once we have the virtual machines created and the file server and failover clustering features installed, we will create the failover cluster as per Microsoft's best practices using the following cmdlet. You will need to choose a name and IP address that fit your organization:

New-Cluster -Name Cluster1 -Node FileServer_VM1, FileServer_VM2 -StaticAddress 10.0.0.59 -NoStorage -Verbose

Next, we will create two VHDX files named sharedvhdx_data.vhdx (which will be used as a data disk) and sharedvhdx_quorum.vhdx (which will be used as the quorum or witness disk). To do this, the following commands need to be run on the Hyper-V cluster:

New-VHD -Path C:\ClusterStorage\Volume1\sharedvhdx_data.VHDX -Fixed -SizeBytes 10GB
New-VHD -Path C:\ClusterStorage\Volume1\sharedvhdx_quorum.VHDX -Fixed -SizeBytes 1GB

Once we have created these virtual hard disk files, we will attach the newly created VHDX files to the Fileserver_VM1 and Fileserver_VM2 virtual machines as shared VHDX files for guest clustering:

Add-VMHardDiskDrive -VMName Fileserver_VM1 -Path C:\ClusterStorage\Volume1\sharedvhdx_data.VHDX -ShareVirtualDisk
Add-VMHardDiskDrive -VMName Fileserver_VM2 -Path C:\ClusterStorage\Volume1\sharedvhdx_data.VHDX -ShareVirtualDisk

Finally, we will bring the disks online and add them to the failover cluster using the following command:

Get-ClusterAvailableDisk | Add-ClusterDisk

Once the preceding steps have been executed, we will have a highly available file server infrastructure using shared VHD files.

Live virtual hard disk resizing

With Windows Server 2012 R2, a newly added feature in Hyper-V allows administrators to expand or shrink the size of a virtual hard disk attached to the SCSI controller while the virtual machine is still running. Hyper-V administrators can now perform maintenance operations on a live VHD and avoid downtime, without temporarily shutting down the virtual machine for these maintenance activities. Prior to Windows Server 2012 R2, resizing a VHD attached to a virtual machine required it to be turned off, leading to costly downtime. Using the GUI, the VHD resize can be done only with the Edit Virtual Hard Disk wizard. Also, note that VHDs that were previously expanded can be shrunk. The Windows PowerShell way of doing a VHD resize is by using the Resize-VirtualDisk cmdlet.

Let's look at the ways you can automate a VHD resize using PowerShell. In the next example, we will demonstrate how you can expand and shrink a virtual hard disk connected to a VM's SCSI controller. We will continue using the virtual machine that we created for our previous example. We have a pre-created VHD of 50 GB that is connected to the virtual machine's SCSI controller.
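Before touching the disk, it is worth confirming what is attached and its current size from the host. A quick check with the Hyper-V module; the .vhdx path shown here is hypothetical, so substitute your own:

# What is attached to the VM's SCSI controller?
Get-VMHardDiskDrive -VMName Fileserver_VM1 -ControllerType SCSI

# Inspect the current virtual size of the disk
Get-VHD -Path 'D:\vhd\scsidisk.vhdx' | Select-Object Path, VhdFormat, Size, FileSize

# For comparison, the Hyper-V module's own Resize-VHD cmdlet performs the same
# online expansion directly against the file, for example:
# Resize-VHD -Path 'D:\vhd\scsidisk.vhdx' -SizeBytes 57GB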
Expanding the virtual hard disk

Let's resize the aforementioned virtual hard disk to 57 GB using the Resize-VirtualDisk cmdlet:

Resize-VirtualDisk -Name "scsidisk" -Size (57GB)

Next, if we open the VM settings and perform an inspect disk operation, we'll be able to see that the VHDX file size has become 57 GB.

You can also verify this by logging into the VM, opening disk management, and extending the unused partition. You can see that the disk size has increased to 57 GB.

Shrinking the virtual hard disk

Now let's shrink the earlier mentioned VHD back to 50 GB using the Resize-VirtualDisk cmdlet.

For this exercise, the primary requirement is to first shrink the disk partition from inside the VM using disk management; as you can see in the following screenshot, we're shrinking the partition by 7 GB.

Next, click on Shrink. Once you complete this step, you will see 7 GB of unallocated space. You can also execute this step using the Resize-Partition PowerShell cmdlet:

Get-Partition -DiskNumber 1 | Resize-Partition -Size 50GB

The following screenshot shows the partition:

Next, we will resize/shrink the VHD to 50 GB:

Resize-VirtualDisk -Name "scsidisk" -Size (50GB)

Once the previous steps have been executed successfully, run a disk re-scan in disk management and you will see that the disk size is 50 GB.

Summary

In this article, we went through the basics of setting up a Hyper-V environment using PowerShell. We also explored the fundamental concepts of Hyper-V management with the Hyper-V management shell.

Resources for Article:

Further resources on this subject:
Hyper-V building blocks for creating your Microsoft virtualization platform [article]
The importance of Hyper-V Security [article]
Network Access Control Lists [article]

Metrics in vRealize Operations

Packt
26 Dec 2014
25 min read
In this article by Iwan 'e1' Rahabok, author of VMware vRealize Operations Performance and Capacity Management, we will learn that vSphere 5.5 comes with many counters, many more than what a physical server provides. There are new counters that do not have a physical equivalent, such as memory ballooning, CPU latency, and vSphere replication. In addition, some counters have the same name as their physical world counterpart but behave differently in vSphere. Memory usage is a common one, resulting in confusion among system administrators. For those counters that are similar to their physical world counterparts, vSphere may use different units, such as milliseconds.

(For more resources related to this topic, see here.)

As a result, experienced IT administrators find it hard to master vSphere counters by building on their existing knowledge. Instead of trying to relate each counter to its physical equivalent, I find it useful to group them according to their purpose.

Virtualization formalizes the relationship between the infrastructure team and the application team. The infrastructure team changes from system builder to service provider. The application team no longer owns the physical infrastructure; it becomes a consumer of a shared service, the virtual platform. Depending on the Service Level Agreement (SLA), the application team can be served as if they have dedicated access to the infrastructure, or they can take a performance hit in exchange for a lower price. For SLAs where performance matters, a VM running in the cluster should not be impacted by any other VMs; its performance must be as good as if it were the only VM running on the ESXi host.

Because there are two different counter users, there are two different purposes. The application team (developers and the VM owner) only cares about their own VM. The infrastructure team has to care about both the VM and the infrastructure, especially when they need to show that the shared infrastructure is not a bottleneck. One set of counters is used to monitor the VM; the other set is used to monitor the infrastructure. The following diagram shows the two different purposes and what we should check for each. By knowing what matters on each layer, we can better manage the virtual environment.

The two-tier IT organization

At the VM layer, we care whether the VM is being served well by the platform. Other VMs are irrelevant from the VM owner's point of view. A VM owner only wants to make sure his or her VM is not contending for a resource, so the key counter here is contention. Only when we are satisfied that there is no contention can we proceed to check whether the VM is sized correctly or not. Most people check for utilization first because that is what they are used to monitoring in the physical infrastructure; in a virtual environment, we should check for contention first.

At the infrastructure layer, we care whether it serves everyone well. Make sure that there is no contention for resources among all the VMs on the platform. Only when the infrastructure is clear of contention can we troubleshoot a particular VM. If the infrastructure is having a hard time serving the majority of the VMs, there is no point troubleshooting a particular VM.

This two-layer concept is also implemented by vSphere in its compute and storage architectures. For example, there are two distinct layers of memory in vSphere: the individual VM memory provided by the hypervisor and the physical memory at the host level.
For an individual VM, we care whether the VM is getting enough memory. At the host level, we care whether the host has enough memory for everyone. Because of the difference in goals, we look for a different set of counters.

In the previous diagram, there are two numbers shown in a large font, indicating that there are two main steps in monitoring. Each step applies to each layer (the VM layer and the infrastructure layer), so there are two numbers for each step. Step 1 is used for performance management; it is useful during troubleshooting or when checking whether we are meeting performance SLAs. Step 2 is used for capacity management; it is useful as part of long-term capacity planning. The time period for step 2 is typically 3 months, as we are checking for overall utilization and not a one-off spike.

With the preceding concept in mind, we are ready to dive into more detail. Let's cover compute, network, and storage.

Compute

The following diagram shows how a VM gets its resources from ESXi. It is a pretty complex diagram, so let me walk you through it.

The tall rectangular area represents a VM. Say this VM is given 8 GB of virtual RAM. The bottom line represents 0 GB and the top line represents 8 GB. The VM is configured with 8 GB RAM; we call this Provisioned. This is what the Guest OS sees, so if it is running Windows, you will see 8 GB RAM when you log into Windows.

Unlike a physical server, you can configure a Limit and a Reservation. This is done outside the Guest OS, so Windows or Linux does not know. You should minimize the use of Limit and Reservation as they make operations more complex.

Entitlement means what the VM is entitled to. In this example, the hypervisor entitles the VM to a certain amount of memory. I did not show a solid line and used an italic font style to mark that Entitlement is not a fixed value, but a dynamic value determined by the hypervisor. It varies every minute, determined by the Limit, Entitlement, and Reservation of the VM itself and any shared allocation with other VMs running on the same host.

Obviously, a VM can only use what it is entitled to at any given point in time, so the Usage counter does not go higher than the Entitlement counter. The green line shows that Usage ranges from 0 to the Entitlement value. In a healthy environment, the ESXi host has enough resources to meet the demands of all the VMs on it with sufficient overhead. In this case, you will see that the Entitlement, Usage, and Demand counters will be similar to one another when the VM is highly utilized. This is shown by the green line where Demand stops at Usage, and Usage stops at Entitlement.

The numerical values may not be identical, because vCenter reports Usage in percentage as an average over the sample period, reports Entitlement in MHz taking the latest value in the sample period, and reports Demand in MHz as an average over the sample period. This also explains why you may see Usage a bit higher than Entitlement on a highly utilized vCPU. If the VM has low utilization, you will see that the Entitlement counter is much higher than Usage.

An environment in which the ESXi host is resource constrained is unhealthy; it cannot give every VM the resources it asks for. The VMs demand more than they are entitled to use, so the Usage and Entitlement counters will be lower than the Demand counter. The Demand counter can go higher than Limit naturally. For example, if a VM is limited to 2 GB of RAM and it wants to use 14 GB, then Demand will exceed Limit.
Obviously, Demand cannot exceed Provisioned; this is why the red line stops at Provisioned, because that is as high as it can go.

The difference between what the VM demands and what it gets to use is the Contention counter: Contention is Demand minus Usage. If Contention is 0, the VM can use everything it demands. This is the ultimate goal, as performance will match the physical world. The Contention value is useful to demonstrate that the infrastructure provides a good service to the application team. If a VM owner comes to see you and says that your shared infrastructure is unable to serve his or her VM well, both of you can check the Contention counter.

The Contention counter should become part of your SLA or Key Performance Indicator (KPI). It is not sufficient to track utilization alone. When there is contention, it is possible that both your VM and ESXi host have low utilization, and yet your customers (VMs running on that host) perform poorly. This typically happens when the VMs are relatively large compared to the ESXi host. Let me give you a simple example to illustrate this. The ESXi host has two sockets and 20 cores; hyper-threading is not enabled to keep this example simple. You run just 2 VMs, but each VM has 11 vCPUs. As a result, they will not be able to run concurrently: the hypervisor will schedule them sequentially, as there are only 20 physical cores to serve 22 vCPUs. Here, both VMs will experience high contention.

Hold on! You might say, "There is no Contention counter in vSphere and no memory Demand counter either." This is where vRealize Operations comes in. It does not just regurgitate the values in vCenter. It has implicit knowledge of vSphere and a set of derived counters with formulae that leverage that knowledge.

You need to have an understanding of how the vSphere CPU scheduler works. The following diagram shows the various states that a VM can be in:

The preceding diagram is taken from The CPU Scheduler in VMware vSphere 5.1: Performance Study (you can find it at http://www.vmware.com/resources/techresources/10345). This whitepaper documents the CPU scheduler with a good amount of depth for VMware administrators. I highly recommend you read it, as it will help you explain to your customers (the application team) how your shared infrastructure juggles all those VMs at the same time. It will also help you pick the right counters when you create your custom dashboards in vRealize Operations.
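While the Contention counter itself is derived by vRealize Operations, its raw ingredient for CPU, ready time, is already in vCenter, and you can approximate it yourself with PowerCLI. A sketch, assuming an existing Connect-VIServer session and a hypothetical VM name; in the real-time chart each sample covers 20 seconds (20,000 ms), so ready percent is the accumulated value divided by 20,000:

Get-Stat -Entity (Get-VM 'app-vm01') -Stat cpu.ready.summation -Realtime -MaxSamples 15 |
  Where-Object { $_.Instance -eq '' } |    # the blank instance aggregates all vCPUs
  Select-Object Timestamp, @{N='ReadyPct'; E={ [math]::Round($_.Value / 20000 * 100, 2) }}

# Note: because the blank instance sums every vCPU, divide the result by the
# vCPU count if you want a per-vCPU figure.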
Storage

If you look at the ESXi and VM metric groups for storage in the vCenter performance chart, it is not clear at first glance how they relate to one another. You have storage network, storage adapter, storage path, datastore, and disk metric groups that you need to check. How do they impact one another? I have created the following diagram to explain the relationship. The beige boxes are what you are likely to be familiar with: your ESXi host, and the NFS Datastore, VMFS Datastore, or RDM objects it can have. The blue boxes represent the metric groups.

From ESXi to disk

NFS and VMFS datastores differ drastically in terms of counters, as NFS is file-based while VMFS is block-based. NFS uses the vmnic, so the adapter type (FC, FCoE, or iSCSI) is not applicable, and multipathing is handled by the network, so you don't see it in the storage layer. For VMFS or RDM, you have more detailed visibility of the storage. To start off, each ESXi adapter is visible and you can check the counters for each of them.

In terms of relationships, one adapter can have many devices (disk or CD-ROM). One device is typically accessed via two storage adapters (for availability and load balancing), and it is also accessed via two paths per adapter, with the paths diverging at the storage switch. A single path, which will come from a specific adapter, can naturally connect one adapter to one device. The following diagram shows the four paths:

Paths from ESXi to storage

A storage path takes data from ESXi to the LUN (the term used by vSphere is Disk), not to the datastore. So if the datastore has multiple extents, there are four paths per extent. This is one reason why I did not use more than one extent, as each extent adds four paths. If you are not familiar with extents, Cormac Hogan explains them well in this blog post: http://blogs.vmware.com/vsphere/2012/02/vmfs-extents-are-they-bad-or-simply-misunderstood.html

For VMFS, you can see the same counters at both the Datastore level and the Disk level. Their values will be identical if you follow the recommended configuration to create a 1:1 relationship between a datastore and a LUN, meaning you present an entire LUN to a datastore (using all of its capacity).

The following screenshot shows how we manage ESXi storage. Click on the ESXi host you need to manage, select the Manage tab, and then the Storage subtab. In this subtab, we can see the adapters, devices, and the host cache. The screen shows an ESXi host with the list of its adapters. I have selected vmhba2, which is an FC HBA. Notice that it is connected to 5 devices. Each device has 4 paths, so I have 20 paths in total.

ESXi adapter

Let's move on to the Storage Devices tab. The following screenshot shows the list of devices. Because NFS is not a disk, it does not appear here. I have selected one of the devices to show its properties.

ESXi device

If you click on the Paths tab, you will be presented with the information shown in the next screenshot, including whether a path is active. Note that not all paths carry I/O; it depends on your configuration and multipathing software. Because each LUN typically has four paths, path management can be complicated if you have many LUNs.

ESXi paths

The story is quite different at the VM layer. A VM does not see the underlying shared storage; it sees local disks only. So regardless of whether the underlying storage is NFS, VMFS, or RDM, it sees all of them as virtual disks. You lose visibility into the physical adapter (for example, you cannot tell how many IOPS on vmhba2 are coming from a particular VM) and the physical paths (for example, how many disk commands travelling on a given path are coming from a particular VM). You can, however, see the impact at the Datastore level and the physical Disk level. The Datastore counter is especially useful. For example, if you notice that your IOPS is higher at the Datastore level than at the virtual Disk level, this means you have a snapshot. The snapshot I/O is not visible at the virtual Disk level, as the snapshot is stored on a different virtual disk.

From VM to disk
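One quick way to cross-check the snapshot explanation above is to list snapshots directly with PowerCLI. A small sketch; it assumes an open Connect-VIServer session:

# Any VM with rows here carries snapshot disks, whose I/O shows up at the datastore level only
Get-VM | Get-Snapshot | Select-Object VM, Name, Created, SizeGB | Sort-Object SizeGB -Descending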
Counters in vCenter and vRealize Operations

We compared the metric groups between vCenter and vRealize Operations. We know that vRealize Operations provides a lot more detail, especially for larger objects such as vCenter, data center, and cluster. It also provides information about the distributed switch, which is not displayed in vCenter at all. This makes it useful for big-picture analysis. We will now look at individual counters.

To give us a two-dimensional analysis, I will not approach this from the vSphere objects' point of view. Instead, we will examine the four key types of metrics (CPU, RAM, network, and storage). For each type, I will provide my personal take on what I think is good guidance for their values. For example, I will give guidance on a good value for CPU contention based on what I have seen in the field. This is not an official VMware recommendation; I will state the official recommendation, or the popular recommendation, if I am aware of it.

You should spend time understanding vCenter counters and esxtop counters. This section of the article is not meant to replace the manual. I encourage you to read the vSphere documentation on this topic, as it gives you the required foundation for working with vRealize Operations. The following are the links to this topic:

- The link for vSphere 5.5 is http://pubs.vmware.com/vsphere-55/index.jsp#com.vmware.vsphere.monitoring.doc/GUID-12B1493A-5657-4BB3-8935-44B6B8E8B67C.html. If this link does not work, visit https://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-pubs.html and then navigate to ESXi and vCenter Server 5.5 Documentation | vSphere Monitoring and Performance | Monitoring Inventory Objects with Performance Charts.
- The counters are documented in the vSphere API. You can find them at http://pubs.vmware.com/vsphere-55/index.jsp#com.vmware.wssdk.apiref.doc/vim.PerformanceManager.html. If this link has changed and no longer works, open the vSphere online documentation and navigate to vSphere API/SDK Documentation | vSphere Management SDK | vSphere Web Services SDK Documentation | VMware vSphere API Reference | Managed Object Types | P. Here, choose Performance Manager from the list under the letter P.
- The esxtop manual provides good information on the counters. You can find it at https://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-pubs.html.

You should also be familiar with the architecture of ESXi, especially how the scheduler works.

vCenter has a different collection interval (sampling period) depending on the timeline you are looking at. Most of the time you are looking at the real-time statistics (chart), as the other timelines do not have enough counters. You will notice right away that most of the counters become unavailable once you choose another timeline. In the real-time chart, each data point has 20 seconds' worth of data. That is as accurate as it gets in vCenter. Because all other performance management tools (including vRealize Operations) get their data from vCenter, they do not get anything more granular than this. As mentioned previously, esxtop allows you to sample down to a minimum of 2 seconds.

Speaking of esxtop, you should be aware that not all counters are exposed in vCenter. For example, if you turn on 3D graphics, a separate SVGA thread is created for that VM. This can consume CPU and it will not show up in vCenter. The Mouse, Keyboard, Screen (MKS) threads, which give you the console, also do not show up in vCenter.

The next screenshot shows how you lose most of your counters if you choose a timespan other than real time. In the case of CPU, you are basically left with two counters, as Usage and Usage in MHz cover the same thing. You also lose the ability to monitor per core, as the target objects now list only the host and not the individual cores.

Counters are lost beyond 1 hour

Because the real-time timespan only lasts for 1 hour, performance troubleshooting has to be done at the present moment.
If the performance issue cannot be recreated, there is no way to troubleshoot it in vCenter. This is where vRealize Operations comes in, as it keeps your data for a much longer period. I was able to perform troubleshooting for a client on a problem that occurred more than a month earlier!

vRealize Operations takes data every 5 minutes. This means it is not suitable for troubleshooting performance issues that last less than 5 minutes. In fact, if a performance issue only lasts for 5 minutes, you may not get any alert, because the collection may happen exactly in the middle of those 5 minutes. For example, let's assume the CPU is idle from 08:00:00 to 08:02:30, spikes from 08:02:30 to 08:07:30, and then is idle again from 08:07:30 to 08:10:00. If vRealize Operations is collecting at exactly 08:00, 08:05, and 08:10, you will not see the spike, as it is spread over two data points. This means that, for vRealize Operations to pick up a spike in its entirety without any idle data, the spike has to last for 10 minutes or more.

In some metrics, the unit is actually 20 seconds; vRealize Operations averages a set of 20-second data points into a single 5-minute data point.

The Rollups column is important. Average means the average over 5 minutes in the case of vRealize Operations. The Summation value is actually an average for those counters where accumulation makes more sense. An example is CPU Ready time: it gets accumulated over the sampling period. Over a period of 20 seconds, a VM may accumulate 200 milliseconds of CPU ready time, which translates into 1 percent. This is why I said it is similar to an average, as you lose the peak. Latest, on the other hand, is different: it takes the last value of the sampling period. For example, in a 20-second sample, it takes the value between the 19th and 20th seconds. This value can be lower or higher than the average of the entire 20-second period. So what is missing here is the peak of the sampling period. For the 5-minute period, vRealize Operations does not collect low, average, and high values from vCenter; it takes the average only.

Let's talk about the Units column now. Some common units are milliseconds, MHz, percent, KBps, and KB. Some counters are shown in MHz, which means you need to know your ESXi host's physical CPU frequency. This can be difficult due to CPU power-saving features, which lower the CPU frequency when demand is low. In large environments, this can be operationally difficult, as you have ESXi hosts from different generations (and hence likely sporting different frequencies). This is also the reason why I state that the cluster is the smallest logical building block. If your cluster has ESXi hosts with different frequencies, these MHz-based counters can be difficult to use, as the VMs get vMotion-ed by DRS.

vRealize Operations versus vCenter

I mentioned earlier that vRealize Operations does not simply regurgitate what vCenter has. Some vSphere-specific characteristics are not properly understood by traditional management tools, and partial understanding can lead to misunderstanding. vRealize Operations starts by fully understanding the unique behavior of vSphere, then simplifies it by consolidating and standardizing the counters. For example, vRealize Operations creates derived counters such as Contention and Workload, then applies them to CPU, RAM, disk, and network.

Let's take a look at one example of how partial information can be misleading in a troubleshooting scenario. It is common for customers to invest in ESXi hosts with plenty of RAM.
I've seen hosts with 256 to 512 GB of RAM. One reason behind this is the way vCenter displays information. In the following screenshot, vCenter is giving me an alert: the host is running at high memory utilization. I'm not showing the other host, but you can see that it has a warning, as its utilization is high too. The screenshots are all from vCenter 5.0 and vCenter Operations 5.7, but the behavior is still the same in vCenter 5.5 Update 2 and vRealize Operations 6.0.

vSphere 5.0 – Memory alarm

I'm using vSphere 5.0 and vCenter Operations 5.x for the screenshots, as I want to provide an example of the point I stated earlier, which is the rapid change of vCloud Suite.

The first step is to check whether someone has modified the alarm by reducing the threshold. The next screenshot shows that utilization above 95 percent will trigger an alert, while utilization above 90 percent will trigger a warning. The threshold has to be breached for at least 5 minutes. The alarm is set to a suitably high configuration, so we will assume the alert is genuinely indicating high utilization on the host.

vSphere 5.0 – Alarm settings

Let's verify the memory utilization. I'm checking both hosts, as there are two of them in the cluster. Both are indeed high. The utilization for vmsgesxi006 has gone down in the time taken to review the Alarm Settings tab and move to this view, so both hosts are now in the Warning status.

vSphere 5.0 – Hosts tab

Now we will look at the vmsgesxi006 specification. From the following screenshot, we can see it has 32 GB of physical RAM, and RAM usage is 30747 MB: 93.8 percent utilization.

vSphere – Host's summary page

Since all the numbers shown in the preceding screenshot are refreshed within minutes, we need to check a longer timeline to make sure this is not a one-time spike. So let's check the last 24 hours. The next screenshot shows that utilization was indeed consistently high: for the entire 24-hour period it was above 92.5 percent, and it hit 95 percent several times. So this ESXi host was indeed in need of more RAM.

Deciding whether to add more RAM is complex; there are many factors to be considered. There will be downtime on the host, and you need to do it for every host in the cluster, since you need to maintain a consistent build cluster-wide. Because the ESXi host is highly utilized, I should increase the RAM significantly so that I can support more VMs or larger VMs. Buying bigger DIMMs may mean throwing away the existing DIMMs, as there are rules restricting the mixing of DIMMs; mixing DIMMs also increases management complexity. The new DIMMs may require a BIOS update, which may trigger a change request. Alternatively, the large DIMMs may not be compatible with the existing host, in which case I have to buy a new box. So a RAM upgrade may trigger a host upgrade, which is a larger project.

Before jumping into a procurement cycle to buy more RAM, let's double-check our findings. It is important to ask what the host is used for and who is using it. In this example scenario, we examined a lab environment, the VMware ASEAN lab. Let's check the memory utilization again, this time with that context in mind. The preceding graph shows high memory utilization over a 24-hour period, yet no one was using the lab in the early hours of the morning! I am aware of this as I am the lab administrator.

We will now turn to vCenter Operations for an alternative view. The following screenshot from vCenter Operations 5 tells a different story.
CPU, RAM, disk, and network are all in the healthy range. Specifically for RAM, it shows 97 percent utilization but 32 percent demand. Note that the Memory chart is divided into two parts: the upper one is at the ESXi level, while the lower one shows individual VMs on that host. The upper part is in turn split into two. A green rectangle (Demand) sits on top of a grey rectangle (Usage). The green rectangle shows a healthy figure at around 10 GB. The grey rectangle is much longer, almost filling the entire area. The lower part shows the hypervisor's and the VMs' memory utilization; each little green box represents one VM.

On the bottom left, note the KEY METRICS section. vCenter Operations 5 shows that Memory | Contention is 0 percent. This means none of the VMs running on the host is contending for memory. They are all being served well!

vCenter Operations 5 – Host's details page

I shared earlier that the behavior remains the same in vCenter 5.5, so let's take a look at how memory utilization is shown there. The next screenshot shows the counters provided by vCenter 5.5. This is from a different ESXi host, as I want to provide you with a second example. Notice that ballooning is 0, so there is no memory pressure on this host. This host has 48 GB of RAM. About 26 GB has been mapped to VMs or the VMkernel, which is shown by the Consumed counter (the highest line in the chart; notice that the value is almost constant). The Usage counter shows 52 percent because it takes from Consumed. The active memory is a lot lower, as you can see from the line at the bottom. Notice that this line is not a simple straight line, as the value goes up and down. This proves that the Usage counter is based on the Consumed counter.

vCenter 5.5 Update 1 memory counters

At this point, some readers might wonder whether that's a bug in vCenter. No, it is not. There are situations in which you want to use consumed memory and not active memory; in fact, some applications may not run properly if you use active memory. Also, technically it is not a bug, as the data it gives is correct. It is just that additional data would give a more complete picture, since we are at the ESXi level and not at the VM level. vRealize Operations distinguishes between active memory and consumed memory and provides both types of data.

vCenter uses the Consumed counter for utilization at the ESXi host level. As you will see later in this article, vCenter uses the Active counter for utilization at the VM level. So the Usage counter has a different formula in vCenter depending upon the object. This makes sense, as they are at different levels.

vRealize Operations uses the Active counter for utilization. Just because a physical DIMM on the motherboard is mapped to a virtual DIMM in the VM, it does not mean it is actively used (read or written). You can use that DIMM for other VMs and you will not incur (for practical purposes) performance degradation. It is common for Microsoft Windows to initialize pages with zeroes upon boot, but never use them subsequently.

For further information on this topic, I recommend Kit Colbert's presentation on memory in vSphere at VMworld 2012. The content is still relevant for vSphere 5.x. The title is Understanding Virtualized Memory Performance Management and the session ID is INF-VSP1729. You can find it at http://www.vmworld.com/docs/DOC-6292. If the link has changed, the link to the full list of VMworld 2012 sessions is http://www.vmworld.com/community/sessions/2012/.
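You can observe the active-versus-consumed gap on any host yourself with PowerCLI. A sketch, with a hypothetical host name and an existing Connect-VIServer session; both counters are reported in KB:

$esx = Get-VMHost 'esx01.lab.local'   # hypothetical host name
# Both stats are in KB; dividing by 1MB (1,048,576) converts the value to GB
Get-Stat -Entity $esx -Stat mem.active.average, mem.consumed.average -Realtime -MaxSamples 6 |
  Sort-Object MetricId, Timestamp |
  Select-Object MetricId, Timestamp, @{N='GB'; E={ [math]::Round($_.Value / 1MB, 1) }}

On a host like the one above, you would see consumed sitting near the physical RAM while active hovers far lower.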
Not all performance management tools understand this vCenter-specific characteristic; they would have given you a recommendation to buy more RAM.

Summary

In this article, we covered the world of counters in vCenter and vRealize Operations. The counters were analyzed based on their four main groupings (CPU, RAM, disk, and network). We also covered each of the metric groups, which map to the corresponding objects in vCenter. For the counters, we also shared how they are related and how they differ.

Resources for Article:

Further resources on this subject:
Backups in the VMware View Infrastructure [Article]
VMware vCenter Operations Manager Essentials - Introduction to vCenter Operations Manager [Article]
An Introduction to VMware Horizon Mirage [Article]


The XenDesktop architecture

Packt
08 Mar 2013
12 min read
(For more resources related to this topic, see here.)

Virtual desktops can reside in two locations:

- Hosted at the data center: In this case, you host the desktop/application at the data center and provide users with these desktops through different types of technologies (such as a remote desktop client or Citrix Receiver).
- Hosted at the client-side machine: In this case, users access the virtual desktop or application through a VM that is loaded at the client side through different types of client-side virtualization (level 1, level 2, or streaming).

Hosted desktops can be provided using:

- VM or physical PC: In this type, users access the desktop on an OS that is installed on a dedicated VM or physical machine. You typically install the OS on the VM and clone the VM using hypervisor or storage cloning technology, or install the OS on a physical machine using automated installation technology, enabling users to access the desktop. The problem with this method is that storage consumption is very high, since you have to load each VM/PC with the OS. Additionally, adding, removing, and managing the infrastructure is sometimes complex and time consuming.
- Streamed OS to VM or physical PC: This type is the same as the VM or physical PC type; however, the OS is not installed on the VM or the physical machine but rather streamed to them. Machines (virtual or physical) boot from the network, load the required virtual hosted desktop (VHD) over the network, and boot from the VHD. This has a lot of benefits, including simplicity and storage optimization, since the OS is not installed; a single VHD can support thousands of users.
- Hosted shared desktop: In this model, you provide users with access to a desktop that is hosted on a server and published to users. This method is implemented with Microsoft Terminal Services, or XenApp plus Terminal Services. It provides high scalability in terms of the number of users per server, but provides limited security and limited environment isolation for the user.

Now that we have visited the different types of DV and how they are delivered and located, let's see the pros and cons of each type:

Hosted PC - not streamed
Pros: PCs are located at the DC, allowing better control. Can be used in a consolidation scenario along with refurbished PCs. Superior performance, since physical HW provides HW resources to the OS and the OS is preinstalled.
Cons: Management/deployment is difficult, especially in large-scale deployment scenarios. Operating system deployment and maintenance is an overwhelming process. Low storage optimization and utilization since each OS is preinstalled. High electricity consumption since all of those PCs (normal or blade) consume electricity and HVAC resources. Requires users to be connected to the DC if they wish to access the VD.
Usage: Suitable for refurbished PCs and reusing old PCs. Suitable for task workers, call center agents, and individuals with similar workloads.

Hosted VM - not streamed
Pros: VMs are located at the DC, allowing better control. Superior performance since the OS is preinstalled, although less than what physical PCs provide. High consolidation using server virtualization; it provides green IT for your DC.
Cons: Management/deployment is difficult, especially in large-scale deployment scenarios. Low storage optimization and utilization since each OS is preinstalled, thus requiring a storage-side optimization/de-duplication technique that might add costs for some vendors. Hypervisor management could add costs for some vendors. Requires users to be connected to the DC if they wish to access the VD.
Usage: Suitable for task workers and call center agents. Some organizations use this model to give users access to server-level hardware resources such as rendering and CAD software.

Hosted PC - streamed
Pros: PCs are located at the DC, allowing better control. Can be used in a consolidation scenario along with refurbished PCs. Superior performance since physical HW provides HW resources to the OS. High storage optimization and utilization since each OS is streamed on demand.
Cons: Deployment is difficult, especially in large-scale deployment scenarios. High electricity consumption since all of those PCs (normal or blade) consume electricity and HVAC resources. Requires high storage performance to provide write cache storage for the OS. Requires the deployment of a streaming technology provider within the architecture. PCs must be connected to streaming servers over the network to stream the OS's VHD. Requires users to be connected to the DC if they wish to access the VD.
Usage: Suitable for task workers and call center agents. Some organizations use this model to give users access to server-level hardware resources such as rendering and CAD software.

Hosted VMs - streamed
Pros: VMs are located at the DC, allowing better control. Superior performance, although less than what physical PCs provide. High consolidation using server virtualization; it provides green IT for your DC. High storage optimization and utilization since each OS is streamed on demand.
Cons: Requires high storage performance to provide write cache storage for the OS. Requires the deployment of a streaming technology provider within the architecture. Hypervisor management could add costs for some vendors. VMs must be connected to streaming servers over the network to stream the OS's VHD. Requires users to be connected to the DC if they wish to access the VD.
Usage: Suitable for users who need a hosted environment but do not need powerful resources or an isolated environment that is not shared with others.

Hosted shared desktops
Pros: The desktop is located in the DC, allowing better control. High server-to-user capacity ratio. Server consolidation can be achieved by hosting the server on top of hypervisor technology. Easy to manage, deploy, and administer.
Cons: Different types of users require different applications, HW resources, and security, which cannot be fully accommodated in HSD mode. Resource isolation for users cannot be achieved as fully as in other models. Customization of the OS, applications, and profile management can be a little tricky. Requires users to be connected to the DC if they wish to access the VD.
Usage: Suitable for most VDs, task workers, call center agents, and PC users.

Level 1 client hypervisors
Pros: Since VMs are hosted on the clients, the load is offloaded from the DC and fewer HW investments are required. Users can use their VD without being connected to the network. Client HW can be utilized, including graphics cards and sound devices.
Cons: VDs are not located in the DC, and thus could be less secure for some organizations. Might not fit organizations that allow users to use their own personal computers. Could require some HW investment at the client side to allow adequate performance. The client must connect to central storage for the purpose of backing up the company's data. This type might fit smaller environments, but for larger environments, management could be hard or even impossible.
Usage: Suitable for mobile users and power users who are not connected to the data center at all times.

Level 2 client hypervisors
Pros: Since VMs are hosted on the clients, the load is offloaded from the DC and fewer HW investments are required. Users can use their VD without being connected to the network. Client HW can be utilized, including graphics cards and sound devices. Allows users to mix the usage of virtual and physical desktops.
Cons: VDs are not located in the DC, and thus could be less secure for some organizations. Might not fit organizations that allow users to use their own personal computers. Could require some HW investment at the client side to allow adequate performance. The client must connect to central storage for the purpose of backing up the company's data. This type might fit smaller environments, but for larger environments, management could be hard or even impossible.
Usage: Suitable for mobile users and power users who are not connected to the data center at all times.

XenDesktop components

Now that we have reviewed the DV delivery methods, let us see the components that are used by XenDesktop to build the DV infrastructure. The XenDesktop infrastructure can be divided into two major blocks, the delivery block and the virtualization block, as shown in the next two sections.

Delivery block

This block is responsible for the VD delivery part, including authenticating users, remote access, redirecting users to the correct servers, and so on. It consists of the following servers:

The Desktop Delivery Controller

Description: The Desktop Delivery Controller (DDC) is the core service within the XenDesktop deployment. Think of it as the glue that holds everything together within the XenDesktop infrastructure. It is responsible for authenticating the user, managing the user's connections, and redirecting the user to the assigned VD.

Workload type: The DDC handles HTTP/HTTPS connections sent from the web interface services to its XML service, passing the user's credentials and querying for the user's assigned VD.

High availability: This is achieved by introducing extra DDCs into the farm. To achieve high availability and load balancing between them, these DDCs are configured in the web interface, which will load-balance them.

Web interface (WI)

Description: The web interface is a very critical part of the XenDesktop infrastructure. The web interface has four very important tasks to handle within the XenDesktop infrastructure. They are as follows:

1. It receives the user credentials either explicitly, by pass-through, or by other means
2. It passes the credentials to the DDC
3. It passes the list of assigned VDs to the user after communicating with the DDC XML service
4. It passes the connection information to the client inside the .ica file so the client can establish the connection

Workload type: The WI receives HTTP/HTTPS connections from users initiating connections to the XenDesktop farm, both internally and externally.

High availability: This is achieved by installing multiple WI servers and load balancing the traffic between them, using either Windows NLB or external hardware load balancer devices, such as Citrix NetScaler.
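Once the delivery block is up, you can sanity-check the DDCs from the Broker PowerShell snap-in that ships with XenDesktop. A minimal sketch: the snap-in name depends on your release (Citrix.Broker.Admin.V1 on early XenDesktop 5 builds, V2 on later ones), and the selected properties are ones I'd expect from the Broker SDK, so verify them with Get-Help Get-BrokerController in your environment:

Add-PSSnapin Citrix.Broker.Admin.V2
# List each controller (DDC) registered in the site and its current state
Get-BrokerController | Select-Object DNSName, State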
Licensing server

Description: The licensing server is responsible for supplying the DDC with the required license when the virtual desktop agent tries to acquire one from the DDC.

Workload type: When you install the DDC, a license server is required to be configured to provide the licenses to the DDC. When you install the license server, you import the licenses you purchased from Citrix into the licensing server.

High availability: This can be achieved in two ways: either installing the licensing server on a Windows cluster, or installing another offline licensing server that can be brought online in the case of the first server's failure.

Database server

Description: The database server is responsible for holding static and dynamic data, such as configuration and session information. XenDesktop 5 no longer uses the IMA data store as the central database in which the configuration information is stored. Instead, a Microsoft SQL Server database is used as the data store for both configuration and session information.

Workload type: The database server receives DB connections from the DDCs and provisioning servers (PVS), querying for farm/site configuration. It is important to note that XenDesktop creates a DB that is different from the provisioning services farm database; both databases could be located on the same server, in the same instance or different instances, or on different servers.

High availability: This can be achieved by using SQL clustering plus Microsoft Windows or SQL replication.

Virtualization block

The virtualization block provides the VD service to the users through the delivery block. I often see the virtualization block referred to as a hypervisor stack, but as you saw previously, it can also be built on blade PCs or PCs located in the DC, so don't miss that. The virtualization block consists of:

Desktop hosting infrastructure

Description: This is the infrastructure that hosts the virtual desktops that are accessed by users. It could be either a virtual machine infrastructure, which could be XenServer, Hyper-V, or VMware vSphere ESXi servers, or a physical machine infrastructure, with either regular PCs or blade ones.

Workload type: Hosts the OSs that provide the desktops to users accessing them from the delivery network. The OS can be either installed or streamed, and can run on either physical or virtual machines.

High availability: This depends on the type of DHI used, but can be achieved by using clustering for hypervisors or adding extra HW in the case of physical machines.

Provisioning servers infrastructure

Description: Provisioning servers are usually coupled with a XenDesktop deployment. Most of the organizations I worked with never deployed preinstalled OSs to VMs or physical machines, but rather streamed the OSs to physical or virtual machines, since streaming provides huge storage optimization and utilization. Provisioning servers are the servers that actually stream the OS, in the form of a VHD, over the network to the machines requesting the streamed OS.

Workload type: Provisioning servers stream the OS as VHD files over the network to the machines, which can be either physical or virtual.

High availability: This can be achieved by adding extra PVS servers to the provisioning servers' farm and configuring them in the DHCP servers or in the bootstrap file.

XenDesktop system requirements

XenDesktop has different components, each with its own requirements.
The detailed requirements for these components can be checked and reviewed on the Citrix XenDesktop requirements website: http://support.citrix.com/proddocs/topic/xendesktop-rho/cds-sys-reqs-wrapper-rho.html

Summary

In this article, we saw what the XenDesktop architecture consists of and its various components.

Resources for Article:

Further resources on this subject:
Application Packaging in VMware ThinApp 4.7 Essentials [Article]
Designing a XenApp 6 Farm [Article]
Managing Citrix Policies [Article]