How-To Tutorials - Virtualization

115 Articles

Networking

Packt
20 Mar 2014
8 min read
(For more resources related to this topic, see here.)

Working with vSphere Distributed Switches

A vSphere Distributed Switch (vDS) is similar to a standard switch, but a vDS spans multiple hosts instead of existing as an individual switch on each host. The vDS is created at the vCenter level, and its configuration is stored in the vCenter database. A cached copy of the vDS configuration is also stored on each host in case of a vCenter outage.

Getting ready

Log in to the vCenter Server using the vSphere Web Client.

How to do it…

In this section, you will learn how to create a vDS and a distributed port group (dvportgroup), and how to manage an ESXi host with the vDS.

First, we will create a vSphere Distributed Switch. The steps involved in creating a vDS are as follows:

1. Select the datacenter on which the vDS has to be created.
2. Navigate to Actions | New Distributed Switch..., as shown in the following screenshot.
3. Enter the Name and location for the vDS and click on Next.
4. Select the version for the vDS, as shown in the following screenshot, and click on Next.
5. In the Edit settings page, provide the following details, then click on Next when finished:
   Number of uplinks: This specifies the number of physical NICs of each host that will be part of the vDS.
   Network I/O Control: This option controls the input/output to the network and can be set to either Enabled or Disabled.
   Default port group: This option lets you create a default port group. To create one, enable the checkbox and provide the Port group name.
6. In the Ready to complete screen, review the settings and click on Finish.

The next step after creating a vDS is to create a new distributed port group, if one was not created as part of the vDS:

1. Select the vDS and click on Actions | New Distributed Port Group.
2. Provide the name, select the location for the port group, and click on Next.
3. In the Configure settings screen, set the following general properties for the port group:
   Port binding: This provides three options, namely Static, Dynamic, and Ephemeral (no binding).
   Static binding: A port is assigned and reserved when a VM is connected to the port group. The port is freed up only when the VM is deleted.
   Ephemeral binding: A port is created and assigned by the host when a VM is powered on, and deleted when the VM is powered off.
   Dynamic binding: This is deprecated since ESXi 5.x and is no longer in use, but the option is still available in the vSphere Client.
   Port allocation: This can be set to either Elastic or Fixed.
   Elastic: The default number of ports is 8; when all ports are used, a new set of ports is created automatically.
   Fixed: The number of ports is fixed at 8, and no additional ports are created when all ports are used up.
   Number of ports: This option is set to 8 by default.
   Network resource pool: This option is enabled only if a user-defined network pool has been created; it can be set even after creating the port group.
   VLAN type: The available options are None, VLAN, VLAN trunking, and Private VLAN.
   None: No VLAN is used.
   VLAN: A VLAN is used and its ID has to be specified.
   VLAN trunking: A group of VLANs is trunked and their respective IDs have to be specified.
   Private VLAN: This menu is empty if no private VLAN exists.
4. In the Ready to complete screen, review the settings and click on Finish.
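The same workflow can be scripted with VMware PowerCLI. The following is a minimal sketch; the vCenter address, datacenter name, switch name, port group name, and VLAN ID are placeholder assumptions, not values from this recipe:

# Connect to vCenter (VMware PowerCLI must be installed)
Connect-VIServer -Server vcenter.example.com

# Create the distributed switch at the datacenter level with two uplinks
$dc  = Get-Datacenter -Name "DC01"
$vds = New-VDSwitch -Name "vDS01" -Location $dc -NumUplinkPorts 2

# Create a distributed port group with 8 ports (the default) on VLAN 100
New-VDPortgroup -VDSwitch $vds -Name "dvPG-Production" -NumPorts 8 -VlanId 100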
The next step after creating a distributed port group is to add the ESXi host to the vDS. While the host is being added, it is possible to migrate the VMkernel and VM port groups from the vSS to the vDS, or this can be done later. Now, let's see the steps involved:

1. Select the Distributed Switch in the vSphere Web Client.
2. Navigate to Actions | Add and Manage Hosts.
3. In the Select task screen, select Add hosts, as shown in the following screenshot, and click on Next.
4. Click on the + icon to select the hosts to be added and click on OK. Click on Next in the Select new hosts screen.
5. Select the physical network adapters that will be used as uplinks for the vDS and click on Next.
6. In the Select virtual network adapters screen, you have the option to migrate the VMkernel interface to the vDS port group; select the appropriate option and click on Next.
7. Review any dependencies on the validation page and click on Next.
8. Optionally, you can migrate the VM Network to the vDS port group in the Select VM network adapters screen by selecting the appropriate option and clicking on Next.
9. In the Ready to complete screen, review the settings and click on Finish.

An ESXi host can be removed from the vDS only if no VM is still connected to it. Make sure the VMs are migrated either to the standard switch or to another vDS. The following steps will remove an ESXi host from the Distributed Switch:

1. Browse to the Distributed Switch in the vSphere Web Client.
2. Navigate to Actions | Add and Manage Hosts.
3. In the Select task screen, select Remove hosts, as shown in the following screenshot, and click on Next.
4. Click on the + icon to select the hosts to be removed and click on OK. Click on Next in the Select hosts screen.
5. In the Ready to complete screen, review the settings and click on Finish.

Once the host has been added to the vDS, you can start to migrate resources from the vSS to the vDS. The following steps will help you migrate VMs from a Standard to a Distributed Switch:

1. Select the Distributed Switch in the vSphere Web Client.
2. Navigate to Actions | Migrate VM to Another Network.
3. In the Select source and destination networks screen, you have the option to browse to a specific network or no network for the source network. These options are described as follows:
   Specific network: This option allows you to select the VMs residing on a particular port group.
   No network: This option selects VMs that are not connected to any network for migration.
4. In the Destination network option, browse and select the distributed port group for the VM network and click on Next.
5. Select the VMs to migrate and click on Next.
6. In the Ready to complete screen, review the settings and click on Finish.
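As a rough PowerCLI equivalent of the add-host workflow above (a sketch; the host name and NIC name are placeholder assumptions):

# Add an ESXi host to the distributed switch
$vds    = Get-VDSwitch -Name "vDS01"
$vmhost = Get-VMHost -Name "esxi01.example.com"
Add-VDSwitchVMHost -VDSwitch $vds -VMHost $vmhost

# Attach one of the host's physical NICs as an uplink
$pnic = Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name "vmnic1"
Add-VDSwitchPhysicalNetworkAdapter -DistributedSwitch $vds -VMHostPhysicalNic $pnic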
How it works...

vSphere Distributed Switches extend the capabilities of virtual networking. A vDS can be broken into the following two logical sections, the data plane and the management plane:

Data plane: This is also called the I/O plane. It takes care of the actual packet switching, filtering, tagging, and all other networking-related activities.
Management plane: This is also known as the control plane. It is the centralized control used to manage and configure the data plane functionality.

There's more...

It is possible to preserve the vSphere Distributed Switch configuration information to a file. You can use these configurations for other deployments and also as a backup, and you can restore the port group configuration in case of any misconfiguration. The following steps will export the vSphere Distributed Switch configuration:

1. Select the vSphere Distributed Switch in the vSphere Web Client.
2. Navigate to Actions | All vCenter Actions | Export Configurations.
3. In Configuration to export, you will have the following two options; select the appropriate one:
   Distributed Switch and all port groups
   Distributed Switch only
4. Click on OK. The export begins and, once done, you are asked to save the configuration. Click on Yes and provide the path where the file should be stored.

The import function can be used to create a copy of the exported vDS from the existing configuration file. The following steps will import the vSphere Distributed Switch configuration file:

1. Select the Distributed Switch in the vSphere Web Client.
2. Navigate to Actions | All vCenter Actions | Import distributed port group.
3. In the Import Port Group Configuration option, browse to the backup file and click on Next.
4. Review the import settings and click on Finish.

The following steps will restore the vSphere distributed port group configuration:

1. Select the distributed port group in the vSphere Web Client.
2. Navigate to Actions | All vCenter Actions | Restore Configuration.
3. Select one of the following options and click on OK:
   Restore to a previous configuration: This allows you to restore the port group to its previous snapshot.
   Restore configuration from a file: This allows you to restore the configuration from a file saved on your local system.
4. In the Ready to complete screen, review the settings and click on Finish.

Summary

In this article, we understood the vSphere networking concepts and how to work with vSphere Distributed Switches. We also discussed some of the more advanced networking configurations available in the distributed switch.

Resources for Article:

Further resources on this subject:
Windows 8 with VMware View [Article]
VMware View 5 Desktop Virtualization [Article]
Cloning and Snapshots in VMware Workstation [Article]


Storage Scalability

Packt
11 Aug 2015
17 min read
In this article by Victor Wu and Eagle Huang, authors of the book Mastering VMware vSphere Storage, we will learn that SAN storage is a key component of a VMware vSphere environment. We can choose different vendors and types of SAN storage to deploy in a VMware vSphere environment, and the advanced settings of each storage type can affect the performance of the virtual machine. FC and iSCSI SAN storage, for example, have different configurations in a vSphere environment: host connectivity to Fibre Channel storage is provided by a Host Bus Adapter (HBA), while host connectivity to iSCSI storage uses the TCP/IP networking protocol. We first need to understand the storage concepts; then we can optimize storage performance in a VMware vSphere environment.

In this article, you will learn these topics:

What the vSphere storage APIs for Array Integration (VAAI) and Storage Awareness (VASA) are
The virtual machine storage profile
VMware vSphere Storage DRS and VMware vSphere Storage I/O Control

(For more resources related to this topic, see here.)

vSphere storage APIs for array integration and storage awareness

VMware vMotion is a key feature of vSphere hosts; an ESXi host cannot provide the vMotion feature without shared SAN storage. SAN storage is therefore a key component of a VMware vSphere environment. In large-scale virtualization environments, many virtual machines are stored on SAN storage. When a VMware administrator clones a virtual machine or migrates one to another ESXi host with vMotion, the operation consumes resources on that ESXi host and on the SAN storage.

vSphere 4.1 and later versions support VAAI. The vSphere storage APIs are used by storage vendors to provide hardware acceleration or to offload vSphere I/O between storage devices. These APIs reduce the resource overhead on ESXi hosts and improve the performance of ESXi host operations, for example, vMotion, virtual machine cloning, and virtual machine creation.

VAAI has two APIs: the hardware acceleration API and the array thin provisioning API. The hardware acceleration API integrates with VMware vSphere to offload storage operations to the array and reduce the CPU overhead on the ESXi host. The following table lists the features of the hardware acceleration API for block and NAS arrays:

Array integration | Feature | Description
Block | Full copy | This offloads block clone or copy operations to the array.
Block | Block zeroing | This is also called "write same". When you provision an eagerzeroedthick VMDK, the SCSI command is issued to write zeroes to disk.
Block | Atomic Test & Set (ATS) | This is a lock mechanism that prevents another ESXi host from updating the same VMFS metadata.
NAS | Full file clone | This is similar to Extended Copy (XCOPY) hardware acceleration.
NAS | Extended statistics | This feature enables reporting of space usage in the NAS data store.
NAS | Reserved space | This reserves the allocated space of a virtual disk in thick format.

The array thin provisioning API is used to monitor ESXi data store space on the storage arrays. It helps prevent the disk from running out of space and reclaims disk space. For example, if the storage is presented as 1 x 3 TB LUN to the ESXi host but the array can only provide 2 TB of real capacity, the ESXi host still sees 3 TB; the API streamlines monitoring of the provisioned LUN space in order to avoid running out of physical space. When vSphere administrators delete or remove files from a data store on a thin-provisioned LUN, the storage can reclaim the freed space at the block level. In vSphere 5.5, you can reclaim the space on a thin-provisioned LUN using esxcli.
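A minimal sketch of that space reclaim driven from PowerCLI, assuming vSphere 5.5, a VAAI-capable array, and a PowerCLI release that supports the Get-EsxCli -V2 interface; the host and data store names are placeholders:

# Get the esxcli interface for a host and run a VMFS UNMAP against a data store
$esxcli = Get-EsxCli -VMHost (Get-VMHost "esxi01.example.com") -V2
$esxcli.storage.vmfs.unmap.Invoke(@{volumelabel = "Datastore01"})

The same operation can be run directly in an SSH session on the host with esxcli storage vmfs unmap -l Datastore01.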
VMware VASA is a piece of software that allows the storage vendor to provide information about their storage array to VMware vCenter Server. The information includes storage capabilities, the state of physical storage devices, and so on. vCenter Server collects this information from the storage array using a software component called the VASA provider, which is supplied by the storage array vendor. A VMware administrator can view the information in VMware vSphere Client / VMware vSphere Web Client. The following diagram shows the architecture of VASA with vCenter Server, for example, when the VMware administrator requests the creation of a data store on VMware ESXi Server. It has three main components: the storage array, the storage provider, and VMware vCenter Server.

The following is the procedure to add the storage provider to vCenter Server:

1. Log in to vCenter using vSphere Client.
2. Go to Home | Storage Providers.
3. Click on the Add button.
4. Input the storage vendor name, URL, and credentials.

Virtual machine storage profile

The storage provider helps the vSphere administrator know the state and capabilities of the physical storage devices on which their virtual machines are located. It also helps choose the correct storage in terms of performance and space by using virtual machine storage policies. A virtual machine storage policy helps you ensure that a virtual machine is guaranteed a specified level of storage performance or capacity, for example, the SSD/SAS/NL-SAS data store, spindle I/O, and redundancy.

Before you define a storage policy, you need to specify the storage requirements of the applications that run on the virtual machine. There are two types of storage capability: storage-vendor-specific and user-defined. Storage-vendor-specific storage capability comes from the storage array: the storage vendor provider informs vCenter Server that it can guarantee the use of storage features, and vCenter Server assigns this vendor-specific storage capability to each ESXi data store. User-defined storage capability is one that you define yourself and assign to each ESXi data store.

In vSphere 5.1/5.5, the storage policy is named VM storage profile. Virtual machine storage policies can include one or more storage capabilities and can be assigned to one or more VMs. A virtual machine can be checked for storage compliance to confirm that it is placed on compliant storage. When you migrate, create, or clone a virtual machine, you can select the storage policy and apply it to that machine.

The following procedure shows how to create a storage policy and apply it to a virtual machine in vSphere 5.1 using user-defined storage capability. Note that the vSphere ESXi host requires the Enterprise Plus license edition to enable the VM storage profile feature:

1. Log in to vCenter Server using vSphere Client.
2. Click on the Home button in the top bar, and choose the VM Storage Profiles button under Management.
3. Click on the Manage Storage Capabilities button to create a user-defined storage capability.
4. Click on the Add button to create the name of the storage capability, for example, SSD Storage, SAS Storage, or NL-SAS Storage. Then click on the Close button.
5. Click on the Create VM Storage Profile button to create the storage policy. Input the name of the VM storage profile, as shown in the following screenshot, and then click on the Next button to select the user-defined storage capability defined in step 4. Click on the Finish button.
6. Assign the user-defined storage capability to your specified ESXi data store: right-click on the data store that you plan to assign the user-defined storage capability to. This capability is the one defined in step 4.
7. After creating the VM storage profile, click on the Enable VM Storage Profiles button, and then click on the Enable button to enable the profiles. The following screenshot shows Enable VM Storage Profiles. After enabling the VM storage profile, you can see VM Storage Profile Status as Enabled and Licensing Status as Licensed, as shown in this screenshot.
8. We have successfully created the VM storage profile; now we have to associate it with a virtual machine. Right-click on a virtual machine that you plan to apply the VM storage profile to, choose VM Storage Profile, and then choose Manage Profiles. From the drop-down menu of VM Storage Profile, select your profile. You can then click on the Propagate to disks button to associate all virtual disks, or decide manually which virtual disks to associate with that profile. Click on OK.
9. Finally, you need to check the compliance of the VM storage profile on this virtual machine. Click on the Home button in the top bar, choose the VM Storage Profiles button under Management, go to Virtual Machines, and click on the Check Compliance Now button. The Compliance Status will display Compliant after the compliance check.

Pluggable Storage Architecture (PSA) exists in the SCSI middle layer of the VMkernel storage stack. PSA allows third-party storage vendors to use their own failover and load balancing techniques for their specific storage arrays. A VMware ESXi host uses a multipathing plugin to control the ownership of the device path and LUN. The VMware default Multipathing Plugin (MPP) is called the VMware Native Multipathing Plugin (NMP), which includes two subplugins as components: the Storage Array Type Plugin (SATP) and the Path Selection Plugin (PSP). SATP is used to handle path failover for a storage array, and PSP is used to issue an I/O request to a storage array. The following diagram shows the architecture of PSA.

This table lists the operation tasks of PSA and NMP on the ESXi host:

PSA | NMP
Discovers the physical paths | Manages the physical paths
Handles I/O requests to the physical HBA adapter and logical devices | Creates, registers, and deregisters logical devices
Uses predefined claim rules to control storage devices | Selects an optimal physical path for the request

The following is an example of the operation of PSA in the VMkernel storage stack:

1. The virtual machine sends out an I/O request to a logical device that is managed by the VMware NMP.
2. The NMP calls the PSP assigned to this logical device.
3. The PSP selects a suitable physical path on which to send the I/O request.
4. When the I/O operation completes successfully, the NMP reports that the I/O operation is complete. If the I/O operation reports an error, the NMP calls the SATP.
5. The SATP fails over to the new active path.
6. The PSP selects a new active path from all available paths and continues the I/O operation.

The following diagram shows the operation of PSA.
VMware vSphere provides three options for the path selection policy: Most Recently Used (MRU), Fixed, and Round Robin (RR). The following table lists the advantages and disadvantages of each policy:

Path selection | Description | Advantage | Disadvantage
MRU | The ESXi host selects the first preferred path at system boot time. If this path becomes unavailable, the ESXi host changes to another active path. | You can select your preferred path manually on the ESXi host. | The ESXi host does not revert to the original path when that path becomes available again.
Fixed | You can select the preferred path manually. | The ESXi host reverts to the original path when the preferred path becomes available again. | If the ESXi host cannot use the preferred path, it selects an available path at random.
RR | The ESXi host uses automatic path selection. | Storage I/O goes across all available paths, enabling load balancing across all of them. | The storage is required to support ALUA mode, and you cannot know which path is preferred because the storage I/O goes across all available paths.

The following is the procedure for changing the path selection policy on an ESXi host:

1. Log in to vCenter Server using vSphere Client.
2. Go to the configuration of your selected ESXi host, choose the data store that you want to configure, and click on the Properties… button.
3. Click on the Manage Paths… button.
4. Select the policy from the drop-down menu and click on the Change button.

If you plan to deploy a third-party MPP on your ESXi host, you need to follow the storage vendor's instructions for the installation. An example is EMC PowerPath/VE for VMware, a piece of path management software for VMware vSphere and Microsoft Hyper-V servers that also provides load balancing and path failover features.
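The path policy can also be set from PowerCLI. This is a minimal sketch with a placeholder host name; check your array vendor's recommendation before changing policies in production:

# Set every disk LUN on a host to Round Robin
Get-ScsiLun -VmHost (Get-VMHost "esxi01.example.com") -LunType disk |
    Set-ScsiLun -MultipathPolicy RoundRobin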
VMware vSphere Storage DRS

VMware vSphere Storage DRS (SDRS) manages the placement of virtual machines in a data store cluster. Based on storage capacity and I/O latency, it uses VMware Storage vMotion to migrate virtual machines and keep the data stores in a balanced state. It aggregates storage resources and enables both the initial placement of a virtual machine's virtual disks (VMDKs) and load balancing of existing workloads.

What is a data store cluster? It is a collection of ESXi data stores grouped together and enabled for vSphere SDRS. SDRS can work in two modes: manual mode and fully automated mode. If you enable SDRS in your environment, when the vSphere administrator creates or migrates a virtual machine, SDRS places all the files (VMDKs) of this virtual machine in the same data store or in different data stores in the cluster, according to the SDRS affinity or anti-affinity rules.

The VMware ESXi host cluster has two key features: VMware vSphere High Availability (HA) and VMware vSphere Distributed Resource Scheduler (DRS). SDRS is different from the host cluster DRS: the latter balances virtual machines across ESXi hosts based on memory and CPU usage, while SDRS balances virtual machines across the SAN storage (ESXi data stores) based on storage capacity and IOPS. The following table lists the differences between the SDRS affinity and anti-affinity rules:

Name of SDRS rule | Description
VMDK affinity rules | This is the default SDRS rule for all virtual machines. It keeps each virtual machine's VMDKs together on the same ESXi data store.
VMDK anti-affinity rules | These keep each virtual machine's VMDKs on different ESXi data stores. You can apply this rule to all of a virtual machine's VMDKs or to dedicated VMDKs.
VM anti-affinity rules | These keep virtual machines on different ESXi data stores. This rule is similar to the ESXi DRS anti-affinity rules.

The following is the procedure to create a storage DRS cluster in vSphere 5:

1. Log in to vCenter Server using vSphere Client.
2. Go to Home and click on the Datastores and Datastore Clusters button.
3. Right-click on the data center and choose New Datastore Cluster.
4. Input the name of the SDRS cluster and then click on the Next button.
5. Choose the Storage DRS mode, either Manual Mode or Fully Automated Mode:
   Manual Mode: Placement and migration of virtual machines are executed manually by the user, based on the placement and migration recommendations.
   Fully Automated Mode: Placement of virtual machines is executed automatically, based on the runtime rules.
6. Set up the SDRS Runtime Rules and then click on the Next button:
   Enable I/O metric for SDRS recommendations is used to enable I/O load balancing.
   Utilized Space is the percentage of consumed space allowed before storage DRS executes an action.
   I/O Latency is the percentage of consumed latency allowed before storage DRS executes an action. This setting takes effect only if the Enable I/O metric for SDRS recommendations checkbox is selected.
   No recommendations until utilization difference between source and destination is is used to configure the space utilization difference threshold.
   I/O imbalance threshold is used to define the aggressiveness of IOPS load balancing. This setting takes effect only if the Enable I/O metric for SDRS recommendations checkbox is selected.
7. Select the ESXi hosts that are required to create the SDRS cluster. Then click on the Next button.
8. Select the data stores that are required to join the data store cluster, and click on the Next button to complete.

After creating the SDRS cluster, go to the vSphere Storage DRS panel on the Summary tab of the data store cluster; you can see that Storage DRS is Enabled. The Storage DRS tab of the data store cluster displays the recommendations, placements, and reasons. Click on the Apply Recommendations button if you want to apply the recommendations, and click on the Run Storage DRS button if you want to refresh them.
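A comparable setup can be sketched in PowerCLI (the data store cluster cmdlets appeared in PowerCLI 5.1; the names and thresholds below are placeholder assumptions, not recommendations):

# Create a data store cluster and enable SDRS in fully automated mode
$dsc = New-DatastoreCluster -Name "SDRS-Cluster01" -Location (Get-Datacenter "DC01")
Set-DatastoreCluster -DatastoreCluster $dsc -SdrsAutomationLevel FullyAutomated `
    -IOLoadBalanceEnabled $true -SpaceUtilizationThresholdPercent 80 `
    -IOLatencyThresholdMillisecond 15

# Move existing data stores into the cluster
Move-Datastore -Datastore (Get-Datastore "Datastore01","Datastore02") -Destination $dsc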
VMware vSphere Storage I/O Control

What is VMware vSphere Storage I/O Control? It is used to share and limit storage I/O resources, for example, IOPS. You can control the number of storage IOPS allocated to a virtual machine. If a certain virtual machine requires more storage I/O resources, vSphere Storage I/O Control can ensure that this virtual machine gets more storage I/O than the other virtual machines.

The following example shows the difference between an environment with vSphere Storage I/O Control and one without it. In the diagram without vSphere Storage I/O Control, VM 2 and VM 5 need more IOPS but can allocate only a small amount of I/O resources, while VM 1 and VM 3 can allocate a large amount of I/O resources even though both actually require only a small amount of IOPS; in this case, the storage resources are wasted and overprovisioned. In the diagram with vSphere Storage I/O Control enabled on the ESXi host cluster, VM 2 and VM 5 require more IOPS and can allocate a large amount of I/O resources, while VM 1, VM 3, and VM 4 require and receive only a small amount of IOPS. Enabling Storage I/O Control therefore helps reduce waste and overprovisioning of the storage resources.

When you enable VMware vSphere Storage DRS, vSphere Storage I/O Control is automatically enabled on the data stores in the data store cluster. The following is the procedure to enable vSphere Storage I/O Control on an ESXi data store and set up storage I/O shares and limits using vSphere Client 5:

1. Log in to vCenter Server using vSphere Client.
2. Go to the Configuration tab of the ESXi host, select the data store, and then click on the Properties… button.
3. Select Enabled under Storage I/O Control, and click on the Close button.
4. After Storage I/O Control is enabled, you can set up the storage I/O shares and limits on the virtual machine: right-click on the virtual machine and select Edit Settings.
5. Click on the Resources tab in the virtual machine properties box, and select Disk. You can individually set each virtual disk's Shares and Limit fields. By default, all virtual machine shares are set to Normal with Unlimited IOPS.

Summary

In this article, you learned what VAAI and VASA are, and how a vSphere administrator configures a storage profile in vCenter Server and assigns it to an ESXi data store. We also covered the benefits of vSphere Storage I/O Control and vSphere Storage DRS, and how to troubleshoot a storage performance problem in a vSphere host and find its root cause.

Resources for Article:

Further resources on this subject:
Essentials of VMware vSphere [Article]
Introduction to vSphere Distributed switches [Article]
Network Virtualization and vSphere [Article]


Creating a sample C#.NET application

Packt
27 Jul 2012
4 min read
First, open C#.NET. Then, go to File | New Project | Windows Form Applications. Type the desired name for our project and click on the OK button.

Adding references

We need to add a reference to the System.Management.Automation.dll assembly. Adding this assembly is tricky; first, we need to copy the assembly file to our application folder using the following command:

Copy %windir%\assembly\GAC_MSIL\System.Management.Automation\1.0.0.0__31bf3856ad364e35\System.Management.Automation.dll C:\Code\XA65Sample

where C:\Code\XA65Sample is the folder of our application. Then we need to add the reference to the assembly: in the Project menu, select Add Reference, click on the Browse tab, and search for and select the file System.Management.Automation.dll. After referencing the assembly, we need to add the following directive statements to our code:

using System.Management.Automation;
using System.Management.Automation.Host;
using System.Management.Automation.Runspaces;

Also, adding the following directive statements will make it easier to work with the collections returned from the commands:

using System.Collections.Generic;
using System.Collections.ObjectModel;

Creating and opening a runspace

To use the Microsoft Windows PowerShell and Citrix XenApp Commands from managed code, we must first create and open a runspace. A runspace provides a way for the application to execute pipelines programmatically. Runspaces construct a logical model of execution using pipelines that contain cmdlets, native commands, and language elements.

So let's go and create a new function called ShowXAServers for the new runspace:

void ShowXAServers()

Then the following code creates a new instance of a runspace and opens it:

Runspace myRunspace = RunspaceFactory.CreateRunspace();
myRunspace.Open();

The preceding piece of code provides access only to the cmdlets that come with the default Windows PowerShell installation. To use the cmdlets included with the XenApp Commands, we must create the runspace using an instance of the RunspaceConfiguration class. The following code creates a runspace that has access to the XenApp Commands:

RunspaceConfiguration rsConfig = RunspaceConfiguration.Create();
PSSnapInException snapInException = null;
PSSnapInInfo info = rsConfig.AddPSSnapIn("Citrix.XenApp.Commands", out snapInException);
Runspace myRunSpace = RunspaceFactory.CreateRunspace(rsConfig);
myRunSpace.Open();

This code specifies that we want to use Windows PowerShell in the XenApp Commands context. This step gives us access to the Windows PowerShell cmdlets and the Citrix-specific cmdlets.

Running a cmdlet

Next, we need to create an instance of the Command class using the name of the cmdlet that we want to run. The following code creates an instance of the Command class that will run the Get-XAServer cmdlet, adds the command to the Commands collection of the pipeline, and finally runs the command by calling the Pipeline.Invoke method:

Pipeline pipeLine = myRunSpace.CreatePipeline();
Command myCommand = new Command("Get-XAServer");
pipeLine.Commands.Add(myCommand);
Collection<PSObject> commandResults = pipeLine.Invoke();
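For comparison, the same query can be run directly in a PowerShell session without any C# hosting code; this is a minimal sketch, assuming the Citrix XenApp Commands snap-in is installed on the machine:

# Load the XenApp snap-in and list the server names
Add-PSSnapin Citrix.XenApp.Commands
Get-XAServer | Select-Object -ExpandProperty ServerName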
Displaying results

Now we run the command Get-XAServer on the shell and get this output: in the left-hand column are the properties of the cmdlet, and in this case we are looking for the first one, ServerName. We are going to redirect the output of the ServerName property to a ListBox, so the next step will be to add ListBox and Button controls. The ListBox will show the list of XenApp servers when we click the button. Then we need to add the following code at the end of the ShowXAServers function:

foreach (PSObject cmdlet in commandResults)
{
    string cmdletName = cmdlet.Properties["ServerName"].Value.ToString();
    listBox1.Items.Add(cmdletName);
}

The full code of the sample will look like this: And this is the final output of the application when we run it:

Passing parameters to cmdlets

We can pass parameters to cmdlets using the Parameters.Add option. We can add multiple parameters; each parameter requires its own line. For example, we can add the ZoneName parameter to filter the servers that are members of the US-ZONE zone:

Command myCommand = new Command("Get-XAServer");
myCommand.Parameters.Add("ZoneName", "US-ZONE");
pipeLine.Commands.Add(myCommand);

Summary

In this article, we have learned about managing XenApp with Windows PowerShell and developed a sample .NET application in C#.NET. Specifically, we saw:

How to list all XenApp servers by using Citrix XenApp Commands
How to add a reference to the System.Management.Automation.dll assembly
How to create and open a runspace, which lets us execute pipelines programmatically
How to create an instance of the Command class using the name of a cmdlet
How to pass parameters to cmdlets

Further resources on this subject:
Designing a XenApp 6 Farm [Article]
Getting Started with XenApp 6 [Article]
Microsoft Forefront UAG Building Blocks [Article]


Troubleshooting and Gotchas in Oracle VM Manager 2.1.2

Packt
08 Oct 2009
4 min read
As more and more users start to explore and use Oracle VM Manager, more troubleshooting tips and tweaks will come up. This is by no means an exhaustive list, and it is not intended to be. Please participate as much as possible in the forums and share your tips and tricks with the community.

Oracle VM Manager login takes too much time

I have faced this issue very often, and if you are unlucky you may get this type of error while installing. Note that the following error message says nothing about the underlying memory issue:

Failed at "Could not get DeploymentManager".
This is typically the result of an invalid deployer URI format being supplied, the target server not being in a started state or incorrect authentication details being supplied.
More information is available by enabling logging -- please see the Oracle Containers for J2EE Configuration and Administration Guide for details.
Failed
Please see /var/log/ovm-manager/ovm-manager.log for more information.
Deploying application failed.
Please check your environment, and re-run the script: /bin/sh scripts/deployApp.sh
Aborting installation. Please check the environment and rerun runInstaller.sh.

But when you upgrade your VM Manager OS with more memory, you'll be able to continue with the installation. Sometimes, you may also get other kinds of errors, such as the following one:

Internal Exception: java.lang.OutOfMemoryError: Java heap space

These clearly point to a memory issue and suggest that your OC4J may need more memory. Let's run the following command to check the log information:

cat /var/log/ovm-manager/oc4j.log | grep "heap"

If your OC4J ran out of memory, you would typically see that heap space error. To fix this, go back to the console and examine the value of the OC4J_JVM_ARGS variable in the /opt/oc4j/bin/oc4j configuration file:

OC4J_JVM_ARGS="-XX:PermSize=256m -XX:MaxPermSize=512m"

Edit this line to give more memory to OC4J, save the file, and quit. Then restart the OC4J service:

service oc4j stop
service oc4j start

HVM guest creation fails

Many actions and functionalities within Oracle VM Manager require the host to be truly HVM-aware, which means that 64-bit (preferably) Oracle VM Servers must be running with hardware virtualization support at the chipset level. Both Intel and AMD support this, and it is highly unlikely that you will come across new machines that do not. However, always check the compatibility within a specific CPU family and check whether the support is turned on or off, as you could be reusing older hardware that may not support hardware-assisted virtualization. If you are confronted with the following message:

"Error: There is no server supporting hardware virtualization in the selected server pool."

then you'll have a reason to worry and check your hardware. Carry out the following command on the VM Server that does not allow you to create an HVM guest (the original text greps for 'vmx|smx', but the CPU flags that actually indicate hardware virtualization support are vmx for Intel VT-x and svm for AMD-V):

cat /proc/cpuinfo | grep -E 'vmx|svm'

If your hardware is HVM-aware, you should get some output, as shown in the following screenshot. If you don't get a response, then you might have a problem. For instance, we pick another VM Server which we know for sure does not have HVM support or hardware-assisted virtualization. Also ensure that the virtualization support is enabled at the hardware level in the BIOS.
Then run the following command to see if the operating system supports HVM (on an Oracle VM Server, the Xen capabilities can be listed with, for example, xm info | grep xen_caps):

As you can see in the preceding screenshot, we then quickly logged in to the VM Server which we knew does not support HVM and did not get a reply from the 172.22.202.111 VM Server, whereas the x64 version with built-in, BIOS-enabled HVM support returns the values in the form of xen_caps:

xen_caps : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64

So if your CPU does not support HVM, use the PVM (paravirtualized method) to create your VM.


Deploying App-V 5 in a Virtual Environment

Packt
12 Aug 2015
10 min read
In this article written by James Preston, author of the book Microsoft Application Virtualization Cookbook, we will cover the following topics:

Enabling the App-V shared content store mode
Publishing applications through Microsoft RemoteApp
Pre-caching applications in the local store
Publishing applications through Citrix StoreFront

(For more resources related to this topic, see here.)

App-V 5 is the perfect companion for your virtual session or desktop delivery environment, allowing you to abstract applications from the user and desktop, as shown in the following image, and, in turn, reducing infrastructure requirements through features such as shared content store mode. In this article, we will cover how to deploy App-V 5 in these environments.

Enabling the App-V shared content store mode

In this recipe, we will cover enabling the App-V shared content store mode, which prevents the caching of App-V files on a client so that the application is launched directly from the server hosting it. This feature is ideal for environments where there is ample network bandwidth between remote desktop session hosts (or client virtual machines in a VDI deployment) and where administrators are looking to reduce the overall storage needs of the hosts. While some files are still cached on the local machine (for example, for shortcuts or Shell extensions), the following screenshot shows the amount of storage saved on an Office 2013 deployment where the shared content store mode is turned on (the screenshot on the right).

With the shared content store mode enabled, you can check the amount of storage space used by a package by checking the size of the individual package's folder at the following path on a client where the package is deployed (where Package ID is the GUID assigned to that package): C:\ProgramData\App-V\<Package ID>.

Getting ready

To complete these steps, you will need to deploy a Remote Desktop Services environment (on the server RDS). The server RDS must also have the App-V client and any prerequisites deployed on it.

How to do it…

The following list shows you the high-level tasks involved in this recipe (all of the actions in this recipe will take place on the server DC):

1. Link the App-V 5 Settings Group Policy Object to the Remote Desktop Servers OU.
2. Create a Group Policy Object for the server RDS.
3. Enable the shared content store mode within that policy.

The implementation of the preceding tasks is as follows:

1. On the server DC, load the Group Policy Management console.
2. Expand the tree structure to display the Remote Desktop Servers Organizational Unit and click on Link an Existing GPO…. From the window that appears, select the App-V 5 Settings policy and click on OK.
3. Next, right-click on the OU and select Create a GPO in this domain, and Link it here.... Set the name of the policy as App-V 5 Shared Content Store and click on OK.
4. Right-click on the policy you have just created and click on Edit…. In the window that appears, right-click on App-V 5 Shared Content Store and click on Properties. Then, tick the Disable User Configuration settings box and click on OK.
5. Next, navigate to Computer Configuration | Policies | Administrative Templates | System | App-V | Streaming and double-click on Shared Content (SCS) mode. Set the policy to Enabled and click on OK.
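The same setting can also be flipped locally with the App-V client's PowerShell module. This is a minimal sketch, assuming an elevated session on the RDS host; the Group Policy approach above remains the way to enforce it consistently across hosts:

# Enable shared content store mode on this App-V 5 client
Set-AppvClientConfiguration -SharedContentStoreMode 1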
There's more…

To verify that the setting is applied on the server RDS, open a PowerShell session and run the following command:

Get-AppvClientConfiguration

If the SharedContentStoreMode value is 1 and the SetByGroupPolicy value is True, then the policy is correctly applied.

Publishing applications through Microsoft RemoteApp

In this recipe, we will publish the Audacity package to the RDS server so that users can access it through Remote Desktop Web Access.

Getting ready

To complete these steps, you will need to deploy a Remote Desktop Services environment (on the server RDS).

How to do it…

The following list shows you the high-level tasks involved in this recipe (all of the actions in this recipe will take place on the server RDS):

1. Create a Security Group for your remote desktop session hosts.
2. Publish the Audacity package to that Security Group through the App-V Management console.
3. Publish the Audacity package through Server Manager.

The implementation of the preceding tasks is as follows:

1. On the server DC, launch the Active Directory Users and Computers console, navigate to demo.org | Domain Groups, and create a new Security Group called RDS Session Hosts. Add the server RDS to the group that you just created.
2. On your Windows 8 client PC, log in to the App-V Management console as Sam Adams, select the Audacity package, and click on the Edit option next to the AD ACCESS option.
3. Under FIND VALID ACTIVE DIRECTORY GROUP AND GRANT ACCESS, enter demo.org\RDS Session Hosts and click on Check. In the drop-down menu that appears, select RDS Session Hosts and click on Grant Access.
4. On the server RDS, wait for the App-V Publishing Refresh to occur (or force the process manually) for the Audacity shortcut to appear on the desktop.
5. Launch Server Manager, and from the left-hand side bar, select Remote Desktop. From the left-hand side, select QuickSessionCollection (the collection created by default).
6. Under REMOTEAPP PROGRAMS, navigate to Tasks | Publish RemoteApp Programs. In the window that appears, tick the box next to Audacity and click on Next, as shown in the following screenshot. Note that the path to the Audacity application points at the App-V installation root in %SYSTEMDRIVE%\ProgramData\Microsoft\AppV.
7. Review the confirmation window and click on Publish.
8. On your Windows 8 client, open Internet Explorer and browse to https://rds.demo.org/RDWeb, accepting any invalid SSL certificate prompts and allowing the Remote Desktop plugin to run. Log in as Sam Adams and launch the Audacity application.

There's more…

It is possible to limit applications within a remote desktop collection to users in a specific Security Group. To do this, right-click on the application as it appears under REMOTEAPP PROGRAMS and click on Edit Properties. In the window that appears, click on User Assignment and set the radio button to Only specified users and groups. You will now be able to access the Add… button, which brings up an Active Directory search dialog, from where you can add the Audacity Users security group to limit the application to only the users in that group.
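For reference, publishing a RemoteApp can also be scripted with the RemoteDesktop PowerShell module that ships with Windows Server 2012 R2. This is a sketch only; the collection name matches this recipe, but the executable path (with its <Package ID> placeholder) and connection broker name are assumptions to be adapted:

# Publish an App-V delivered application as a RemoteApp
Import-Module RemoteDesktop
New-RDRemoteApp -CollectionName "QuickSessionCollection" `
    -DisplayName "Audacity" `
    -FilePath "C:\ProgramData\Microsoft\AppV\Client\Integration\<Package ID>\Root\audacity.exe" `
    -ConnectionBroker "rds.demo.org"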
Pre-caching applications in the local store

As an alternative to using the shared content store mode, applications can be forced to be cached in the local store on your RDS session hosts. This is advantageous in scenarios where the bandwidth from a central high-speed storage device is more expensive than providing dedicated storage to the RDS session hosts.

Getting ready

To complete these tasks, you will need to deploy a Remote Desktop Services environment (on the server RDS).

How to do it…

The following list shows you the high-level tasks involved in this recipe (all of the actions in this recipe will take place on the server DC):

1. Create a Group Policy Object for the server RDS.
2. Enable background application caching within that policy.

The implementation of the preceding tasks is as follows:

1. On the server DC, load the Group Policy Management console.
2. Expand the tree structure to display the Remote Desktop Servers Organizational Unit, right-click on the OU, and select Create a GPO in this domain, and Link it here.... Set the name of the policy to App-V 5 Cache Applications and click on OK.
3. Right-click on the policy you have just created and click on Edit…. In the window that appears, right-click on App-V 5 Cache Applications and click on Properties, tick the Disable User Configuration settings box, and click on OK.
4. Next, navigate to Computer Configuration | Policies | Administrative Templates | System | App-V | Streaming and double-click on Specify what to load in background (aka AutoLoad). Set the policy to Enabled, with Autoload Options set to All, and click on OK.

There's more…

Individual applications can be targeted for caching using the Mount-AppvClientPackage PowerShell command. For example, to mount the package named Audacity 2.0.6 (which has already been published to the Remote Desktop session host), the administrator would run the following command:

Mount-AppvClientPackage -Name "Audacity 2.0.6"

In the resulting output, the PercentLoaded value is shown as 100 to indicate that the package is completely loaded within the local store.

Publishing applications through Citrix StoreFront

Apart from being a great addition to the Microsoft virtual environment, App-V is also supported by Citrix XenDesktop. In this recipe, we will look at publishing the Audacity package through Citrix StoreFront.

Getting ready

In addition to the previous requirements, the servers XenDesktop and XD-HOST will be used in this recipe. XenDesktop is configured with an installation of XenDesktop 7.6 with a Machine Catalogue containing the server XD-HOST (configured as a Server OS machine) and a delivery group that has been set up to serve both applications and desktops. The server XD-HOST should have the App-V RDS client installed. Finally, the App-V applications that you wish to deploy through Citrix StoreFront must also be published to the server XD-HOST through the App-V Management console; in this case, Audacity.

How to do it…

The following list shows you the high-level steps involved in this recipe (all of the actions in this recipe will take place on the server XenDesktop):

1. Set up App-V Publishing in Citrix Studio.
2. Publish applications through the Delivery Group.

The implementation of the preceding tasks is as follows:

1. On the server XenDesktop, launch Citrix Studio.
2. Navigate to Citrix Studio | Configuration, right-click on App-V Publishing, and click on Add App-V Publishing.
3. In the window that appears, enter the details of your App-V Management and Publishing servers, click on Test connection… to confirm that the details are correct, and then click on Save.
4. Navigate to Delivery Groups, right-click on the delivery group you have created, and click on Add Applications.
5. On the introduction page of the wizard that appears, click on Next.
6. On the applications page of the wizard, select Audacity from the list provided (which will be discovered automatically from your server XD-HOST) and click on Next. Note that you can also select multiple applications to publish at the same time.
7. Review the summary screen and click on Finish.

There's more…

Similar to publishing through the Microsoft Remote Desktop web app, it is possible to limit access to your applications to specific users or security groups. To limit access, right-click on your application in the Applications tab of the Delivery Groups page and click on Properties. In the window that appears, select the Limit Visibility tab and select Limit visibility for this application to the users listed below. Click on the Add users… button to choose users and security groups from Active Directory to be included in the group.

Summary

In this article, we learned about enabling the App-V shared content store mode, which prevents the caching of App-V files on the client system. We also looked at publishing applications through Microsoft RemoteApp, which publishes the Audacity package to the RDS server so that users can access it from Remote Desktop Web Access. Then we learned about pre-caching applications in the local store, which forces applications to be cached on the RDS session hosts and has certain advantages. Finally, we learned about publishing applications through Citrix StoreFront, where we published the Audacity package.

Resources for Article:

Further resources on this subject:
Virtualization [article]
Customization in Microsoft Dynamics CRM [article]
Installing Postgre SQL [article]


Planning Desktop Virtualization

Packt
16 Oct 2014
3 min read
This article by Andy Paul, author of the book Citrix XenApp® 7.5 Virtualization Solutions, explains VDI and its building blocks in detail.

(For more resources related to this topic, see here.)

The building blocks of VDI

The first step in understanding Virtual Desktop Infrastructure (VDI) is to identify what VDI means for your environment. VDI is an all-encompassing term for most virtual infrastructure projects. For this book, we will use the definitions cited in the following sections for clarity.

Hosted Virtual Desktop (HVD)

A Hosted Virtual Desktop is a machine running a single-user operating system such as Windows 7 or Windows 8, sometimes called a desktop OS, which is hosted on a virtual platform within the data center. Users remotely access a desktop that may or may not be dedicated but runs with isolated resources. This is typically a Citrix XenDesktop virtual desktop, as shown in the following figure:

Hosted Virtual Desktop model; each user has dedicated resources

Hosted Shared Desktop (HSD)

A Hosted Shared Desktop is a machine running a multiuser operating system such as Windows 2008 Server or Windows 2012 Server, sometimes called a server OS, possibly hosted on a virtual platform within the data center. Users remotely access a desktop whose resources may be shared among multiple users. This has historically been a Citrix XenApp published desktop, as demonstrated in the following figure:

Hosted Shared Desktop model; each user shares the desktop server resources

Session-based Computing (SBC)

With Session-based Computing, users remotely access applications or other resources on a server running in the data center. These are typically client/server applications. The server may or may not be virtualized. This is a multiuser environment, but the users do not access the underlying operating system directly. This is typically a Citrix XenApp hosted application, as shown in the following figure:

Session-based Computing model; each user accesses applications remotely, but shares resources

Application virtualization

In application virtualization, applications are centrally managed and distributed, but they are executed locally. This may be used in conjunction with, or separately from, the options mentioned previously. Application virtualization typically involves application isolation, allowing the applications to operate independently of any other software. Examples include Citrix XenApp offline applications, Citrix profiled applications, Microsoft App-V application packages, and VMware ThinApp solutions. Have a look at the following figure:

Application virtualization model; the application packages execute locally

The preceding list is not a definitive list of options, but it serves to highlight the most commonly used elements of VDI. Other options include client-side hypervisors for local execution of a virtual desktop, hosted physical desktops, and cloud-based applications. Depending on the environment, all of these components can be relevant.

Summary

In this article, we learned about VDI and understood its building blocks in detail.

Resources for Article:

Further resources on this subject:
Installation and Deployment of Citrix Systems®' CPSM [article]
Designing, Sizing, Building, and Configuring Citrix VDI-in-a-Box [article]
Introduction to Citrix XenDesktop [article]

An Introduction to Microsoft Remote Desktop Services and VDI

Packt
09 Jul 2014
10 min read
(For more resources related to this topic, see here.)

Remote Desktop Services and VDI

What Terminal Services did was provide simultaneous access to the desktop running on a server or group of servers. The name was changed to Remote Desktop Services (RDS) with the release of Windows Server 2008 R2, but RDS actually encompasses both VDI and what was Terminal Services, now referred to as Session Virtualization. Session Virtualization is still available alongside VDI in Windows Server 2012 R2; each user who connects to the server is granted a Remote Desktop session (RD Session) on an RD Session Host and shares this same server-based desktop.

Microsoft refers to any kind of remote desktop as a Virtual Desktop. These are grouped into Collections that are made available to specific groups of users and managed as one; a Session Collection is a group of virtual desktops based on Session Virtualization. It's important to note that what users see with Session Virtualization is the desktop interface delivered with Windows Server, which is similar to but not the same as the Windows client interface; for example, it does have a modern UI by default. We can add and remove user interface features in Windows Server to change the way it looks; one of these, the Desktop Experience option, exists specifically to make Windows Server look more like the Windows client, and in Windows Server 2012 R2, adding this feature gives you the Windows Store just as you have it in Windows 8.1.

VDI also provides remote desktops to our users over RDP but does this in a completely different way. In VDI, each user accesses a VM running a Windows client OS, so the user experience looks exactly the same as it would on a laptop or physical desktop. In Windows VDI, these Collections of VMs run on Hyper-V, and our users connect to them with RDP just as they can connect to the RD Sessions described above. Other parts of RDS are common to both, as we'll see shortly, but what is important for now is that RDS manages the VDI VMs for us and organises which users are connected to which desktops and so on. So we don't directly create VMs in a Collection or directly set up security on them. Instead, a Collection is created from a template, which is a special VM that is never turned on, as the Guest OS is sysprepped and turning it on would instantly negate that. The VMs in a Collection inherit this master copy of the Guest OS and any installed applications. The settings of the template VM are also inherited by the virtual desktops in a Collection (CPU, memory, graphics, networking, and so on), and as we'll see, there are a lot of VM settings that specifically apply to VDI rather than to VMs running a server OS as part of our infrastructure.

To reiterate, in Windows Server 2012 R2, VDI is one option in an RDS deployment, and it's even possible to use VDI alongside RD Sessions for our users. For example, we might decide to provide RD Sessions for our call center staff and use VDI for our remote workforce. Traditionally, RD Session Hosts have been set up on physical servers, as older versions of Hyper-V and VMware weren't capable of supporting heavy workloads like this. However, we can now put up to 4 TB of RAM and 64 logical processors into one VM (physical hardware permitting) and run large RD Session deployments virtually.

Our users connect to our Virtual Desktop Collections of whatever kind with a Remote Desktop Client, which connects to the RDS servers using the Remote Desktop Protocol (RDP).
When we connect to any server with the Microsoft Terminal Services Client (MSTSC.exe), we are using RDP, but without setting up RDS there are only two administrative sessions available per server. Many of the advantages and disadvantages of running any kind of remote desktop apply to both solutions.

Advantages of Remote Desktops

Given that the desktop computing for our users is now going to be done in the data center, we only need to deploy powerful desktops and laptops to those users who are going to have difficulty connecting to our RDS infrastructure. Everyone else could either be equipped with thin client devices, or given access from devices they already have when working remotely, such as tablets or their home PCs and laptops. Thin client computing has evolved in line with advances in remote desktop computing, and the latest devices from 10Zig, Dell, and HP, among others, support multiple high-resolution monitors, smart cards, and webcams for unified communications, which are also enabled in the latest version of RDP (8.1).

Using remote desktops can also reduce overall power consumption for IT, as thin clients plus servers in an efficiently cooled data center will consume less power than the equivalent fleet of desktop PCs; in one case I saw, this resulted in a green charity saving 90 percent of its IT power bill.

Broadly speaking, managing remote desktops ought to be easier than managing their physical equivalents; for a start, they'll be running on standard hardware, so installing and updating drivers won't be so much of an issue. RDS has specific tooling for this to create a largely automatic process for keeping our collections of remote desktops in a desired state. VDI doesn't exist in a vacuum, and there are other Microsoft technologies that make any desktop management easier with other types of virtualization:

User profiles: They have long been a problem for desktop specialists. There are dedicated technologies built into Session Virtualization and VDI to allow users' settings and data to be stored away from the VHD with the Guest OS on it. Techniques such as folder redirection can also help here, and new for Windows Server 2012 is User Experience Virtualization (UE-V), which provides a much richer set of tools that work across VDI, RD Sessions, and physical desktops to ensure the user has the same experience no matter what they are using for a Windows client desktop.

Application Virtualization (App-V): This allows us to deploy applications to the users who need them, when and where they need them, so we don't need to create different desktops for the different types of users who need special applications; we just need a few of these, deploying only generic applications and those that can't make use of App-V.

Even if App-V is not deployed, VDI administrators have total control over remote desktops, as we have any number of security techniques at our disposal, and if the worst comes to the worst, we can remove any installed applications every time a user logs off! The simple fact that the desktops and applications we are providing to our users are now running on servers under our direct control also increases our IT security. Patching is now easy, particularly for our remote workforce, as whether they are on site or working remotely, their desktop is still in the data center. RDS in all its forms is then an ideal way of allowing a Bring Your Own Device (BYOD) policy. Users can bring whatever device they wish into work, or work at home on their own device (WHOYD is my own acronym for this!), by using an RD Client on that device and securely connecting with it. Then there are no concerns about partially or completely wiping users' own devices, or not being able to because they aren't in a connected state when they are lost or stolen.

VDI versus Session Virtualization

So why are there these two ways of providing remote desktops in Windows, and what are the advantages and disadvantages of each? First and foremost, Session Virtualization is always going to be much more efficient than VDI, as it provides more desktops on less hardware. This makes sense if we look at what's going on in each scenario if we have a hundred remote desktop users to provide for:

In Session Virtualization, we are running and sharing one operating system, and this will comfortably sit on 25 GB of disk. For memory, we need roughly 2 GB per CPU, and we can then allocate 100 MB of memory to each RD Session. So on a quad-CPU server, a hundred RD Sessions will need 8 GB + (100 MB * 100), which is less than 18 GB of RAM.

If we want to support that same hundred users on VDI, we need to provision a hundred VMs, each with its own OS. To do this we need 2.5 TB of disk, and if we give each VM 1 GB of RAM, then 100 GB of RAM is needed. This is a little unfair on VDI in Windows Server 2012 R2, as we can cut down on the disk space needed and be much more efficient with memory than this, but even with these new technologies we would need 70 GB of RAM and say 400 GB of disk.
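To make that comparison concrete, here is a quick back-of-the-envelope calculation in PowerShell using the rough figures above (these are the text's illustrative estimates, not vendor sizing guidance):

```powershell
# Rough sizing comparison for 100 remote desktop users.
$users = 100

# Session Virtualization: one shared OS (~25 GB disk), ~2 GB RAM per CPU on a
# quad-CPU host, plus ~100 MB RAM per RD Session.
$sessionRamGB = (4 * 2) + ($users * 100MB / 1GB)   # 8 + ~9.8 = just under 18 GB

# VDI: one VM per user, each with its own OS (~25 GB disk) and 1 GB RAM.
$vdiRamGB  = $users * 1     # 100 GB RAM
$vdiDiskGB = $users * 25    # 2,500 GB = 2.5 TB disk

"RD Sessions: ~{0:N1} GB RAM; VDI: {1} GB RAM and {2} GB disk" -f `
    $sessionRamGB, $vdiRamGB, $vdiDiskGB
```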
Remember that with RD Sessions, our users are going to be running the desktop that comes with Windows Server 2012 R2, not Windows 8.1. Our RDS users share the one desktop, so they cannot be given administrative rights to it. This may not be that important to them, but it can also affect what applications can be put onto the server desktop, and some applications just won't work on a server OS anyway. Users' sessions can't be moved from one Session Host to another while the user is logged in, so planned maintenance has to be carefully thought through and may mean working out of hours to patch servers and update applications if we want to avoid interrupting our users' work.

VDI, on the other hand, means that our users are using Windows 8.1, which they will be happier with, and this may well be the deciding factor regardless of cost, as our users are paying for this and their needs should take precedence. VDI can also be easier to manage without interrupting our users, as we can move running VMs around our physical hosts without stopping them.

Remote applications in RDS

Another part of the RDS story is the ability to provide remote access to individual applications rather than serving up a whole desktop. This is called RD RemoteApp, and it simply provides a shortcut to an individual application running either on a virtual desktop or a remote session desktop. I have seen this used for legacy applications that won't run on the latest versions of Windows, and to provide access to secure applications, as it's a simple matter to prevent cut and paste or any sharing of data between a remote application and the local device it's executing on. RD RemoteApps work by publishing only the specified applications installed on our RD Session Hosts or VDI VMs.
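As a hedged illustration of how a RemoteApp can be published from PowerShell in Windows Server 2012 R2, the following sketch uses the RemoteDesktop module; the collection name, application path, and broker name are examples, not values from this article:

```powershell
# Publish a single installed application from a session collection as a
# RemoteApp. All names below are hypothetical.
Import-Module RemoteDesktop

New-RDRemoteApp -CollectionName   "CallCenterApps" `
                -DisplayName      "Legacy LOB App" `
                -FilePath         "C:\LegacyApp\App.exe" `
                -ConnectionBroker "rdcb.contoso.com"
```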
Summary

This article discussed the advantages of remote desktops and how they can be provided in Windows, and compared VDI to Session Virtualization.

Resources for Article:

Further resources on this subject:
Designing, Sizing, Building, and Configuring Citrix VDI-in-a-Box [article]
Installing Virtual Desktop Agent – server OS and desktop OS [article]
VMware View 5 Desktop Virtualization [article]
Solving Some Not-so-common vCenter Issues

Packt
05 May 2015
7 min read
In this article by Chuck Mills, author of the book vCenter Troubleshooting, we will review some of the not-so-common vCenter issues that administrators could face while they work with the vSphere environment. The article will cover the following issues and provide the solutions:

The vCenter inventory shows no objects after you log in
You get the VPXD must be stopped to perform this operation message
Removing the vCenter plugins when they are no longer needed

(For more resources related to this topic, see here.)

Solving the problem of no objects in vCenter

After successfully completing the vSphere 5.5 installation (not an upgrade) process with no error messages whatsoever, you log in to vCenter with the account you used for the installation (in this case, the local administrator account). Surprisingly, you are presented with an inventory of 0. The first thing is to make sure you have given vCenter enough time to start. Considering the previously mentioned account was the account used to install vCenter, you would assume the account is granted appropriate rights that allow you to manage your vCenter Server. Also consider the fact that you can log in and receive no objects from vCenter. Then, you might try logging in with your domain administrator account. This makes you wonder, what is going on here?

After installing vCenter 5.5 using the Windows option, remember that the administrator@vsphere.local user will have administrator privileges for both the vCenter Single Sign-On server and vCenter Server. You log in using the administrator@vsphere.local account with the password you defined during the installation of the SSO server.

vSphere attaches the permissions along with assigning the role of administrator to the default account administrator@vsphere.local. These privileges are given for both the vCenter Single Sign-On server and the vCenter Server system. You must log in with this account after the installation is complete. After logging in with this account, you can configure your domain as an identity source. You can also give your domain administrator access to vCenter Server. Remember, the installation does not assign any administrator rights to the user account that was used to install vCenter. For additional information, review the Prerequisites for Installing vCenter Single Sign-On, Inventory Service, and vCenter Server document found at https://pubs.vmware.com/vsphere-51/index.jsp?topic=%2Fcom.vmware.vsphere.install.doc%2FGUID-C6AF2766-1AD0-41FD-B591-75D37DDB281F.html.

Now that you understand what is going on with the vCenter account, use the following steps to enable the use of your Active Directory account for managing vCenter. Add or verify your AD domain as an identity source using the following procedure:

Log in with administrator@vsphere.local.
Select Administration from the menu.
Choose Configuration under the Single Sign-On option. You will see the Single Sign-On | Configuration option only when you log in using the administrator@vsphere.local account.
Select the Identity Sources tab and verify that the AD domain is listed. If not, choose Active Directory (Integrated Windows Authentication) found at the top of the window.
Enter your Domain name and click on OK at the bottom of the window.
Verify that your domain was added to Identity Sources, as shown in the following screenshot:

Add the permissions for the AD account using the following steps:

Click on Home at the top left of the window.
Select vCenter from the menu options.
Select vCenter Servers and then choose the vCenter Server object.
Select the Manage tab and then the Permissions tab found in the vCenter Object window. Review the image that follows the steps to verify the process.
Click on the green + icon to add a permission.
Choose the Add button located at the bottom of the window.
Select the AD domain found in the drop-down option at the top of the window.
Choose a user or group you want to assign permissions to (the account named Chuck was selected for this example).
Verify that the user or group is selected in the window.
Use the drop-down options to choose the level of permissions (verify that Propagate to children is checked).

Now, you should be able to log into vCenter with your AD account. See the results of the successful login in the following screenshot. By adding the permissions to the account, you are able to log into vCenter using your AD credentials, which is much different than the earlier attempt.
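If you prefer to script the permission grant, the same result can be achieved from PowerCLI. This is a hedged sketch rather than a procedure from this article; the server name and principal are examples:

```powershell
# Grant an AD account the built-in Administrator role at the vCenter root
# object, propagating to all children. Names below are hypothetical.
Connect-VIServer -Server vcenter.lab.local

New-VIPermission -Entity (Get-Folder -NoRecursion) `
                 -Principal 'LAB\Chuck' `
                 -Role (Get-VIRole -Name Admin) `
                 -Propagate:$true
```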
Fixing the VPXD must be stopped to perform this operation message

It has been mentioned several times in this article that the vCenter Server Appliance (VCSA) is the direction VMware is moving in when it comes to managing vCenter. As the number of administrators using it keeps increasing, the number of problems will also increase. One of the components an administrator might have problems with is the vCenter Server service (vpxd). This service should not be running during any changes to the database or the account settings. However, as with most vSphere components, there are times when something happens and you need to stop or start a service in order to fix the problem. There are times when an administrator who works within the VCSA appliance encounters the following error:

This service can be stopped using the web console, by performing the following steps:

Log into the console using https://ip-of-vcsa:5480.
Enter your username and password.
Choose vCenter Server after logging in.
Make sure the Summary tab is selected.
Click on the Stop button to stop the server.

This should work most of the time, but if you find that using the web console is not working, then you need to log into the VCSA appliance directly and use the following procedure to stop the server:

Connect to the appliance by using an SSH client such as PuTTY or mRemote.
Type the command chkconfig. This will list all the services and their current status.
Verify that vmware-vpxd is on.
Stop the service by using the service vmware-vpxd stop command.

After completing your work, you can start the server using one of the following methods:

Restart the VCSA appliance
Use the web console by clicking on the Start button on the vCenter Summary page
Type service vmware-vpxd start on the SSH command line

This should fix the issues that occur when you see the VPXD must be stopped to perform this operation message.

Removing unwanted plugins in vSphere

Administrators add and remove tools from their environment based on their needs and also the life of the tool. This is no different for the vSphere environment. As the needs of the administrator change, so does the usage of the plugins used in vSphere. The following section can be used to remove any unwanted plugins from your current vCenter.
So, if you have plugins that are no longer needed, use the following procedure to remove them:

Log into your vCenter using https://vCenter_name_or_IP_address/mob and enter your username and password.
Click on the content link under Properties.
Click on ExtensionManager, which is found in the VALUE column.
Highlight, right-click, and copy the name of the extension to be removed. Check out Knowledge Base article 1025360, found at http://kb.vmware.com/kb/1025360, to get an overview of the plugins and their names.
Select UnregisterExtension near the bottom of the page.
Right-click on the plugin name and paste it into the Value field.
Click on Invoke Method to remove the plugin.

This will give you the Method Invocation Result: void message. This message informs you that the selected plugin has been removed. You can repeat this process for each plugin that you want to remove.
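The same cleanup can be scripted with PowerCLI through the ExtensionManager object that the MOB exposes. This is a hedged sketch; the server name and extension key are placeholders, so list the extensions first and copy the exact key you want to remove:

```powershell
# List registered extensions, then unregister one by its key.
Connect-VIServer -Server vcenter.lab.local   # hypothetical server name

$extMgr = Get-View ExtensionManager
$extMgr.ExtensionList |
    Select-Object Key, @{Name = 'Label'; Expression = { $_.Description.Label }}

# Replace the key below with the plugin key you copied from the list above.
$extMgr.UnregisterExtension('com.example.unwanted.plugin')
```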
Summary

In this article, we covered some of the not-so-common challenges an administrator could encounter in the vSphere environment. It provided the troubleshooting steps along with the solutions to the following issues:

Seeing NO objects after logging into vCenter with the account you used to install it
How to get past the VPXD must be stopped error when you are performing certain tasks within vCenter
Removing the unwanted plugins from vCenter Server

Resources for Article:

Further resources on this subject:
Availability Management [article]
The Design Documentation [article]
Design, Install, and Configure [article]
Integration with System Center Operations Manager 2012 SP1

Packt
17 May 2013
9 min read
(For more resources related to this topic, see here.)

This article provides tips and techniques to allow administrators to integrate Operations Manager 2012 with Virtual Machine Manager 2012 to monitor the health and performance of virtual machine hosts and their virtual machines, as well as to use the Operations Manager reporting functionality. In a hybrid hypervisor environment (for example, Hyper-V and VMware), using Operations Manager management packs (MPs) such as the Veeam MP, you can monitor both the Hyper-V hosts and the VMware hosts, which allows you to use only the System Center console to manage and monitor the hybrid hypervisor environment. You can also monitor the health and availability of the VMM infrastructure, management, database, and library servers. The following screenshot will show you the diagram views of the virtualized environment through Operations Manager:

Installing System Center Operations Manager 2012 SP1

This recipe will guide you through the process of installing System Center Operations Manager for integration with VMM. Operations Manager has integrated product and company knowledge for proactive tuning. It also allows the user to monitor the OS, applications, and services, provides out-of-the-box network monitoring and reporting, and offers many more features, with extensibility through management packs, thus providing cross-platform visibility. The deployment used in this recipe assumes a small environment with all components being installed on the same server. For datacenters and enterprise deployments, it is recommended to distribute the features and services across multiple servers to allow for scalability. For a complete design reference and complex implementation of SCOM 2012, follow the Microsoft Operations Manager deployment guide available at http://go.microsoft.com/fwlink/?LinkId=246682. When planning, use the Operations Guide for System Center 2012—Operations Manager (http://go.microsoft.com/fwlink/p/?LinkID=207751) to determine the hardware requirements.

Getting ready

Before starting, check out the system requirements and design planning for System Center Operations Manager 2012 SP1 at http://technet.microsoft.com/en-us/library/jj656654.aspx. My recommendation is to deploy on Windows Server 2012 and SQL Server 2012 SP1.

How to do it...

Carry out the following steps to install Operations Manager 2012 SP1:

Browse to the SCOM installation folder and click on Setup.
Click on Install.
On the Select the features to install page, select the components that apply to your environment, and then click on Next as shown in the following screenshot: The recommendation is to have a dedicated server, but it all depends on the size of the deployment. You can select all of the components to be installed on the same server for a small deployment.
Type in the location where you'd like to install Operations Manager 2012 SP1, or accept the default location, and click on Next.
The installation will check if your system has passed all of the requirements. A screen showing the issues will be displayed if any of the requirements are not met, and you will be asked to fix and verify them again before continuing with the installation, as shown in the following screenshot:
If all of the prerequisites are met, click on Next to proceed with the setup.
On the Specify an installation option page, if this is the first Operations Manager, select the Create the first Management Server in a new management group option and provide a value in the Management group name field.
Otherwise, select the Add a management server to an existing management group option as shown in the following screenshot:

Click on Next to continue, accept the EULA, and click on Next.
On the Configure the operational database page, type in the server and instance name of the server and the SQL Server port number. It is recommended to keep the default values in the Database name, Database size (MB), Data file folder, and Log file folder boxes. Click on Next. The installation account needs DB owner rights on the database.
On the SQL Server instance for Reporting Services page, select the instance where you want to host the Reporting Services (SSRS). Make sure the SQL Server has the SQL Server Full-Text Search and Analysis server components installed.
On the Configure Operations Manager accounts page, provide the domain account credentials (for example, lab\svc-scom) for the Operations Manager services. You can use a single domain account. For account requirements, see the Microsoft Operations Manager deployment guide at http://go.microsoft.com/fwlink/?LinkId=246682.
On the Help improve System Center 2012 – Operations Manager page, select the desired options and click on Next.
On the Installation Summary page, review the options, click on Install, and then click on Close. The Operations Manager console will open.

How it works...

When deploying SCOM 2012, it is important to consider the placement of the components. Work on the SCOM design before implementing it; see the OpsMgr 2012 Design Guide available at http://blogs.technet.com/b/momteam/archive/2012/04/13/opsmgr-2012-design-guide.aspx.

On the Configure Operational Database page, if you are installing the first management server, a new operational database will be created. If you are installing additional management servers, an existing database will be used. On the SQL Server instance for Reporting Services page, make sure you have previously configured Reporting Services at SQL setup using the Reporting Services Configuration Manager tool, and that the SQL Server Agent is running.

During the OpsMgr setup, you will be required to provide the Management Server Action Account credentials and the System Center Configuration service and System Center Data Access service account credentials too. The recommendation is to use a domain account so that you can use the same account for both services. The setup will automatically assign the local computer Administrators group to the Operations Manager administrator's role. The single-server scenario combines all roles onto a single instance and supports the following services: monitoring and alerting, reporting, audit collection, agentless exception management, and data. If you are planning to monitor the network, it is recommended to move the SQL Server tempdb database to a separate disk that has multiple spindles.

There's more...

To confirm the health of the management server, carry out the following steps:

In the OpsMgr console, click on the Administration workspace.
In Device Management, select Management Servers to confirm that the installed server has a green check mark in the Health State column.

See also

The Deploying System Center 2012 – Operations Manager article available at http://technet.microsoft.com/en-us/library/hh278852.aspx

Installing management packs

After installing Operations Manager, you need to install some management packs and agents on the Hyper-V servers and on the VMM server.
This recipe will guide you through the installation, but first make sure you have installed the Operations Manager Operations console on the VMM management server. You need to import the following management packs for the VMM 2012 SP1 integration:

Windows Server operating system
Windows Server 2008 operating system (Discovery)
Internet Information Services 2003
Internet Information Services 7
Internet Information Services library
SQL Server Core Library

Getting ready

Before you begin, make sure the correct version of PowerShell is installed, that is, PowerShell v2 for SC 2012 and PowerShell v3 for SC 2012 SP1.

How to do it...

Carry out the following steps to install the required MPs in order to integrate with VMM 2012 SP1:

In the OpsMgr console, click on the Administration workspace on the bottom-left pane.
On the left pane, right-click on Management Packs and click on Import Management Packs.
In the Import Management Packs wizard, click on Add, and then click on Add from catalog.
In the Select Management Packs from Catalog dialog box, repeat steps 5 to 7 for each of the management packs listed above. There are numerous management packs for Operations Manager; you can use this recipe to install other OpsMgr MPs from the catalog web service. You can also download the MPs from the Microsoft System Center Marketplace, which contains the MPs and documentation from Microsoft and some non-Microsoft companies; save them to a shared folder and then import them. See http://systemcenter.pinpoint.microsoft.com/en-US/home.
In the Find field, type in the management pack to search for in the online catalog and click on Search. The Management packs in the catalog list will show all of the packs that match the search criterion.
To import, select the management pack, click on Select, and then click on Add as shown in the following screenshot: In the View section, you can refine the search by selecting, for example, to show only those management packs released within the last three months. The default view lists all of the management packs in the catalog.
Click on OK after adding the required management packs.
On the Select Management Packs page, the MPs will be listed with either a green, a yellow, or a red icon. The green icon indicates that the MP can be imported. The yellow information icon means that it is dependent on other MPs that are available in the catalog; you can fix the dependency by clicking on Resolve. The red error icon indicates that it is dependent on other MPs that are not available in the catalog. Click on Import if all management packs have a green icon status.
On the Import Management Packs page, the progress for each management pack will be displayed. Click on Close when the process is finished.
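If you have a console with the OperationsManager PowerShell module available, imports can also be scripted. The following is a hedged sketch, assuming the management packs have already been downloaded to a local folder (the path is an example):

```powershell
# Import every management pack file found in a local folder, then verify.
Import-Module OperationsManager

Get-ChildItem -Path 'C:\ManagementPacks' -Include *.mp, *.xml -Recurse |
    ForEach-Object { Import-SCOMManagementPack -Fullname $_.FullName }

# Confirm the packs are now present in the management group.
Get-SCOMManagementPack | Where-Object { $_.DisplayName -like '*SQL*' }
```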
How it works...

You can import the management packs available for Operations Manager using the following methods:

The OpsMgr console: From the Management Packs menu of the Administration workspace, you can import directly from Microsoft's online catalog, import from a disk/share, or download a management pack from the online catalog to import at a later time.
An Internet browser: You can download the management pack from the online catalog to import at a later time, or to install on an OpsMgr server that is not connected to the Internet.

While using the OpsMgr console, verify that all management packs show a green status. Any MP displaying the yellow information icon or the red error icon in the import list will not be imported. If there is no Internet connection on the OpsMgr server, use an Internet browser to locate and download the management pack to a folder/share, then copy the management pack to the OpsMgr server and use the option to import from disk/share.

See also

The Installing System Center Operations Manager 2012 SP1 recipe
Visit Microsoft System Center Marketplace available at http://go.microsoft.com/fwlink/?LinkId=82105
Introduction to Veeam® Backup & Replication for VMware

Packt
16 Apr 2014
9 min read
(For more resources related to this topic, see here.)

Veeam Backup & Replication v7 for VMware is a modern solution for data protection and disaster recovery for virtualized VMware vSphere environments of any size. It supports VMware vSphere and VMware Infrastructure 3 (VI3), up to and including the latest version, VMware vSphere 5.5, with Microsoft Windows Server 2012 R2 supported as the management server(s). Its modular approach and scalability make it an obvious choice regardless of the environment size or complexity. As your data center grows, Veeam Backup & Replication grows with it to provide complete protection for your environment. Remember, your backups aren't really that important, but your restore is!

Backup strategies

A common train of thought when dealing with backups is to follow the 3-2-1 rule:

3: Keep three copies of your data, one primary and two backups
2: Store the data on two different media types
1: Store at least one copy offsite

This simple approach ensures that no matter what happens, you will be able to have a recoverable copy of your data. Veeam Backup & Replication lets you accomplish this goal by utilizing backup copy jobs. Back up your production environment once, then use the backup copy jobs to copy the backed-up data to a secondary location, utilizing the built-in WAN acceleration features, and to tape for long-term archival. You can even "daisy-chain" these jobs to each other, which ensures that as soon as the backup job is finished, the copy jobs are fired automatically. This allows you to easily accomplish the 3-2-1 rule without the need for complex configurations that are hard to manage.

Combining this with a Grandfather-Father-Son (GFS) backup media rotation scheme for tape-based archiving ensures that you always have recoverable media available. In such a scheme, there are three, or more, backup cycles: daily, weekly, and monthly. The following table shows how you might create a GFS rotation schedule:

Monday    Tuesday    Wednesday    Thursday    Friday
MON       TUE        WED          THU         WEEK 1
MON       TUE        WED          THU         WEEK 2
MON       TUE        WED          THU         WEEK 3
MON       TUE        WED          THU         WEEK 4
                                              MONTH 1

"Grandfather" tapes are kept for a year, "Father" tapes for a month, and "Son" tapes for a week. In addition, quarterly, half-yearly, and/or annual backups could also be separately retained if required.

Recovery point objective and recovery time objective

Both these terms come into play when defining your backup strategy. The recovery point objective (RPO) is a definition of how much data you can afford to lose. If you run backups every 24 hours, you have, in effect, defined that you can afford to lose up to a day's worth of data for a given application or infrastructure. If that is not the case, you need to have a look at how often you back up that particular application. The recovery time objective (RTO) is a measure of the amount of time it should take to restore your data and return the application to a steady state. How long can your business afford to be without a given application? 2 hours? 24 hours? A week?

It all depends, and it is very important that you, as a backup administrator, have a clear understanding of the business you are supporting to evaluate these important parameters. Basically, it boils down to this: if there is a disaster, how much downtime can your business afford? If you don't know, talk to the people in your organization who know. Gather information from the various business units to assist in determining what they consider acceptable.
Odds are that your views as an IT professional might not coincide with the views of the business units; determine their RPO and RTO values, and define a backup strategy based on that.

Native tape support

By popular demand, native tape support was introduced in Veeam Backup & Replication v7. While the most effective method of backup might be disk based, lots and lots of customers still want to make use of their existing investment in tape technology. Standalone drives, tape libraries, and Virtual Tape Libraries (VTLs) are all supported and make it possible to use tape-based solutions for long-term archival of backup data. Basically, any tape device recognized by the Microsoft Windows server on which Backup & Replication is installed is also supported by Veeam: if Microsoft Windows recognizes the tape device, so will Backup & Replication. It is recommended that customers check the user guide and the Veeam Forums (http://forums.veeam.com) for more information on native tape support.

Veeam Backup & Replication architecture

Veeam Backup & Replication consists of several components that together make up the complete architecture required to protect your environment. This distributed backup architecture leaves you in full control over the deployment, and the licensing options make it easy to scale the solution to fit your needs. Since it works at the VM layer, it uses advanced technologies such as VMware vSphere Changed Block Tracking (CBT) to ensure that only the data blocks that have changed since the last run are backed up, ensuring that the backup is performed as quickly as possible and that the least amount of data needs to be transferred each time.

By talking directly to the VMware vStorage APIs for Data Protection (VADP), Veeam Backup & Replication can back up VMs without the need to install agents or otherwise touch the VMs directly. It simply tells the vSphere environment that it wants to take a backup of a given VM; vSphere then creates a snapshot of the VM, and the VM is read from the snapshot to create the backup. Once the backup is finished, the snapshot is removed, and changes that happened to the VM while it was backed up are rolled back into the production VM. By integrating with VMware Tools and Microsoft Windows VSS, application-consistent backups are provided, if available, in the VMs that are being backed up. For Linux-based VMs, VMware Tools are required, and their native quiescence option is used.

Not only does it let you back up your VMs and restore them if required, but you can also use it to replicate your production environment to a secondary location. If your secondary location has a different network topology, it helps you remap and re-IP your VMs in case there is a need to fail over a specific VM or even an entire datacenter. Of course, failback is also available once the reason for the failover is rectified and normal operations can resume.

Veeam Backup & Replication components

The Veeam Backup & Replication suite consists of several components, which, in combination, make up the backup and replication architecture.

Veeam backup server: This is installed on a physical or virtual Microsoft Windows server. The Veeam backup server is the core component of an implementation, and it acts as the configuration and control center that coordinates backup, replication, recovery verification, and restore tasks. It also controls job scheduling and resource allocation, and is the main entry point for configuring the global settings of the backup infrastructure. The backup server uses the following services and components (a short scripting example follows this list):

Veeam Backup Service: This is the main component that coordinates all operations, such as backup, replication, recovery verification, and restore tasks.
Veeam Backup Shell: This is the application user interface.
Veeam Backup SQL Database: This is used by the other components to store data about the backup infrastructure, backup and restore jobs, and component configuration. This database instance can be installed locally or on a remote server.
Veeam Backup PowerShell Snap-in: These are extensions to Microsoft Windows PowerShell that add a set of cmdlets for management of backup, replication, and recovery tasks through automation.
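As a small, hedged illustration of that last component, the following sketch loads the snap-in and starts an existing backup job; the job name is an example, and cmdlet behavior may differ between Veeam versions:

```powershell
# Load the Veeam snap-in, find a backup job by name, and start it.
Add-PSSnapin VeeamPSSnapin

$job = Get-VBRJob -Name "Production Backup"   # hypothetical job name
Start-VBRJob -Job $job
```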
Backup proxy

Backup proxies are used to offload the Veeam backup server and are essential as you scale your environment. Backup proxies can be seen as data movers, physical or virtual, that run a subset of the components required on the Veeam backup server. These components, which include the Veeam transport service, can be installed in a matter of seconds and are fully automated from the Veeam backup server. You can deploy and remove proxy servers as you see fit, and Veeam Backup & Replication will distribute the backup workload between the available backup proxies, thus reducing the load on the backup server itself and increasing the number of simultaneous backup jobs that can be performed.

Backup repository

A backup repository is just a location where Veeam Backup & Replication can store backup files, copies of VMs, and metadata. Simply put, it's nothing more than a folder on the assigned disk-based backup storage. Just as you can offload the backup server with multiple proxies, you can add multiple repositories to your infrastructure and direct backup jobs to them to balance the load. The following repository types are supported:

Microsoft Windows or Linux server with local or directly attached storage: Any storage that is seen as local/directly attached storage on a Microsoft Windows or Linux server can be used as a repository. That means that there is great flexibility when it comes to selecting repository storage; it can be locally installed storage, iSCSI/FC SAN LUNs, or even locally attached USB drives. When a server is added as a repository, Veeam Backup & Replication deploys and starts the Veeam transport service, which takes care of the communication between the source-side transport service on the Veeam backup server (or proxy) and the repository. This ensures efficient data transfer over both LAN and WAN connections.

Common Internet File System (CIFS) shares: CIFS (also known as Server Message Block (SMB)) shares are a bit different, as Veeam cannot deploy transport services to a network share directly. To work around this, the transport service installed on a Microsoft Windows proxy server handles the connection between the repository and the CIFS share.

Summary

In this article, we learned about various backup strategies and also went through the components of Veeam® Backup & Replication.

Resources for Article:

Further resources on this subject:
VMware vCenter Operations Manager Essentials - Introduction to vCenter Operations Manager [article]
Use Of ISO Image for Installation of Windows8 Virtual Machine [article]
An Introduction to VMware Horizon Mirage [article]
Configuring organization network services

Packt
22 Aug 2014
9 min read
This article by Lipika Pal, the author of the book VMware vCloud Director Essentials, teaches you to configure organization network services. Since the release of the vCloud Networking and Security suite 5.1, Edge devices can be used as DNS relay hosts. However, before we jump into how to do it and why you should do it, let us discuss the DNS relay host technology itself.

(For more resources related to this topic, see here.)

When your client machines want to send DNS queries, they contact the DNS relay, which is simply a host that forwards those queries on their behalf. The relay host sends the queries to the provider's DNS server, or to any other entity specified in the Edge device settings. The answer received by the Edge device is then sent back to the machines. The Edge device also stores the answer for a short period of time, so any other machine in your network searching for the same address receives the answer directly from the Edge device without having to ask Internet servers again. In other words, the Edge device has a small memory, called the DNS cache, that remembers the queries. The following diagram illustrates one such setup and its workings:

In this example, you see an external interface configured on the Edge device to act as a DNS relay interface. On the client side, we configured the Client1 VM to use the internal IP of the Edge device (192.168.1.1) as its DNS server entry. In this setup, Client1 requests DNS resolution (step 1) for the external host, google.com, from the Edge gateway's internal IP. To resolve google.com, the Edge device queries its configured DNS servers (step 2) and returns that resolution to Client1 (step 3).

Typical uses of this feature are as follows:

DMZ environments
Multi-tenant environments
Accelerated resolution time
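If you want to verify the relay from a Windows guest such as Client1 once it is configured, a quick hedged check is shown below; the IP address is taken from the example topology above, and Resolve-DnsName requires Windows 8/Server 2012 or later:

```powershell
# Ask the Edge internal IP (acting as DNS relay) to resolve an external name.
Resolve-DnsName -Name google.com -Server 192.168.1.1

# A second lookup for the same name is typically answered from the Edge DNS cache.
```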
Configuring DNS relay

To configure DNS relay in a vShield Edge device, perform the following steps. You can configure DNS relay when creating an Edge device or on an existing Edge device. This is an option for an organization gateway and not for a vApp or Org network. Now, let's create an Edge gateway in an organization vDC while enabling DNS relay by executing the following steps:

Open the vCloud Director URL in a supported browser, for example, https://serverFQDN/cloud.
Log in to the cloud as the administrator. You will be presented with the Home screen.
Click on the Organization VDCs link and on the right-hand side, you will see some organization vDCs created. Click on any organization vDC. Doing this will take you to the vDC page.
Click on the Administration page and double-click on Virtual Datacenter. Then click on the Edge Gateways tab.
Click on the green-colored + sign as shown in the following screenshot:
On the Configure Edge Gateway screen, click on the Configure IP Settings section. Use the other default settings and click on Next.
On the Configure External Networks screen, select the external network and click on Add. On this same screen, you will see the Use default gateway for DNS Relay checkbox; select it and click on Next, as shown in the following screenshot:
Select the default value on the Configure IP Settings page and click on Next.
Specify a name for this Edge gateway and click on Next.
Review the information and click on Finish.

Let's look at an alternative way to configure this, assuming you already have an Edge gateway and are trying to configure DNS relay. Execute the following steps to configure it:

Open the vCloud Director URL in a supported browser, for example, https://serverFQDN/cloud.
Log in to the cloud as the administrator. You will be presented with the Home screen.
On the Home screen, click on Edge Gateways.
Select an appropriate Edge gateway, right-click, and select Properties, as shown in the following screenshot:
Click on the Configure External Networks tab.
Scroll down and select the Use default gateway for DNS Relay checkbox, as shown in the following screenshot:
Click on OK.

In this section, we learned to configure DNS relay. In the next section, we discuss the configuration of a DHCP service in vCloud Director.

DHCP services in vCloud Director

vShield Edge devices support IP address pooling using the DHCP service. The vShield Edge DHCP service listens on the vShield Edge internal interface for DHCP discovery. It uses the internal interface's IP address on vShield Edge as the default gateway address for all clients, and the broadcast and subnet mask values of the internal interface for the container network. However, when you translate this to vCloud, not all types of networks support DHCP: a directly connected network does not support DHCP, so only routed and isolated networks support the vCNS DHCP service. The following diagram illustrates a routed organization vCD network:

In the preceding diagram, the DHCP service provides an IP address from the Edge gateway to the Org networks connected to it. The following diagram shows how a vApp is connected to a routed external network and gets a DHCP service:

The following diagram shows a vApp network with a vApp connected to it, and a DHCP IP address being obtained from the vShield Edge device:

Configuring DHCP pools in vCloud Director

The following actions are required to set up Edge DHCP:

Add DHCP IP pools
Enable Edge DHCP services

As a prerequisite, you should know which Edge device is connected to which Org vDC network. Execute the following steps to configure a DHCP pool:

Open up a supported browser. Go to the URL of the vCD server; for example, https://serverFQDN/cloud.
Log in to vCD by typing an administrator user ID and password.
Click on the Edge Gateways link.
Select the appropriate gateway, right-click on it, and select Edge Gateway Services, as shown in the following screenshot:
The first service is DHCP, as shown in the following screenshot:
Click on Add.
From the drop-down combobox, select the network that you want the DHCP to be applied on.
Specify the IP range.
Select Enable Pool and click on OK, as shown in the following screenshot:
Click on the Enable DHCP checkbox and then on OK.

In this section, we learned about the DHCP pool, its functionality, and how to configure it.

Understanding VPN tunnels in vCloud Director

It's imperative that we first understand the basics of VPN tunnels in the cloud, then move on to a use case, and then learn to configure a VPN tunnel. A VPN tunnel is an encrypted, or more precisely, encapsulated, network path on a public network. This is often used to connect two different corporate sites via the Internet. In vCloud Director, you can connect two organizations through an external network, which can also be used by other organizations. The VPN tunnel prevents users in other organizations from being able to monitor or intercept communications. VPNs must be anchored at both ends by some kind of firewall or VPN device. In vCD, the VPNs are facilitated by vShield Edge devices. When two systems are connected by a VPN tunnel, they communicate as if they were on the same network.
Let's have a look at the different types of VPN tunnels you can create in vCloud Director:

VPN tunnels between two organization networks in the same organization
VPN tunnels between two organization networks in two different organizations
VPN tunnels between an organization network and a remote network outside of VMware vCloud

While only a system administrator can create an organization network, organization administrators have the ability to connect organization networks using VPN tunnels. If the VPN tunnel connects two different organizations, then the organization administrator from each organization must enable the connection. A VPN cannot be established between two different organizations without the authorization of either both organization administrators or the system administrator. It is also possible to connect VPN tunnels between two different organizations in two different instances of vCloud Director.

The following is a diagram of a VPN connection between two different organization networks in a single organization:

The following diagram shows a VPN tunnel between two organizations. The basic principles are exactly the same.

vCloud Director can also connect VPN tunnels to remote devices outside of vCloud. These devices must be IPSec-enabled and can be network switches, routers, firewalls, or individual computer systems. This ability to establish a VPN tunnel to a device outside of vCD can significantly increase the flexibility of vCloud communications. The following diagram illustrates a VPN tunnel to a remote network:

Configuring a virtual private network

To configure an organization-to-organization VPN tunnel in vCloud Director, execute the following steps:

Start a browser and insert the URL of the vCD server into it, for example, https://serverFQDN/cloud.
Log in to vCD using the administrator user ID and password.
Click on the Manage & Monitor tab.
Click on the Edge Gateways link in the panel on the left-hand side.
Select an appropriate gateway, right-click, and select Edge Gateway Services.
Click on the VPN tab.
Click on Configure Public IPs.
Specify a public IP and click on OK, as shown in the following screenshot:
Click on Add to add the VPN endpoint.
Click on Establish VPN to and specify an appropriate VPN type (in this example, it is the first option), as shown in the following screenshot:
If this VPN is within the same organization, select the Peer Edge Gateway option from the dropdown. Then, select the local and peer networks.
Select the local and peer endpoints.
Now click on OK.
Click on Enable VPN and then on OK.

This section assumes that either the firewall service is disabled or the default rule is set to accept all on both sides. In this section, we learned what a VPN is and how to configure it within a vCloud Director environment. In the next section, we discuss static routing, along with various use cases and implementation.
Getting Started with Hyper-V Architecture and Components

Packt
04 Jun 2015
19 min read
In this article by Vinícius R. Apolinário, author of the book Learning Hyper-V, we will cover the following topics:

Hypervisor architecture
Type 1 and 2 Hypervisors
Microkernel and Monolithic Type 1 Hypervisors
Hyper-V requirements and processor features
Memory configuration
Non-Uniform Memory Access (NUMA) architecture

(For more resources related to this topic, see here.)

Hypervisor architecture

If you've used Microsoft Virtual Server or Virtual PC, and then moved to Hyper-V, I'm almost sure that your first impression was: "Wow, this is much faster than Virtual Server". You are right. And there is a reason why Hyper-V performance is much better than Virtual Server or Virtual PC. It's all about the architecture.

There are two types of Hypervisor architectures: Hypervisor Type 1, like Hyper-V and ESXi from VMware, and Hypervisor Type 2, like Virtual Server, Virtual PC, VMware Workstation, and others. The objective of the Hypervisor is to execute, manage, and control the operation of the VM on a given piece of hardware. For that reason, the Hypervisor is also called the Virtual Machine Monitor (VMM). The main difference between these Hypervisor types is the way they operate on the host machine and its operating systems. As Hyper-V is a Type 1 Hypervisor, we will cover Type 2 first, so we can detail Type 1 and its benefits later.

Type 1 and Type 2 Hypervisors

Hypervisor Type 2, also known as hosted, is an implementation of the Hypervisor over and above the OS installed on the host machine. With that, the OS will impose some limitations on the Hypervisor, and these limitations are going to reflect on the performance of the VM. To understand that, let me explain how a process is placed on the processor: the processor has what we call rings, on which processes are placed based on prioritization. The main rings are 0 and 3. Kernel processes are placed on Ring 0, as they are vital to the OS. Application processes are placed on Ring 3 and, as a result, have less priority when compared to Ring 0. The issue with Type 2 Hypervisors is that the Hypervisor is considered an application and runs on Ring 3. Let's have a look at it:

As you can see from the preceding diagram, the hypervisor has an additional layer to access the hardware. Now, let's compare it with Hypervisor Type 1:

The impact is immediate. As you can see, Hypervisor Type 1 has total control of the underlying hardware. In fact, when you enable Virtualization Assistance (hardware-assisted virtualization) in the server BIOS, you are enabling what we call Ring -1, or ring decompression, on the processor, and the Hypervisor will run on this ring. The question you might have is: "And what about the host OS?" If you install the Hyper-V role on a Windows Server for the first time, you may note that after installation, the server will restart. But, if you're really paying attention, you will note that the server actually reboots twice. This behavior is expected, and the reason it happens is that the OS is not only installing and enabling the Hyper-V bits, but also changing its architecture to that of a Type 1 Hypervisor. In this mode, the host OS operates in the same way a VM does, on top of the Hypervisor, but in what we call the parent partition. The parent partition plays a key role as the boot partition and in supporting the child partitions, or guest OSes, where the VMs are running. The main reason for this partition model is the key attribute of a Hypervisor: isolation.
For Microsoft Hyper-V Server, you don't have to install the Hyper-V role, as it is installed when you install the OS, so you won't see the server booting twice.

With isolation, you can ensure that a given VM will never have access to another VM. That means that if you have a compromised VM, with isolation, the VM will never infect another VM or the host OS. The only way a VM can access another VM is through the network, like all other devices in your network. Actually, the same is true for the host OS. This is one of the reasons why you need an antivirus for the host and the VMs, but this will be discussed later. The major difference between Type 1 and Type 2 now is that kernel processes from both the host OS and the VM OS run on Ring 0, and application processes from both the host OS and the VM OS run on Ring 3. However, there is one piece left. The question now is: "What about device drivers?"

Microkernel and Monolithic Type 1 Hypervisors

Have you tried to install Hyper-V on a laptop? What about an all-in-one device? A PC? A server? An x64-based tablet? They all worked, right? And they're supposed to work. As Hyper-V is a Microkernel Type 1 Hypervisor, all the device drivers are hosted on the parent partition. A Monolithic Type 1 Hypervisor hosts its drivers on the Hypervisor itself. VMware ESXi works this way. That's why you should never use standard ESXi media to install an ESXi host; the hardware manufacturer will provide you with appropriate media containing the correct drivers for the specific hardware.

The main advantage of the Monolithic Type 1 Hypervisor is that, as it always has the correct driver installed, you will never have a performance issue due to an incorrect driver. On the other hand, you won't be able to install it on just any device. The Microkernel Type 1 Hypervisor, on the other hand, hosts its drivers on the parent partition. That means that if you installed the host OS on a device and the drivers are working, the Hypervisor, in this case Hyper-V, will work just fine. There are other hardware requirements; these will be discussed later in this article. The other side of this is that if you use a generic driver, or the wrong version of it, you may have performance issues, or even driver malfunction. What you have to keep in mind here is that Microsoft does not certify drivers for Hyper-V. Device drivers are always certified for Windows Server; if the driver is certified for Windows Server, it is also certified for Hyper-V. But you always have to ensure the use of the correct driver for a given piece of hardware.

Let's take a better look at how Hyper-V works as a Microkernel Type 1 Hypervisor:

As you can see from the preceding diagram, there are multiple components that ensure that the VM runs perfectly. However, the major component is the Integration Components (IC), also called Integration Services. The IC is a set of tools that you should install or upgrade on the VM so that the VM OS is able to detect the virtualization stack and run as a regular OS on the given hardware. To understand this more clearly, let's see how an application accesses the hardware and all the processes behind it. When an application tries to send a request to the hardware, the kernel is responsible for interpreting this call. As this OS is running on an Enlightened Child Partition (meaning the IC is installed), the kernel sends this call to the Virtual Service Client (VSC), which operates as a synthetic device driver.
The VSC is responsible for communicating with the Virtual Service Provider (VSP) on the parent partition, through the VMBus, so the VSC can use the hardware resource. The VMBus, a channel-based communication mechanism, is responsible for the communication between the child partitions, the parent partition, and the hardware. For the VMBus to access the hardware, it communicates directly with a component on the Hypervisor called hypercalls. These hypercalls are then redirected to the hardware. However, only the parent partition can actually access the physical processor and memory; the child partitions access a virtual view of these components that is translated between the guest and host partitions.

New processors have a feature called Second Level Address Translation (SLAT), or Nested Paging. This feature is extremely important on high-performance VMs and hosts, as it helps reduce the overhead of virtual-to-physical memory and processor translation. On Windows 8, SLAT is a requirement for Hyper-V.

It is important to note that Enlightened Child Partitions, or partitions with the IC, can run Windows or Linux OSes. If a child partition runs a Linux OS, the component is called Linux Integration Services (LIS), but the operation is actually the same. Another important fact regarding the IC is that it is already present on Windows Server 2008 or later. But if you are running a newer version of Hyper-V, you have to upgrade the IC version on the VM OS. For example, if you are running Hyper-V 2012 R2 on the host and the guest OS is Windows Server 2012 R2, you probably don't have to worry about it. But if you are running Hyper-V 2012 R2 on the host and the guest OS is Windows Server 2012, then you have to upgrade the IC on the VM to match the parent partition version. Running a Windows Server 2012 R2 guest OS on a VM on top of Hyper-V 2012 is not recommended. For Linux guest OSes, the process is the same. Linux kernel version 3 or later already has LIS installed. If you are running an older version of Linux, you should verify the correct LIS version for your OS. To confirm the Linux and LIS versions, you can refer to an article at http://technet.microsoft.com/library/dn531030.aspx.
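A quick way to see which Integration Services version each VM reports to the host is the Hyper-V PowerShell module; this is a hedged sketch for a Hyper-V 2012 R2 host:

```powershell
# List every VM on the host with the IC version its guest OS currently reports.
Get-VM | Select-Object Name, State, IntegrationServicesVersion
```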
There are other requirements as well, which are as follows:

Virtualization assistance (also known as hardware-assisted virtualization): This feature was created to remove the need to change the OS in order to virtualize it. On Intel processors, it is known as Intel VT-x. All recent processor families support this feature, including Core i3, Core i5, and Core i7. The complete list of processors and features can be found at http://ark.intel.com/Products/VirtualizationTechnology. You can also check whether your processor meets this requirement with a tool that can be downloaded at https://downloadcenter.intel.com/Detail_Desc.aspx?ProductID=1881&DwnldID=7838. On AMD processors, this technology is known as AMD-V. Like Intel, all recent processor families support this feature. AMD provides a tool to check processor compatibility that can be downloaded at http://www.amd.com/en-us/innovations/software-technologies/server-solution/virtualization.

Data Execution Prevention (DEP): This is a security feature that marks memory pages as either executable or non-executable. For Hyper-V to run, this option must be enabled in the system BIOS. On Intel processors, this feature is called Execute Disable (Intel XD bit); on AMD processors, it is called No Execute (AMD NX bit). The configuration varies from one system BIOS to another; check with your hardware vendor how to enable it.

x64 (64-bit) based processor: This processor feature uses 64-bit memory addresses. Although you may find that all new processors are x64, you might want to confirm this before starting your implementation. The compatibility checkers above, from Intel and AMD, will show you whether your processor is x64.

Second Level Address Translation (SLAT): As discussed before, SLAT is not a strict requirement for Hyper-V to work, but it provides much better performance on the VMs, as it removes the need to translate between physical and virtual pages of memory. It is highly recommended to have the SLAT feature on the processor, as it provides more performance on high-performance systems. As also discussed before, SLAT is a requirement if you want to use Hyper-V on Windows 8 or 8.1. To check whether your processor has the SLAT feature, use the Sysinternals tool Coreinfo, which can be downloaded at http://technet.microsoft.com/en-us/sysinternals/cc835722.aspx.

There are some specific processor features that are not used exclusively for virtualization, but when a VM is started, it will use these specific features of the processor. If the VM is started and these features are allocated to the guest OS, you can't simply remove them. This is a problem if you are going to Live Migrate this VM from one host to another: if these specific features are not available on the destination, you won't be able to perform the operation. At this moment, you have to understand that Live Migration moves a powered-on VM from one host to another. If you try to Live Migrate a VM between hosts with different processor types, you may be presented with an error. Live Migration is only permitted within the same processor vendor: Intel-Intel or AMD-AMD. Intel-AMD Live Migration is not allowed under any circumstance. If the processor is the same on both hosts, Live Migration and Share Nothing Live Migration will work without problems. But even within the same vendor, there can be different processor families. In this case, you can remove these specific features from the virtual processor presented to the VM. To do that, open Hyper-V Manager and navigate to Settings...
| Processor | Processor Compatibility, and mark the Migrate to a physical computer with a different processor version option. This option is only available when the VM is powered off. Keep in mind that enabling it will remove processor-specific features from the VM; if you are going to run an application that requires those features, they will not be available and the application may not run.

Now that you have checked all the requirements, you can start planning your server for virtualization with Hyper-V. That is true from the perspective of understanding how Hyper-V works and what it requires, but there is another important subject that you should pay attention to when planning your server: memory.

Memory configuration

I believe you have heard this one before: "The application server is underperforming." In the virtualization world, there is an obvious answer to it: give more virtual hardware to the VM. Although it seems to be the logical solution, the real effect can be the total opposite. In the early days, when servers had just a few sockets, processors, and cores, a single channel handled the communication between logical processors and memory. But server hardware has evolved, and today we have servers with 256 logical processors and 4 TB of RAM. To provide better communication between these components, modern servers with multiple logical processors and high amounts of memory use a design called the Non-Uniform Memory Access (NUMA) architecture.

Non-Uniform Memory Access (NUMA) architecture

NUMA is a memory design in which memory is allocated to a given node, a cluster of memory and logical processors. Accessing memory from a processor inside the node is notably faster than accessing memory from another node. If a processor has to access memory from another node, the performance of the process performing the operation will be affected. Basically, to avoid this penalty, you have to ensure that the process inside the guest VM is aware of the NUMA node and is able to use the best available option.

When you create a virtual machine, you decide how many virtual processors and how much virtual RAM the VM will have. Usually, you assign the amount of RAM that the application needs to run and meet the expected performance. For example, you may ask a software vendor about the application requirements, and the vendor will say that the application needs at least 8 GB of RAM. Suppose you have a server with 16 GB of RAM. What you don't know is that this server has four NUMA nodes. To know how much memory each NUMA node has, you divide the total amount of RAM installed on the server by the number of NUMA nodes on the system; the result is the amount of RAM in each NUMA node. In this case, each NUMA node has a total of 4 GB of RAM.

Following the software vendor's instructions, you create a VM with 8 GB of RAM. The Hyper-V default configuration is to allow NUMA spanning, so you will be able to create the VM and start it; Hyper-V will accommodate 4 GB of RAM on each of two NUMA nodes. NUMA spanning means that a processor can access the memory on another NUMA node. As mentioned earlier, this will have an impact on performance if the application is not aware of it. On Hyper-V versions prior to 2012, the guest OS was not informed about the NUMA configuration.
Basically, in that case, the guest OS would see one NUMA node with 8 GB of RAM, and the allocation of memory would be made without NUMA restrictions, impacting the final performance of the application. Hyper-V 2012 and 2012 R2 change this: the guest OS sees the virtual NUMA (vNUMA) topology presented to the child partition. With this feature, the guest OS and/or the application can make a better choice about where to allocate memory for each process running on the VM. NUMA is not a virtualization technology; in fact, it has been used for a long time, and applications such as SQL Server 2005 already used NUMA to better allocate the memory their processes use.

Prior to Hyper-V 2012, if you wanted to avoid this behavior, you had two choices:

Create the VM with at most the vRAM of a single NUMA node, as Hyper-V will always try to allocate the memory inside a single NUMA node. In the case above, the VM should not have more than 4 GB of vRAM. But for this configuration to really work, you should also follow the next choice.

Disable NUMA spanning on Hyper-V. With this configuration disabled, you will not be able to run a VM if its memory configuration exceeds a single NUMA node. To do this, clear the Allow virtual machines to span physical NUMA nodes checkbox under Hyper-V Manager | Hyper-V Settings... | NUMA Spanning. Keep in mind that disabling this option will prevent a VM from starting if no single node can accommodate it.

You should also remember that even with Hyper-V 2012, if you create a VM with 8 GB of RAM spanning two NUMA nodes, the application on top of the guest OS (and the guest OS itself) must understand the NUMA topology. If the application and/or guest OS are not NUMA aware, vNUMA will have no effect, and the application can still have performance issues.

At this point you are probably asking yourself, "How do I know how many NUMA nodes I have on my server?" This was harder to find in previous versions of Windows Server and Hyper-V Server. In versions prior to 2012, you had to open Performance Monitor and check the available counters under Hyper-V VM Vid NUMA Node; the number of instances represents the number of NUMA nodes. In Hyper-V 2012, you can check the settings of any VM: under the Processor tab, there is a new section for NUMA.

In Configuration, you can easily confirm how many NUMA nodes the host running the VM has. If the server has only 1 NUMA node, all memory will be allocated close to the processor; multiple NUMA nodes are usually present on servers with a high number of logical processors and a large amount of memory. In the NUMA topology section, you can ensure that the VM will always run with the specified configuration. This is exposed because of a new Hyper-V 2012 feature called Share Nothing Live Migration, which will be explained in detail later; it allows you to move a VM from one host to another without turning the VM off, with no cluster and no shared storage. As you can move the VM while it is turned on, you might want to pin the processor and memory configuration based on the hardware of your least capable server, ensuring that the VM will always meet your performance expectations. The Use Hardware Topology button reapplies the hardware topology in case you moved the VM to another host, or in case you changed the configuration and want to return to the default configuration.
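If you prefer to check and change these settings from the command line, the following is a minimal PowerShell sketch using the Hyper-V module on Windows Server 2012 or later. Treat it as illustrative rather than as the book's procedure:

Get-VMHostNumaNode                              # returns one object per NUMA node; count the results
Get-VMHost | Select-Object NumaSpanningEnabled  # is NUMA spanning currently allowed?
Set-VMHost -NumaSpanningEnabled $false          # equivalent to clearing the NUMA Spanning checkbox
Restart-Service vmms                            # the management service must restart for the change to apply

Counting the objects returned by Get-VMHostNumaNode is the scripted equivalent of counting counter instances in Performance Monitor on pre-2012 versions.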
To summarize, if you want to make sure that your VM will not have performance problems, check how many NUMA nodes your server has and divide the total amount of memory by that number; the result is the total memory on each node. Creating a VM with more memory than a single node makes Hyper-V present a vNUMA topology to the guest OS. Ensuring that the guest OS and applications are NUMA aware is also important, so that they can use this information to allocate memory for a process on the correct node. Paying attention to NUMA ensures that you will not run into problems caused by host configuration or VM misconfiguration. But in some cases, even with a well-planned VM size, you will come to a moment when the VM's memory is stressed. In these cases, Hyper-V can help with another feature called Dynamic Memory.

Summary

In this article, we learned about the Hypervisor architecture and the different Hypervisor types, and briefly explored Microkernel and Monolithic Type 1 Hypervisors. In addition, this article explained the Hyper-V requirements and processor features, memory configuration, and the NUMA architecture.


OpenVZ Container Administration

In this article by Mark Furman, the author of OpenVZ Essentials, we will cover the various aspects of OpenVZ administration, including the following:

Listing the containers that are running on the server
Starting, stopping, suspending, and resuming containers
Destroying, mounting, and unmounting containers
Setting quota on and off
Creating snapshots of the containers in order to back up and restore a container to another server

Using vzlist

The vzlist command is used to list the containers on a node. When you run vzlist on its own, without any options, it will only list the containers that are currently running on the system:

vzlist

In the previous example, we used the vzlist command to list the containers that are currently running on the server.

Listing all the containers on the server

If you want to list all the containers on the server instead of just the ones that are currently running, add -a after vzlist. This tells vzlist to include every container that has been created on the node in its output:

vzlist -a

In the previous example, we used the vzlist command with the -a flag to tell vzlist that we want to list all of the containers that have been created on the server.

The vzctl command

The next command we are going to cover is vzctl. This is the primary command you will use to perform tasks on the containers on the node. The initial functions of vzctl that we will go over are starting, stopping, and restarting a container.

Starting a container

We use vzctl to start a container on the node. To start a container, run the following command:

vzctl start 101
Starting Container ...
Setup slm memory limit
Setup slm subgroup (default)
Setting devperms 20002 dev 0x7d00
Adding IP address(es) to pool:
Adding IP address(es): 192.168.2.101
Hostname for Container set: gotham.example.com
Container start in progress...

In the previous example, we used the vzctl command with the start option to start the container 101.

Stopping a container

To stop a container, run the following command:

vzctl stop 101
Stopping container ...
Container was stopped
Container is unmounted

In the previous example, we used the vzctl command with the stop option to stop the container 101.

Restarting a container

To restart a container, run the following command:

vzctl restart 101
Stopping Container ...
Container was stopped
Container is unmounted
Starting Container...

In the previous example, we used the vzctl command with the restart option to restart the container 101.

Using vzctl to suspend and resume a container

The following set of commands uses vzctl to suspend and resume a container. When you suspend a container, vzctl creates a save point of the container in a dump file. You can then use vzctl to resume the container to the state it was in before it was suspended.

Suspending a container

To suspend a container, run the following command:

vzctl suspend 101

In the previous example, we used the vzctl command with the suspend option to suspend the container 101.

Resuming a container

To resume a container, run the following command:

vzctl resume 101

In the previous example, we used the vzctl command with the resume option to resume operations on the container 101.
In order to get resume or suspend to work, you may need to enable several kernel modules by running the following commands:

modprobe vzcpt
modprobe vzrst

Destroying a container

You can destroy a container that you created by using the destroy argument with vzctl. This removes all of the files created by the container, including its configuration file and directories. In order to destroy a container, you must first stop it from running. To destroy a container, run the following command:

vzctl destroy 101
Destroying container private area: /vz/private/101
Container private area was destroyed.

In the previous example, we used the vzctl command with the destroy option to destroy the container 101.

Using vzctl to mount and unmount a container

You are able to mount and unmount a container's private area, located at /vz/root/ctid, which provides the container with a root filesystem on the server. Mounting and unmounting containers comes in handy when you have trouble accessing the filesystem of your container.

Mounting a container

To mount a container, run the following command:

vzctl mount 101

In the previous example, we used the vzctl command with the mount option to mount the private area for the container 101.

Unmounting a container

To unmount a container, run the following command:

vzctl umount 101

In the previous example, we used the vzctl command with the umount option to unmount the private area for the container 101.

Disk quotas

Disk quotas allow you to define special limits for your container, such as the size of the filesystem or the number of inodes that are available for use.

Setting quotaon and quotaoff for a container

You can manually start and stop the container's disk quota by using the quotaon and quotaoff arguments with vzctl.

Turning on disk quota for a container

To turn on disk quota for a container, run the following command:

vzctl quotaon 101

In the previous example, we used the vzctl command with the quotaon option to turn disk quota on for the container 101.

Turning off disk quota for a container

To turn off disk quota for a container, run the following command:

vzctl quotaoff 101

In the previous example, we used the vzctl command with the quotaoff option to turn off disk quota for the container 101.

Setting disk quotas with vzctl set

You are able to set the disk quotas for your containers using the vzctl set command. With this command, you can set the disk space, the disk inodes, and the quota time. To set the disk space for container 101 to 2 GB, use the following command:

vzctl set 101 --diskspace 2000000:2200000 --save

In the previous example, we used the vzctl set command to set the disk space quota to a 2 GB soft limit and a 2.2 GB hard limit. The two values separated by the : symbol are the soft limit and the hard limit: the soft limit in this example is 2000000 and the hard limit is 2200000 (values in KB). The soft limit can be exceeded up to the value of the hard limit; the hard limit can never be exceeded. OpenVZ refers to soft limits as barriers and hard limits simply as limits.

To set the disk inode limit for container 101 to 1 million inodes, use the following command:

vzctl set 101 --diskinodes 1000000:1100000 --save

In the previous example, we used the vzctl set command to set the disk inode limits to a soft limit, or barrier, of 1 million inodes and a hard limit, or limit, of 1.1 million inodes.
To set the quota time, the period of time in seconds for which the container is allowed to exceed the soft limit values of the disk and inode quotas, use the following command:

vzctl set 101 --quotatime 900 --save

In the previous example, we used the vzctl command to set the quota time to 900 seconds, or 15 minutes. This means that once the container breaks the soft limit, it can exceed the quota up to the value of the hard limit for 15 minutes before the container reports that it is over quota.

Further use of vzctl set

The vzctl set command allows you to make modifications to the container's config file without having to edit the file manually. We are going to go over a few of the options that are essential for administering the node.

--onboot

The --onboot flag sets whether or not the container boots when the node boots. To set the onboot option, use the following command:

vzctl set 101 --onboot yes --save

In the previous example, we used the vzctl command with the set option and the --onboot flag to enable the container to boot automatically when the server is rebooted, and then saved the change to the container's configuration file.

--bootorder

The --bootorder flag changes the boot priority of the container: the higher the value, the sooner the container starts when the node boots. To set the bootorder option, use the following command:

vzctl set 101 --bootorder 9 --save

In the previous example, we used the vzctl command with the set option and the --bootorder flag to change the priority of the order in which the container is booted, and then saved the option to the container's configuration file.

--userpasswd

The --userpasswd flag changes the password of a user that belongs to the container. If the user does not exist, the user will be created. To set the userpasswd option, use the following command:

vzctl set 101 --userpasswd admin:changeme

In the previous example, we used the vzctl command with the set option and the --userpasswd flag to set the admin user's password to changeme.

--name

The --name flag gives the container a name that, once assigned, can be used in place of the CTID value when using vzctl. This makes your containers easier to keep track of: instead of memorizing the container ID, you just need to remember the container name. To set the name option, use the following command:

vzctl set 101 --name gotham --save

In the previous example, we used the vzctl command with the set option to set our container 101 to use the name gotham, and then saved the change to the container's configuration file.

--description

The --description flag adds a description to the container to give an idea of what the container is for. To use the description option, use the following command:

vzctl set 101 --description "Web Development Test Server" --save

In the previous example, we used the vzctl command with the set option and the --description flag to add the description "Web Development Test Server" to the container.

--ipadd

The --ipadd flag adds an IP address to the specified container. To use the ipadd option, use the following command:

vzctl set 101 --ipadd 192.168.2.103 --save

In the previous example, we used the vzctl command with the set option and the --ipadd flag to add the IP address 192.168.2.103 to container 101, and then saved the change to the container's configuration file.
--ipdel

The --ipdel flag removes an IP address from the specified container. To use the ipdel option, use the following command:

vzctl set 101 --ipdel 192.168.2.103 --save

In the previous example, we used the vzctl command with the set option and the --ipdel flag to remove the IP address 192.168.2.103 from the container 101, and then saved the change to the container's configuration file.

--hostname

The --hostname flag sets or changes the hostname of your container. To use the hostname option, use the following command:

vzctl set 101 --hostname gotham.example.com --save

In the previous example, we used the vzctl command with the set option and the --hostname flag to change the hostname of the container to gotham.example.com.

--disable

The --disable flag prevents a container from starting. While this option is in place, you will not be able to start the container until the option is removed. To use the disable option, use the following command:

vzctl set 101 --disable yes --save

In the previous example, we used the vzctl command with the set option and the --disable flag to prevent the container 101 from starting, and then saved the change to the container's configuration file.

--ram

The --ram flag sets the physical page limit of the container and helps regulate the amount of memory available to it. To use the ram option, use the following command:

vzctl set 101 --ram 2G --save

In the previous example, we set the physical page limit to 2 GB using the --ram flag.

--swap

The --swap flag sets the amount of swap memory available to the container. To use the swap option, use the following command:

vzctl set 101 --swap 1G --save

In the previous example, we set the container's swap limit to 1 GB using the --swap flag.

Summary

In this article, we learned to administer the containers created on the node by using the vzctl command, and to list the containers on the server with the vzlist command. The vzctl command takes a broad range of flags that allow you to perform many actions on a container: you can start, stop, restart, create, and destroy a container; suspend and resume its current state; mount and unmount it; and make changes to its config file using vzctl set.
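As a closing illustration, here is a minimal, hedged sketch that chains several of the commands covered above into a simple container-provisioning script. The container ID, template, name, hostname, and IP address are all hypothetical, and the script assumes an OS template is already cached on the node (vzctl create itself is covered in the product documentation rather than in this article):

#!/bin/bash
CTID=102   # hypothetical container ID
# Create the container from a cached OS template
vzctl create $CTID --ostemplate centos-6-x86_64
# Identity and networking
vzctl set $CTID --name metropolis --save
vzctl set $CTID --hostname metropolis.example.com --save
vzctl set $CTID --ipadd 192.168.2.102 --save
# Memory and boot behavior
vzctl set $CTID --ram 2G --swap 1G --save
vzctl set $CTID --onboot yes --save
# Start the container and confirm that it is running
vzctl start $CTID
vzlist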

Getting Started – Understanding Citrix XenDesktop and its Architecture

In this article written by Gurpinder Singh, author of the book Troubleshooting Citrix Xendesktop, we will cover the following topics:

Hosted shared vs hosted virtual desktops
Citrix FlexCast delivery technology
Modular framework architecture
What's new in XenDesktop 7.x

Hosted shared desktops (HSD) vs hosted virtual desktops (HVD)

Before going through the XenDesktop architecture, let's first explain the difference between the two desktop delivery platforms, HSD and HVD. This is a common question asked by every system administrator whenever the most suitable desktop delivery platform for an enterprise is discussed. The choice of delivery platform depends on the enterprise's requirements.

Some choose Hosted Shared Desktops (HSD) or Server Based Computing (XenApp) over Hosted Virtual Desktops (XenDesktop); here, a single server desktop is shared among multiple users, and the environment is locked down using Active Directory GPOs. XenApp is the more cost-effective of the two platforms, and many small to mid-sized enterprises prefer it for its cost benefits and lower complexity. However, this model does pose some risks to the environment, as the same server is shared by multiple users, and a proper design plan is required to configure a sound HSD or XenApp Published Desktop environment.

Many enterprises have security and other user-level requirements for which they prefer the hosted virtual desktop solution. A hosted virtual desktop, or XenDesktop, runs a Windows 7 or Windows 8 desktop as a virtual machine hosted in a data centre. In this model, a single user connects to a single desktop, so there is much less risk of a desktop configuration change impacting all users. XenDesktop 7.x and later versions also enable you to deliver server-based desktops (HSD) along with HVD within one product suite. XenDesktop also provides HVD pooled desktops, which work on a shared OS image concept similar to HSD desktops, with the difference of running a desktop operating system instead of a server operating system.

The following list gives a fair idea of the recommended delivery platform for common customer requirements:

Users work on one or two applications and rarely need to perform updates or installations on their own: Hosted Shared Desktop
Users work on their own core set of applications, for which they need to change system-level settings, perform installations, and so on: Hosted Virtual Desktop (Dedicated)
Users work on MS Office and other content creation tools: Hosted Shared Desktop
Users work on CPU- and graphics-intensive applications that require video rendering: Hosted Virtual Desktop (Blade PCs)
Users need admin privileges to work on a specific set of applications: Hosted Virtual Desktop (Pooled)

You can always have a mixed set of desktop delivery platforms in your environment, focused on the customer needs and requirements.

Citrix FlexCast delivery technology

Citrix FlexCast is a delivery technology that allows a Citrix administrator to personalize virtual desktops to meet the performance, security, and flexibility requirements of end users. User requirements differ: some need standard desktops with a standard set of apps, while others require high-performance personalized desktops.
Citrix has come up with a solution to meet these demands with Citrix FlexCast Technology. You can deliver any kind of virtualized desktop with FlexCast technology; its models fall into five different categories:

Hosted Shared or HSD
Hosted Virtual Desktop or HVD
Streamed VHD
Local VMs
On-Demand Apps

A detailed discussion of these models is out of scope for this article. To read more about the FlexCast models, please visit http://support.citrix.com/article/CTX139331.

Modular framework architecture

To understand the XenDesktop architecture, it is better to break it down into discrete, independent modules rather than visualizing it as one single integrated piece. Citrix provides this modularized approach to designing and architecting XenDesktop to meet end customers' requirements and objectives, with a platform that is highly resilient, flexible, and scalable. This reference architecture is based on information gathered by multiple Citrix consultants working on a wide range of XenDesktop implementations.

Everyone should be aware of the basic components of the XenDesktop architecture before getting involved with troubleshooting. We won't go through each component of the reference architecture in detail, as this is out of scope for this book (see http://www.citrix.com/content/dam/citrix/en_us/documents/products-solutions/xendesktop-deployment-blueprint.pdf); instead, we will go through each component quickly.

What's new in XenDesktop 7.x

With the release of Citrix XenDesktop 7, Citrix introduced a lot of improvements over previous releases. With every new product release there is a lot of published information, and sometimes it is difficult to extract the key points: what has changed, and what are the benefits of the new release? The purpose of this section is to highlight the key features that XenDesktop 7.x brings for Citrix administrators. It will not give you all the details regarding the new features and changes, but highlights the key points every Citrix administrator should be aware of while administering XenDesktop 7.

Key highlights:

XenApp and XenDesktop are now part of a single setup
Cloud integration to support desktop deployments in the cloud
The IMA database no longer exists; IMA is replaced by FMA (FlexCast Management Architecture)
There are no more zones or ZDCs (Zone Data Collectors)
Microsoft SQL Server is the only supported database
Sites are used instead of farms
XenApp and XenDesktop can now share consoles; Citrix Studio and Desktop Director are used for both products
The shadowing feature is deprecated; Citrix recommends using Microsoft Remote Assistance
Locally installed applications can be integrated with server-based desktops
HDX and mobility features
Profile Management is included
MCS can now be leveraged for both Server and Desktop OS
MCS now works with KMS
StoreFront replaces Web Interface
Remote PC Access
No more Citrix Streaming Profile Manager; Citrix recommends Microsoft App-V
The core component is replaced by a VDA agent

Summary

We should now have a basic understanding of desktop virtualization concepts, the architecture, the new features in XenDesktop 7.x, and the XenDesktop delivery models based on FlexCast Technology.


Introducing vSphere vMotion

In this article by Abhilash G B and Rebecca Fitzhugh, authors of the book Learning VMware vSphere, we will look at vSphere vMotion, a VMware technology used to migrate a running virtual machine from one host to another without altering its power state. The beauty of the whole process is that it is transparent to the applications running inside the virtual machine. In this section we will understand the inner workings of vMotion and learn how to configure it. There are different types of vMotion, such as:

Compute vMotion
Storage vMotion
Unified vMotion
Enhanced vMotion (X-vMotion)
Cross vSwitch vMotion
Cross vCenter vMotion
Long Distance vMotion

Compute vMotion is the default vMotion method and is employed by other features such as DRS, FT, and Maintenance Mode. When you initiate a vMotion, it starts an iterative copy of all memory pages. After the first pass, all the dirtied memory pages are copied again in another pass, and this is repeated until the number of pages left to copy is small enough to be transferred while the state of the VM is switched over to the destination host. During the switchover, the virtual machine's device state is transferred and resumed at the destination host. You can initiate up to 8 simultaneous vMotion operations on a single host.

Storage vMotion is used to migrate the files backing a virtual machine (virtual disks, configuration files, logs) from one datastore to another while the virtual machine is still running. When you initiate a Storage vMotion, it starts a sequential copy of the source disk in 64 MB chunks. While a region is being copied, all writes issued to that region are deferred until the region is copied. A source region that has already been copied is monitored for further writes: if there is a write I/O, it is mirrored to the destination disk as well. This mirroring of writes to the destination virtual disk continues until the sequential copy of the entire source virtual disk is complete. Once the sequential copy is complete, all subsequent READs and WRITEs are issued to the destination virtual disk. Keep in mind, though, that while the sequential copy is still in progress, all READs are issued to the source virtual disk. Storage vMotion is used by Storage DRS. You can initiate up to 2 simultaneous Storage vMotion operations on a single host.

Unified vMotion is used to migrate both the running state of a virtual machine and the files backing it from one host and datastore to another. Unified vMotion uses a combination of Compute and Storage vMotion to achieve the migration: first the configuration files and the virtual disks are migrated, and only then does the migration of the live state of the virtual machine begin. You can initiate up to 2 simultaneous Unified vMotion operations on a single host.

Enhanced vMotion (X-vMotion) is used to migrate virtual machines between hosts that do not share storage. Both the virtual machine's running state and the files backing it are transferred over the network to the destination. The migration procedure is the same as for Compute and Storage vMotion; in fact, Enhanced vMotion uses Unified vMotion to achieve the migration. Since the memory and disk states are transferred over the vMotion network, the ESXi hosts maintain a transmit buffer at the source and a receive buffer at the destination.
The transmit buffer collects data and places it on the network, while the receive buffer collects data received via the network and flushes it to storage. You can initiate up to 2 simultaneous X-vMotion operations on a single host.

Cross vSwitch vMotion allows you to choose a destination port group for the virtual machine. It is important to note that unless the destination port group supports the same L2 network, the virtual machine will not be able to communicate over the network. Cross vSwitch vMotion allows changing from a Standard vSwitch to a VDS, but not from a VDS to a Standard vSwitch; vSwitch to vSwitch and VDS to VDS migrations are supported.

Cross vCenter vMotion allows migrating virtual machines beyond a vCenter Server's boundary. This is a new enhancement in vSphere 6.0. However, for this to be possible, both vCenter Servers should be in the same SSO domain and should be in Enhanced Linked Mode. The infrastructure requirements for Cross vCenter vMotion are detailed in VMware Knowledge Base article 2106952 at the following link: http://kb.vmware.com/kb/2106952.

Long Distance vMotion allows migrating virtual machines over distances with a latency not exceeding 150 milliseconds. Prior to vSphere 6.0, the maximum supported network latency for vMotion was 10 milliseconds.

Using the provisioning interface

You can configure a provisioning interface to send all non-active data of the virtual machine being migrated. Prior to vSphere 6.0, vMotion used the vmkernel interface that has the default gateway configured on it (in most cases the management interface, vmk0) to transfer non-performance-impacting vMotion data. Non-performance-impacting vMotion data includes the virtual machine's home directory, older deltas in the snapshot chain, base disks, and so on; only the live data hits the vMotion interface. The provisioning interface is nothing but a vmkernel interface with Provisioning traffic enabled on it. The procedure is very similar to configuring a vmkernel interface for Management or vMotion traffic: edit the settings of the intended vmk interface and set Provisioning traffic as the enabled service.

It is important to keep in mind that the provisioning interface is not just meant for vMotion data; if enabled, it will also be used for cold migrations, cloning operations, and virtual machine snapshots. The provisioning interface can be configured to use a gateway other than the vmkernel's default gateway.
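If you manage hosts with VMware PowerCLI, the migrations described above can also be triggered from the command line. The following is a minimal sketch rather than the book's procedure, assuming PowerCLI is installed and you have rights on vCenter; the server, host, VM, and datastore names are hypothetical:

# Connect to vCenter (prompts for credentials)
Connect-VIServer -Server vcenter.example.com

# Make sure vMotion is enabled on the intended vmkernel interface of the destination host
Get-VMHost "esx02.example.com" | Get-VMHostNetworkAdapter -VMKernel |
    Where-Object { $_.Name -eq "vmk1" } |
    Set-VMHostNetworkAdapter -VMotionEnabled:$true -Confirm:$false

# Compute vMotion: move the running VM to another host
Move-VM -VM "web01" -Destination (Get-VMHost "esx02.example.com")

# Storage vMotion: move the VM's files to another datastore
Move-VM -VM "web01" -Datastore (Get-Datastore "gold-ds01")

Passing both -Destination and -Datastore in a single Move-VM call corresponds to the Unified vMotion (and, without shared storage, X-vMotion) scenarios described above. Enabling the Provisioning service itself is done through the vmk interface's settings in the vSphere Web Client, as described earlier.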