
How-To Tutorials - Servers

95 Articles

Let's Breakdown the Numbers

Packt
24 Oct 2013
8 min read
(For more resources related to this topic, see here.)

John Kirkland is an awesome "accidental" SQL Server DBA for Red Speed Bicycle LLC, a growing bicycle startup based in the United States. The company distributes bikes, bicycle parts, and accessories to various distribution points around the world. To say that they are performing well financially is an understatement. They are booming! They've expanded their business to Canada, Australia, France, and the United Kingdom in the last three years, and they have recently upgraded their SQL Server 2000 database to the latest version, SQL Server 2012.

Linda, from the Finance Group, asked John whether they could migrate their Microsoft Access reports to SQL Server 2012 Reporting Services. John installed SSRS 2012 in native mode. He decided to build the reports from the ground up so that the report development process would not interrupt operations in the Finance Group. There is only one caveat: John has never authored any reports in SQL Server Reporting Services (SSRS) before. Let's give John a hand and help him build his reports from the ground up. Then, we'll see more of his SSRS adventures as we follow his journey throughout this article.

Here's the first report requirement for John: a simple table that shows all the sales transactions in their database. Linda wants to see a report with the following data:

Date
Sales Order ID
Category
Subcategory
Product Name
Unit Price
Quantity
Line Total

We will build our report, and all succeeding reports in this article, using SQL Server Data Tools (SSDT). SSDT is a Visual Studio shell, an integrated environment used to build SQL Server database objects. You can install SSDT from the SQL Server installation media. In June 2013, Microsoft released SQL Server Data Tools-Business Intelligence (SSDTBI), a component that contains templates for SQL Server Analysis Services (SSAS), SQL Server Integration Services (SSIS), and SQL Server Reporting Services (SSRS) for Visual Studio 2012. SSDTBI replaced Business Intelligence Development Studio (BIDS) from previous versions of SQL Server. You therefore have two options for creating your SSRS reports: SSDT or Visual Studio 2012. If you use Visual Studio, you have to install the SSDTBI templates.

Let's create a new solution and name it SSRS2012Blueprints. For the following exercises, we're using SSRS 2012 in native mode. Also note that we're using the AdventureWorks2012 sample database throughout this article unless otherwise indicated. You can download the sample database from CodePlex at http://msftdbprodsamples.codeplex.com/releases/view/55330.

Defining a data source for the project

Now, let's define a shared data source and shared dataset for the first report. A shared data source and a shared dataset can be shared among the reports within the project:

1. Right-click on the Shared Data Sources folder under the SSRS2012Blueprints solution in the Solution Explorer window. If the Solution Explorer window is not visible, access it by navigating to Menu | View | Solution Explorer, or press Ctrl + Alt + L.
2. Select Add New Data Source, which displays the Shared Data Source Properties window. Let's name our data source DS_SSRS2012Blueprint.
3. For this demonstration, let's use the wizard to create the connection string. As a good practice, I use the wizard for setting up connection strings for my data connections. Aside from convenience, I'm quite confident that I'm getting the right connection that I want. Another option for setting the connection is through the Connection Properties dialog box: clicking on the Edit button next to the connection string box displays this dialog box.

Shared versus embedded data sources and datasets: as a good practice, always use shared data sources and shared datasets where appropriate. One characteristic of a productive development project is using reusable objects as much as possible.

For the connection, one option is to manually specify the connection string as shown:

Data Source=localhost;Initial Catalog=AdventureWorks2012

We may find this a convenient way of creating our data connections, but if you're new to the report environment you're currently working in, you may find setting up the connection string manually more cumbersome than setting it up through the wizard.
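The minimal string above names only the server and database, leaving authentication to the provider defaults. A version that states Windows authentication explicitly, assuming AdventureWorks2012 sits on your local default instance, would look like this:

Data Source=localhost;Initial Catalog=AdventureWorks2012;Integrated Security=True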
Always test the connection before saving your data source. After testing, click on the OK buttons on both dialog boxes.

Defining the dataset for the project

Our next step is to create the shared dataset for the project. Before doing that, let's create a stored procedure named dbo.uspSalesDetails. This is going to be the query for our dataset. Download the T-SQL code included in this article if you haven't done so already; we're going to use the T-SQL file named uspSalesDetails_Ch01.sql, and we will use the same stored procedure throughout this article unless otherwise indicated.

1. Right-click on the Shared Datasets folder in Solution Explorer, just like we did when we created the data source. That displays the Shared Dataset Properties dialog.
2. Let's name our dataset ds_SalesDetailReport. We use the stored procedure query type and select or type uspSalesDetails in the Select or enter stored procedure name drop-down combo box. Click on OK when you're done.
3. Before we work on the report itself, let's examine our dataset. In the Solution Explorer window, double-click on the dataset ds_SalesDetailReport.rsd, which displays the Shared Dataset Properties dialog box. Notice that the fields returned by our stored procedure have been automatically detected by the report designer. You can rename the fields here if you wish.

Ad hoc queries (the text query type) versus stored procedures: as a good practice, always use a stored procedure where a query is used. The primary reason for this is that a stored procedure is compiled into a single execution plan. Using stored procedures will also allow you to modify certain elements of your reports without modifying the actual report.
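The real dbo.uspSalesDetails ships with the article's code download. If you don't have it to hand, a minimal sketch against AdventureWorks2012 might look like the following; the @StartDate and @EndDate parameters are an illustrative assumption (substitute the parameter list from the download), and the joins simply gather the eight fields Linda asked for:

-- Hypothetical sketch of dbo.uspSalesDetails against AdventureWorks2012.
-- The date parameters are assumptions for illustration only.
CREATE PROCEDURE dbo.uspSalesDetails
    @StartDate date,
    @EndDate   date
AS
BEGIN
    SET NOCOUNT ON;

    SELECT  soh.OrderDate    AS [Date],
            soh.SalesOrderID AS [Sales Order ID],
            pc.Name          AS [Category],
            psc.Name         AS [Subcategory],
            p.Name           AS [Product Name],
            sod.UnitPrice    AS [Unit Price],
            sod.OrderQty     AS [Quantity],
            sod.LineTotal    AS [Line Total]
    FROM    Sales.SalesOrderHeader AS soh
            JOIN Sales.SalesOrderDetail AS sod
                ON sod.SalesOrderID = soh.SalesOrderID
            JOIN Production.Product AS p
                ON p.ProductID = sod.ProductID
            JOIN Production.ProductSubcategory AS psc
                ON psc.ProductSubcategoryID = p.ProductSubcategoryID
            JOIN Production.ProductCategory AS pc
                ON pc.ProductCategoryID = psc.ProductCategoryID
    WHERE   soh.OrderDate >= @StartDate
      AND   soh.OrderDate <  DATEADD(DAY, 1, @EndDate);  -- inclusive end date
END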
Creating the report file

Now, we're almost ready to build our first report from scratch. Perform the following steps:

1. Going back to the Solution Explorer window, right-click on the Reports folder. Note that selecting the Add New Report option will initialize the Report Wizard, which can build simple tabular or matrix reports. Go ahead if you want to try the wizard, but for the purpose of our demonstration we'll skip it. Select Add, instead of Add New Report, and then select New Item.
2. Selecting New Item displays the Add New Item dialog box. Choose the Report template (the default report template) in the template window and name the report SalesDetailsReport.rdl.
3. Click on the Add button to add the report to our project. Clicking on the Add button displays the empty report in the report designer.

Creating a parameterized report

You may have noticed that the stored procedure we created for the shared dataset is parameterized. It's a good practice to test all the queries on the database just to make sure we get the datasets that we need. Doing so will eliminate a lot of data quality issues during report execution. This is also the best time to validate all our data: we want our report consumers to have the correct data for making critical decisions. Let's execute the stored procedure in SQL Server Management Studio (SSMS) and take a look at the execution output to make sure that we're getting the results we want on the report (a sample call appears at the end of this section).

Now, we add a dataset to our report based on the shared dataset that we created previously:

1. Right-click on the Datasets folder in the Report Data window. If it's not open, you can open it by navigating to Menu | View | Report Data, or press Ctrl + Alt + D.
2. Selecting Add Dataset displays the Dataset Properties. Let's name our report dataset tblSalesReport. We will use this dataset as the underlying data for the table element that we will create to hold our report data.
3. Indicate that we want to use a shared dataset. A list of the project's shared datasets is displayed. We only have one at this point, ds_SalesDetailReport, so let's select that one and then click on OK.
4. Going back to the Report Data window, you may notice that we now have more objects under the Parameters and Datasets folders.
5. Switch to the Toolbox window. If you don't see it, go to Menu | View | Toolbox, or press Ctrl + Alt + X. Double-click or drag a table to the empty surface of the designer.
6. Let's add more columns to the table to accommodate all eight dataset fields. Click on the table, then right-click on the bar on the last column and select Insert Column | Right.
7. To add data to the report, let's drag each element from the dataset to its own cell in the table data region.

There are three data regions in SSRS: table, matrix, and list. In SSRS 2012, a fourth data region has been added, but you won't see it listed anywhere. It's called the tablix, and it is not shown as an option because it is built into the other three data regions. What we're doing in the preceding step is essentially dragging data into the underlying tablix data region.

But how can I add my parameters into the report, you may ask? Well, let's switch to the Preview tab. We should now see our parameters already built into the report, because we specified them in our stored procedure; they appear as prompts at the top of the preview.
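As promised above, here is the kind of SSMS sanity check meant by "execute the stored procedure and take a look at the execution output". The parameter names and dates follow the hypothetical sketch given earlier; use whatever the real procedure declares:

-- Smoke-test the dataset query before binding it to the report.
-- Parameter names and values are from the hypothetical sketch above.
EXEC dbo.uspSalesDetails
     @StartDate = '20070701',
     @EndDate   = '20070731';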


Microsoft SQL Server 2008 R2: Hierarchies, Collections, and MDS Metadata

Packt
21 Jul 2011
9 min read
(For more resources on this subject, see here.)

The reader is advised to refer to the previous article on Creating and Using Models, since this article builds on it.

Master Data Services includes a Hierarchy Management feature, where we can:

Browse all levels of a hierarchy
Move members within a hierarchy
Access the Explorer grid and all its functionality for all members of a given hierarchy

As we've seen already, there are two types of hierarchies in MDS: Derived Hierarchies and Explicit Hierarchies. We will now look at how to create and use both types.

Derived Hierarchies

In our example scenario, as we have stores in many different cities and states, we have a requirement to create a "Stores by Geography" hierarchy. In order to create the hierarchy, carry out the following steps:

1. Navigate to the System Administration function, which can be accessed from the Master Data Manager home page.
2. Hover over the Manage menu and click on the Derived Hierarchies menu item, which will open the Derived Hierarchy Maintenance page.
3. Click on the green plus icon to add a Derived Hierarchy, which will open the Add Derived Hierarchy page.
4. Enter Stores By Geography as the Derived Hierarchy Name and click on save.
5. The Edit Derived Hierarchy page will now be displayed, where we can build the hierarchy. On the left-hand side of the screen we can pick entities to be in our hierarchy, the middle pane displays the hierarchy in its current state, and a preview of the hierarchy with real data is shown on the right-hand side.
6. Drag the Store entity from the left-hand side of the screen and drop it onto the red Current Levels : Stores By Geography node in the center of the screen.
7. The choice of entities on the left-hand side will now change to the only two entities that are related to Store, namely City and StoreType. Repeat the drag-and-drop process, but this time drag the City entity onto the red Current Levels node, so that City sits above Store in the Current Levels hierarchy.
8. The Available Entities and Hierarchies pane will now be updated to show the State entity, as this is the only entity related to the City entity. Drag the State entity over to the red Current Levels node, above the City entity.
9. The Available Entities and Hierarchies pane will now be updated to show the Country entity. Drag the Country entity over to the red Current Levels node, above the State entity.

This is the last step in building our Stores By Geography hierarchy, which is now complete. We will now look at how we can browse and edit our new hierarchy.

Exploring Derived Hierarchies

Before we make any changes to the Derived Hierarchy, we will explore the user interface so that we are comfortable with how it is used. Carry out the following steps in order to browse the new hierarchy features:

1. Navigate to the home page and select the Explorer function.
2. Within the Explorer function, hover over the Hierarchies menu, where a menu item called Derived: Stores By Geography should appear. Click on this new item to display the Derived Hierarchy.

The buttons above the hierarchy tree structure are as follows (from left to right):

Pin Selected Item: hides all members apart from the selected item and all of its descendants. This option can be useful when browsing large hierarchies.
Locate Parent of Selected Item: the immediate parent of the selected member could be hidden if someone has chosen to pin an item (as above). Locate Parent of Selected Item will locate and display the member's parent, as well as any other children of that parent.
Refresh Hierarchy: refreshes the hierarchy tree to display the latest version, as edits could occur outside the immediate tree structure.
Show/Hide Names: toggles the hierarchy view between the member code plus name, or just the code. The default is to show both the member name and code.
Show/Hide Attributes: on the right-hand side of the screen, the children of the selected item are shown in the Explorer grid, along with all their attributes. This button shows or hides the Explorer grid.
View Metadata: displays a pop-up window showing the metadata for the selected member. We will discuss metadata towards the end of this article.

3. Select the DE {Germany} member by clicking on it. Note: the checkboxes are not how members are selected; instead, clicking on the member name selects the member.
4. Use the Pin Selected Item button to pin the DE {Germany} member, which will hide the siblings of Germany.
5. To now locate the parent of DE {Germany} and display the parent's other children (for example, USA and United Kingdom), click on DE {Germany}, then click on the Locate Parent of Selected Item button. The hierarchy tree will revert back to the original structure that we encountered.
6. Now that we have returned to the original hierarchy structure, expand the US member until the member CA {California} is visible. Click on this member, which will display some of the cities that we have loaded for the State of California.

Editing multiple entities

The above point illustrates one of the useful features of the hierarchy editor. Although we can edit individual entities using their respective Explorer grids, with a Derived Hierarchy we can edit multiple entities on a single page.

We don't actually need to edit the cities for the moment, but we do want to look at showing and hiding the Explorer grid. Click on the Show/Hide Attributes button to hide the Explorer grid, then click on the button again to make it reappear.

Finally, we're able to look at the metadata for the Derived Hierarchy. Click on the View Metadata button to open the Metadata Explorer. This is where we would look for any auxiliary information about the Derived Hierarchy, such as a description explaining what is in the hierarchy. We'll look at metadata in detail at the end of this article.

We will now look at how we add a new member in a Derived Hierarchy.

Adding a member in a Derived Hierarchy

Adding a member in a Derived Hierarchy achieves exactly the same thing as adding a member in the entity itself. The difference is that the member addition process, when carried out in a Derived Hierarchy, is slightly simplified, as the domain attribute (for example, City in the case of the Store entity) gets automatically completed by MDS. This is because in a Derived Hierarchy we choose to add a Store in a particular City, which negates the need to specify the City itself.

In our example scenario, we wish to open a new Store in Denver. Carry out the following steps to add the new Store:

1. Expand the US {United States} member of the Stores By Geography hierarchy, and then expand the CO {Colorado} member.
2. Click on the 136 {Denver} member.
3. On the far right-hand side of the screen, the Stores for Denver (of which there are none) will be shown. Click on the green plus icon to begin the process of adding a Store.
4. Enter the Name as AW Denver and the Code as 052. Click on the save icon to create the member.
5. Click on the pencil icon to edit the attributes of the new member. Note that the City attribute is already completed for us. Complete the remaining attributes with test data of your choice.
6. Click on the save icon to save the attribute values.
7. Click on the green back arrow button at the top of the screen in order to return to the Derived Hierarchy.

Notice that we now have a new Store that exists in the Derived Hierarchy, as well as a new row in the Explorer grid on the right-hand side of the screen. We will now continue to explore the functionality in the hierarchy interface by using Explicit Hierarchies.

Explicit Hierarchies

Whereas Derived Hierarchies rely on the relationships between different entities in order to exist, all the members within an Explicit Hierarchy come from a single entity. The hierarchy is made by creating explicit relationships between leaf members and the consolidated members that give the hierarchy more than one level. Explicit Hierarchies are useful for representing a ragged hierarchy, which is a hierarchy where the leaf members exist at different levels across the hierarchy.

In our example scenario, we wish to create a hierarchy that shows the reporting structures for our stores. Most stores report to a regional center, with the regional centers reporting to Head Office. However, some stores that are deemed to be important report directly to Head Office, which is why we need the Explicit Hierarchy.

Creating an Explicit Hierarchy

As we saw when creating the original Store entity in the previous article, an Explicit Hierarchy can be created automatically for us when we create an entity. While that is always an option, right now we will cover how to do this manually. In order to create the Explicit Hierarchy, carry out the following steps:

1. Navigate to the System Administration function.
2. Hover over the Manage menu and click on the Entities menu item.
3. Select the Store entity and then click on the pencil icon to edit the entity.
4. Select Yes from the Enable explicit hierarchies and collections drop-down.
5. Enter Store Reporting as the Explicit hierarchy name.
6. Uncheck the checkbox called Include all leaf members in mandatory hierarchy. If the checkbox is unchecked, a special hierarchy node called Unused will be created, where leaf members that are not required in the hierarchy will reside. If the checkbox is checked, then all leaf members will be included in the Explicit Hierarchy.
7. Click on the save icon to save the changes to the entity, which will return us to the Entity Maintenance screen and conclude the creation of the hierarchy.
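Everything above goes through the Master Data Manager UI. For completeness, MDS 2008 R2 also accepts member additions through its staging tables. The sketch below is a rough, hypothetical illustration from memory of that staging schema: the table, column, and procedure names, the use of Store as the model name, and the ID values should all be verified against your own installation before use.

-- Hypothetical sketch only: stage the AW Denver store via the MDS 2008 R2
-- staging tables instead of the UI. Verify all names and IDs locally.
INSERT INTO mdm.tblStgMember
        (ModelName, EntityName, MemberType_ID, MemberName, MemberCode)
VALUES  (N'Store', N'Store', 1, N'AW Denver', N'052');  -- 1 = leaf member

-- Set the domain-based City attribute to Denver (code 136).
INSERT INTO mdm.tblStgMemberAttribute
        (ModelName, EntityName, MemberType_ID, MemberCode,
         AttributeName, AttributeValue)
VALUES  (N'Store', N'Store', 1, N'052', N'City', N'136');

-- Process the staged rows into the model; user and version IDs differ
-- per installation, so this call is illustrative and left commented out.
-- EXEC mdm.udpStagingSweep @UserId = 1, @VersionId = 1, @Process = 1;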


SQL Server 2008 R2: Multiserver Management Using Utility Explorer

Packt
30 Jun 2011
6 min read
(For more resources on Microsoft SQL Server, see here.)

The Utility Control Point (UCP) collects configuration and performance information from each enrolled instance, including database file space utilization, CPU utilization, and storage volume utilization. Using Utility Explorer helps you to troubleshoot the resource health issues identified by the SQL Server UCP, such as mitigating over-utilized CPU on a single instance or on multiple instances. The UCP also helps in reporting troubleshooting information using the SQL Server Utility on issues that might include resolving a failed operation to enroll an instance of SQL Server with a UCP, troubleshooting failed data collection (which shows as gray icons in the managed instance list view on a UCP), mitigating performance bottlenecks, or resolving resource health issues.

The reader will benefit by referring to the previous articles on Best Practices for SQL Server 2008 R2 Administration and Managing the Core Database Engine before proceeding.

Getting ready

The UCP and all managed instances of SQL Server must satisfy the following prerequisites (a quick version check appears after this list):

The UCP SQL Server instance version must be SQL Server 2008 SP2 [10.00.4000.00] or higher
The managed instances must be a database engine only, and the edition must be Datacenter or Enterprise in a production environment
The UCP managed account must operate within a single Windows domain or domains with two-way trust relationships
The SQL Server service accounts for the UCP and managed instances must have read permission to Users in Active Directory
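To confirm the version prerequisite directly on a candidate UCP instance, a query of the server properties is enough; SERVERPROPERTY is a standard built-in function, so this runs on any instance:

-- Check the UCP prerequisite: expect 10.00.4000.00 (SQL Server 2008 SP2)
-- or higher, or any SQL Server 2008 R2 build.
SELECT  SERVERPROPERTY('ProductVersion') AS ProductVersion,
        SERVERPROPERTY('ProductLevel')   AS ProductLevel,  -- RTM, SP1, SP2...
        SERVERPROPERTY('Edition')        AS Edition;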
To set up the SQL Server Utility, you need to:

Create a UCP from the SQL Server Utility
Enroll data-tier applications
Enroll instances of SQL Server with the UCP
Define global and instance-level policies, and manage and monitor the instances

Since the UCP itself automatically becomes a managed instance once the UCP wizard is completed, the Utility Explorer content will display a graphical view of the various parameters.

How to do it...

To define the global and instance-level policies to monitor the multiple instances, use Utility Explorer from the SSMS tool and complete the following steps:

1. Click on Utility Explorer, and populate the server that is registered as the utility control point.
2. On the right-hand screen, click on the Utility Administration pane.
3. The evaluation time period and tolerance for percent violations are configurable using the Policy tab settings.
4. The default upper threshold utilization is 70 percent for the CPU, data file space, and storage volume utilization values. To change the policies, use the slider controls (up or down) to the right of each policy description. For this recipe, we have modified the upper threshold for CPU utilization to 50 percent and for data file space utilization to 80 percent. We have also reduced the upper limit for the storage volume utilization parameter.
5. The default lower threshold utilization is 0 percent for the CPU, data file space, and storage volume utilization values. To change the policies, use the slider controls (up only) to the right of each policy description. For this recipe, we have increased the lower threshold for CPU utilization to 5 percent.
6. Once the threshold parameters are changed, click on Apply for the changes to take effect. To return to the default system settings, click on either the Restore Defaults button or the Discard button.

Now, let us test whether the defined global policies are working or not. From the Query Editor, open a new connection against a SQL instance that is registered as a managed instance on the UCP, and execute the following time-intensive T-SQL statements:

create table test
(
    x int not null,
    y char(896) not null default (''),
    z char(120) not null default ('')
)
go

insert test (x)
select r from
(
    select row_number() over (order by (select 1)) r
    from master..spt_values a, master..spt_values b
) p
where r <= 4000000
go

create clustered index ix_x on test (x, y) with fillfactor = 51
go

The script simulates a data load process that will lead to slow performance on the managed SQL instance. After a few minutes, right-click on the Managed Instances option in Utility Explorer to see a snapshot of utilization for the managed instances. In addition to that snapshot, clicking on the Managed Instances option provides information on over-utilized database files on an individual instance.

We should now have completed the strategic steps to manage multiple instances using the Utility Explorer tool.

How it works...

The unified view of instances from Utility Explorer is the starting point of application and multi-server management that helps DBAs to manage multiple instances efficiently. Within the UCP, each managed instance of SQL Server is instrumented with a data collection set that queries configuration and performance data and stores it in the UMDW on the UCP every 15 minutes. By default, data-tier applications automatically become managed by the SQL Server Utility. Both of these entities are managed and monitored based on the global policy definitions or individual policy definitions.

Troubleshooting the resource health issues identified by a SQL Server UCP might include mitigating over-utilized CPU on a computer or on an instance of SQL Server, or mitigating over-utilized CPU for a data-tier application. Other issues might include resolving over-utilized file space for database files or resolving over-utilization of allocated disk space on a storage volume.

The managed instances health parameter collects the following system resource information:

CPU utilization for the instance of SQL Server
Database files utilization
Storage volume space utilization
CPU utilization for the computer

The status for each parameter is divided into four categories:

Well-utilized: the number of managed instances of SQL Server that are not violating resource utilization policies
Under-utilized: the number of managed resources that are violating resource underutilization policies
Over-utilized: the number of managed resources that are violating resource overutilization policies
No Data Available: data is not available for a managed instance of SQL Server, either because the instance has just been enrolled and the first data collection operation has not completed, or because there is a problem with the managed instance collecting and uploading data to the UCP

The data collection process begins immediately, but it can take up to 30 minutes for data to appear in the dashboard and viewpoints in the Utility Explorer content pane. Thereafter, the data collection set for each managed instance of SQL Server sends the relevant configuration and performance data to the UCP every 15 minutes.
Summary

This article on SQL Server 2008 R2 covered multiserver management using Utility Explorer.

Further resources on this subject:

Best Practices for Microsoft SQL Server 2008 R2 Administration [Article]
Microsoft SQL Server 2008 R2: Managing the Core Database Engine [Article]
Managing Core Microsoft SQL Server 2008 R2 Technologies [Article]
SQL Server 2008 R2 Technologies: Deploying Master Data Services [Article]
Getting Started with Microsoft SQL Server 2008 R2 [Article]
Microsoft SQL Server 2008 - Installation Made Easy [Article]
Creating a Web Page for Displaying Data from SQL Server 2008 [Article]
Ground to SQL Azure migration using MS SQL Server Integration Services [Article]
Microsoft SQL Server 2008 High Availability: Understanding Domains, Users, and Security [Article]


Tuning server performance with memory management and swap

Packt
24 Jun 2015
7 min read
In this article, by Jonathan Hobson, the author of Troubleshooting CentOS, we will learn about memory management, swap, and swappiness.

(For more resources related to this topic, see here.)

A deeper understanding of the underlying active processes in CentOS 7 is an essential skill for any troubleshooter. From high load averages to slow response times, system overloads to dead and dying processes, there comes a time when every server may start to feel sluggish, act impoverished, or fail to respond, and as a consequence, it will require your immediate attention.

Regardless of how you look at it, the question of memory usage remains critical to the life cycle of a system. Whether you are maintaining system health or troubleshooting a particular service or application, you will always need to remember that memory is a critical resource on your system. For this reason, we will begin by calling the free command in the following way:

# free -m

The main elements of the resulting output will look similar to this:

                 total   used   free   shared   buffers   cached
Mem:              1837    274   1563        8         0      108
-/+ buffers/cache:        164   1673
Swap:             2063      0   2063

In the example shown, I have used the -m option to ensure that the output is formatted in megabytes, which makes it easier to read. For the sake of troubleshooting, rather than trying to understand every numeric value shown, let's reduce the scope of the output to highlight the relevant area of concern:

-/+ buffers/cache:        164   1673

The importance of this line is that it accounts for the associated buffers and caches, illustrating what memory is currently being used and what is held in reserve. Where the first value indicates how much memory is being used, the second value tells us how much memory is available to our applications. In the example shown, this translates into 164 MB of used memory and 1673 MB of available memory.

Bearing this in mind, let me draw your attention to the final line so that we can examine the importance of swap:

Swap:             2063      0   2063

Swapping typically occurs when memory usage is impacting performance. As we can see from the preceding example, the first value tells us that the total amount of system swap is set at 2063 MB, the second value indicates how much swap is being used (0 MB), and the third value shows the amount of swap that is still available to the system as a whole (2063 MB). So yes, based on the example data shown here, we can conclude that this is a healthy system and no swap is being used.

While we are here, let's use this time to discover more about the swap space on your system. To begin, we will revisit the contents of the proc directory and reveal the total and used swap size by typing the following command:

# cat /proc/swaps

Assuming that you understand the output shown, you should then investigate the level of swappiness used by your system with the following command:

# cat /proc/sys/vm/swappiness

Having done this, you will now see a numeric value in the range 0-100. The value is a percentage, and it implies that, if your system has a value of 30, for example, it will begin to use swap memory at 70 percent occupation of RAM. The default for all Linux systems is usually a notional value between 30 and 60, but you can use either of the following commands to temporarily change the swappiness of your system. This can be achieved by replacing the value of X with a numeric value from 1-100 by typing:

# echo X > /proc/sys/vm/swappiness

Or, more specifically, this can also be achieved with:

# sysctl -w vm.swappiness=X

If you change your mind at any point, you have two options to ensure that no permanent changes have been made: either repeat one of the preceding two commands with the original value, or issue a full system reboot. On the other hand, if you want to make the change persist, you should edit the /etc/sysctl.conf file and add your swappiness preference in the following way:

vm.swappiness=X

When complete, simply save and close the file to ensure that the changes take effect.
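Putting the preceding commands together, a complete worked sequence might look like this; the value 10 is purely illustrative, and appending to /etc/sysctl.conf assumes the key is not already present in that file:

# cat /proc/sys/vm/swappiness                    # inspect the current value
30
# sysctl -w vm.swappiness=10                     # temporary change, lost on reboot
vm.swappiness = 10
# echo 'vm.swappiness=10' >> /etc/sysctl.conf    # persist the preference
# sysctl -p                                      # reload /etc/sysctl.conf
vm.swappiness = 10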
The level of swappiness controls the tendency of the kernel to move a process out of physical RAM onto the swap disk. This is memory management at work, but it is important to realize that swapping will not occur immediately: the level of swappiness is expressed as a percentage value, so the process of swapping should be viewed more as a measure of preference for using the cache. As every administrator will know, you can also clear the swap by using the commands swapoff -a and swapon -a to achieve the desired result.

The golden rule is to realize that a system displaying a level of swappiness close to the maximum value (100) will prefer to begin swapping inactive pages, because a value of 100 is representative of 0 percent occupation of RAM. By comparison, the closer your system is to the lowest value (0), the less likely swapping is to occur, as 0 is representative of 100 percent occupation of RAM.

Generally speaking, we would all probably agree that systems with a very large pool of RAM would not benefit from aggressive swapping. However, and just to confuse things further, let's look at it in a different way. We all know that a desktop computer will benefit from a low swappiness value, but in certain situations you may also find that a system with a large pool of RAM (running batch jobs) may benefit from a moderate to aggressive swap, in a fashion similar to a system that attempts to do a lot but only uses small amounts of RAM. So, in reality, there are no hard and fast rules; the use of swap should be based on the needs of the system in question, rather than on a single solution applied across the board.

Taking this further, special care and consideration should be taken when changing the swappiness values, as RAM that is not used by an application is used as disk cache. By decreasing swappiness, you are actually increasing the chance of an application not being swapped out, and you are thereby decreasing the overall size of the disk cache. This can make disk access slower. However, if you do increase the preference to swap, then, because hard disks are slower than memory modules, it can lead to slower response times across the overall system. Swapping can be confusing, but by knowing this, we can also appreciate the hidden irony of swappiness. As Newton's third law of motion states, for every action there is an equal and opposite reaction, and finding the optimum swappiness value may require some additional experimentation.

Summary

In this article, we learned some basic yet vital commands that help us gauge and maintain server performance with the help of swappiness.
Resources for Article:

Further resources on this subject:

Installing CentOS [Article]
Managing public and private groups [Article]
Installing PostgreSQL [Article]


Getting Ready for Your First BizTalk Services Solution

Packt
24 Mar 2014
5 min read
(For more resources related to this topic, see here.)

Deployment considerations

You will need to consider the BizTalk Services edition required for your production use, as well as the environment for test and/or staging purposes. This depends on decision points such as:

Expected message load on the target system
Capabilities that are required now versus six months down the line
IT requirements around compliance, security, and DR

The list of capabilities across different editions is outlined in the Windows Azure documentation page at http://www.windowsazure.com/en-us/documentation/articles/biztalk-editions-feature-chart.

Note on BizTalk Services editions and signup: BizTalk Services is currently available in four editions: Developer, Basic, Standard, and Premium, each with varying capabilities and prices. You can sign up for BizTalk Services from the Azure portal. The Developer SKU contains all the features needed to try and evaluate the service without worrying about production readiness. We use the Developer edition for all examples.

Provisioning BizTalk Services

A BizTalk Services deployment can be created using the Windows Azure Management Portal or using PowerShell. We will use the former in this example.

Certificates and ACS

Certificates are required for communication using SSL, and the Access Control Service (ACS) is used to secure the endpoints of the BizTalk Services deployment. First, you need to know whether you need a custom domain for the BizTalk Services deployment. In the case of test or developer deployments, the answer is mostly no. A BizTalk Services deployment will autogenerate a self-signed certificate with an expiry of close to five years, and the ACS required for the deployment will also be autocreated. Certificate and Access Control Service details are required for sending messages to bridges and agreements, and can be retrieved from the Dashboard page post deployment.

Storage requirements

You need to create an Azure SQL database for tracking data. It is recommended to use the Business edition with the appropriate size; for test purposes, you can start with the 1 GB Web edition. You also need to pass the storage account credentials to archive message data. It is recommended that you create a new Azure SQL database and storage account for use with BizTalk Services only.

The BizTalk Services create wizard

Now that we have the security and storage details figured out, let us create a BizTalk Services deployment from the Azure Management Portal:

1. From the Management Portal, navigate to New | App Services | BizTalk Service | Custom Create.
2. Enter a unique name for the deployment, keeping the following values: EDITION: Developer, REGION: East US, TRACKING DATABASE: Create a new SQL Database instance.
3. On the next page, retain the default database name, choose the SQL server, and enter the server login name and password. There can be six SQL server instances per Azure subscription.
4. On the next page, choose the storage account for archiving and monitoring information.
5. Deploy the solution.

The deployment takes roughly 30 minutes to complete. After completion, you will see the status of the deployment as Active. Navigate to the deployment dashboard page, click on CONNECTION INFORMATION, note down the ACS credentials, and download the deployment SSL certificate. The SSL certificate needs to be installed on the client machine where the Visual Studio SDK will be used.
BizTalk portal registration

We have one step remaining, and that is to configure the BizTalk Services Management Portal to view agreements, bridges, and their tracking data. For this, perform the following steps:

1. Click on Manage from the Dashboard screen. This will launch <mydeployment>.portal.biztalk.windows.net, where the BizTalk Portal is hosted.
2. Some of the fields, such as the user's Live ID and deployment name, will be auto-populated.
3. Enter the ACS Issuer name and ACS Issuer secret noted in the previous step and click on Register.

Creating your first BizTalk Services solution

Let us put things into action and use the deployment created earlier to address a real-world multichannel sales scenario.

Scenario description

A trader, Northwind, manages an e-commerce website for online customer purchases. They also receive bulk orders from event firms and corporates for their goods. Northwind needs to develop a solution to validate an order and route the request to the right inventory location for delivery of the goods. The incoming request is an XML file with the order details. The requests from event firms and corporates arrive over FTP, while e-commerce website requests arrive over HTTP. After the order is processed, if the customer location is inside the US, the request is forwarded to a relay service at a US address. For all other locations, the order goes to the central site and is sent to a Service Bus Queue at IntlAddress, with the location as a promoted property.

Prerequisites

Before we start, we need to set up the client machine to connect to the deployment created earlier by performing the following steps:

1. Install the certificate downloaded from the deployment on your client box in the trusted root store. This authenticates any SSL traffic between your client and the integration solution on Azure.
2. Download and install the BizTalk Services SDK (https://go.microsoft.com/fwLink/?LinkID=313230) so that the developer project experience lights up in Visual Studio 2012.
3. Download the BizTalk Services EAI tools' Message Sender and Message Receiver samples from the MSDN Code Gallery, available at http://code.msdn.microsoft.com/windowsazure.

Realizing the solution

We will break down the implementation details into defining the incoming format and creating the bridge, including transports to process incoming messages, and the creation of the target endpoints, relay, and Service Bus Queue.

Creating a BizTalk Services project

You can create a new BizTalk Services project in Visual Studio 2012.

Summary

This article discussed deployment considerations, provisioning BizTalk Services, BizTalk portal registration, and the prerequisites for creating your first BizTalk Services solution.

Resources for Article:

Further resources on this subject:

Using Azure BizTalk Features [Article]
BizTalk Application: Dynamics AX Message Outflow [Article]
Setting up a BizTalk Server Environment [Article]