
How-To Tutorials - Web Development

1797 Articles

Optimizing your MySQL Servers' performance using Indexes

Packt
29 Jun 2010
11 min read
Introduction One of the most important features of relational database management systems—MySQL being no exception—is the use of indexes to allow rapid and efficient access to the enormous amounts of data they keep safe for us. In this article, we will provide some useful recipes for you to get the most out of your databases. Infinite storage, infinite expectations We have got accustomed to nearly infinite storage space at our disposal—storing everything from music to movies to high resolution medical imagery, detailed geographical information,or just plain old business data. While we take it for granted that we hardly ever run out of space, we also expect to be able to locate and retrieve every bit of information we save in an instant. There are examples everywhere in our lives—business and personal: Your pocket music player's library can easily contain tens of thousands of songs and yet can be browsed effortlessly by artist name or album title, or show you last week's top 10 rock songs. Search engines provide thousands of results in milliseconds for any arbitrary search term or combination. A line of business application can render your sales numbers charted and displayed on a map, grouped by sales district in real-time. These are a few simple examples, yet for each of them huge amounts of data must be combed to quickly provide just the right subset to satisfy each request. Even with the immense speed of modern hardware, this is not a trivial task to do and requires some clever techniques. Speed by redundancy Indexes are based on the principle that searching in sorted data sets is way faster than searching in unsorted collections of records. So when MySQL is told to create an index on one or more columns, it copies these columns' contents and stores them in a sorted manner. The remaining columns are replaced by a reference to the original table with the unsorted data. This combines two benefits—providing fast retrieval while maintaining reasonably efficient storage requirements. So, without wasting too much space this approach enables you to create several of those indexes (or indices, both are correct) at a relatively low cost. However, there is a drawback to this as well: while reading data, indexes allow for immense speeds, especially in large databases; however, they do slow down writing operations. In the course of INSERTs, UPDATEs, and DELETEs, all indexes need to be updated in addition to the data table itself. This can place significant additional load on the server, slowing down all operations. For this reason, keeping the number of indexes as low as possible is paramount, especially for the largest tables where they are most important. In this article, you'll find some recipes that will help you to decide how to define indexes and show you some pitfalls to avoid. Storage engine differences We will not go into much detail here about the differences between the MyISAM and the InnoDB storage engines offered by MySQL. However, regarding indexes there are some important differences to know between MySQL's two most important storage engines. They influence some decisions you will have to make. MyISAM In the figure below you can see a simplified schema of how indexes work with the MyISAM storage engine. Their most important property can be summed up as "all indexes are created equal". This means that there is no technical difference between the primary and secondary keys. The diagram shows a single (theoretical) data table called books. 
It has three columns named isbn, title, and author. This is a very simple schema, but it is sufficient for explanation purposes. The exact definition can be found in the Adding indexes to tables recipe in this article. For now, it is not important. MyISAM tables store information in the order it is inserted. In the example, there are three records representing a single book each. The ISBN number is declared as the primary key for this table. As you can see, the records are not ordered in the table itself—the ISBN numbers are out of what would be their lexical order. Let's assume they have been inserted by someone in this order. Now, have a look at the first index—the PRIMARY KEY. The index is sorted by the isbn column. Associated with each index entry is a row pointer that leads to the actual data record in the books table. When looking up a specific ISBN number in the primary key index, the database server follows the row pointer to retrieve the remaining data fields. The same holds true for the other two indexes IDX_TITLE and IDX_AUTHOR, which are sorted by the respective fields and also contain a row pointer each. Looking up a book's details by any one of the three possible search criteria is a two-part operation: first, find the index record, and then follow the row pointer to get the rest of the data. With this technique you can insert data very quickly because the actual data records are simply appended to the table. Only the relatively small index records need to be kept in order, meaning much less data has to be shuffled around on the disk. There are drawbacks to this approach as well. Even in cases where you only ever want to look up data by a single search column, there will be two accesses to the storage subsystem—one for the index, another for the data. InnoDB However, InnoDB is different. Its index system is a little more complicated, but it has some advantages: Primary (clustered) indexes Whereas in MyISAM all indexes are structured identically, InnoDB makes a distinction between the primary key and additional secondary ones. The primary index in InnoDB is a clustered index. This means that one or more columns of each record make up a unique key that identifies this exact record. In contrast to other indexes, a clustered index's main property is that it itself is part of the data instead of being stored in a different location. Both data and index are clustered together. An index is only serving its purpose if it is stored in a sorted fashion. As a result, whenever you insert data or modify the key column(s), it needs to be put in the correct location according to the sort order. For a clustered index, the whole record with all its data has to be relocated. That is why bulk data insertion into InnoDB tables is best performed in correct primary key order to minimize the amount of disk I/O needed to keep the records in index order. Moreover, the clustered index should be defined so that it is hardly ever changed for existing rows, as that too would mean relocating full records to different sectors on the disk. Of course, there are significant advantages to this approach. One of the most important aspects of a clustered key is that it actually is a part of the data. This means that when accessing data through a primary key lookup, there is no need for a two-part operation as with MyISAM indexes. 
The location of the index is at the same time the location of the data itself—there is no need for following a row pointer to get the rest of the column data, saving an expensive disk access. Secondary indexes Consider if you were to search for a book by title to find out the ISBN number. An index on the name column is required to prevent the database from scanning through the whole (ISBN-sorted) table. In contrast to MyISAM, the InnoDB storage engine creates secondary indexes differently. Instead of record pointers, it uses a copy of the whole primary key for each record to establish the connection to the actual data contents. In the previous figure, have a look at the IDX_TITLE index. Instead of a simple pointer to the corresponding record in the data table, you can see the ISBN number duplicated as well. This is because the isbn column is the primary key of the books table. The same goes for the other indexes in the figure—they all contain the book ISBN number as well. You do not need to (and should not) specify this yourself when creating and indexing on InnoDB tables, it all happens automatically under the covers. Lookups by secondary index are similar to MyISAM index lookups. In the first step, the index record that matches your search term is located. Then secondly, the remaining data is retrieved from the data table by means of another access—this time by primary key. As you might have figured, the second access is optional, depending on what information you request in your query. Consider a query looking for the ISBN numbers of all known issues of Moby Dick: SELECT isbn FROM books WHERE title LIKE 'Moby Dick%'; Issued against a presumably large library database, it will most certainly result in an index lookup on the IDX_TITLE key. Once the index records are found, there is no need for another lookup to the actual data pages on disk because the ISBN number is already present in the index. Even though you cannot see the column in the index definition, MySQL will skip the second seek saving valuable I/O operations. But there is a drawback to this as well. MyISAM's row pointers are comparatively small. The primary key of an InnoDB table can be much bigger—the longer the key, the more the data that is stored redundantly. In the end, it can often be quite difficult to decide on the optimal balance between increased space requirements and maintenance costs on index updates. But do not worry; we are going to provide help on that in this article as well. General requirements for the recipes in this article All the recipes in this article revolve around changing the database schema. In order to add indexes or remove them, you will need access to a user account that has an effective INDEX privilege or the ALTER privilege on the tables you are going to modify. While the INDEX privilege allows for use of the CREATE INDEX command, ALTER is required for the ALTER TABLE ADD INDEX syntax. The MySQL manual states that the former is mapped to the latter automatically. However, an important difference exists: CREATE INDEX can only be used to add a single index at a time, while ALTER TABLE ADD INDEX can be used to add more than one index to a table in a single go. This is especially relevant for InnoDB tables because up to MySQL version 5.1 every change to the definition of a table internally performs a copy of the whole table. While for small databases this might not be of any concern, it quickly becomes infeasible for large tables due to the high load copying may put on the server. 
With more recent versions this might have changed, but make sure to consult your version's manual. In the recipes throughout this article, we will consistently use the ALTER TABLE ADD INDEX syntax to modify tables, assuming you have the appropriate privileges. If you do not, you will have to rewrite the statements to use the CREATE INDEX syntax.

Adding indexes to tables

Over time, the requirements for a software product usually change, and the underlying database is affected as well. Often the need for new types of queries arises, which makes it necessary to add one or more new indexes so that these new queries run fast enough. In this recipe, we will add two new indexes to an existing table called books in the library schema: one will cover the author column, the other the title column. The schema and table can be created like this:

mysql> CREATE DATABASE library;
mysql> USE library;
mysql> CREATE TABLE books (
         isbn char(13) NOT NULL,
         author varchar(64) default NULL,
         title varchar(64) NOT NULL,
         PRIMARY KEY (isbn)
       ) ENGINE=InnoDB;

Getting ready

Connect to the database server with your administrative account.
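The excerpt ends before the recipe's actual statements, but given the two columns named above and the ALTER TABLE ADD INDEX syntax the article recommends, the steps presumably boil down to something like the following sketch. The index names reuse the IDX_AUTHOR and IDX_TITLE convention from the figures discussed earlier; treat this as an illustration rather than the book's exact listing.

```sql
-- Both indexes in one ALTER TABLE, so InnoDB rebuilds the table only once
ALTER TABLE books
  ADD INDEX IDX_AUTHOR (author),
  ADD INDEX IDX_TITLE (title);

-- Equivalent with CREATE INDEX, which can only add one index per statement
-- (and therefore triggers a separate table rebuild for each on older versions)
CREATE INDEX IDX_AUTHOR ON books (author);
CREATE INDEX IDX_TITLE ON books (title);

-- Verify the result
SHOW INDEX FROM books;
```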


Build your own Application to access Twitter using Java and NetBeans: Part 4

Packt
28 Jun 2010
7 min read
In the 3rd part of this article series, we learnt about: Added a Tabbed Pane component to your SwingAndTweet application, to show your own timeline on one tab and your friend’s timeline on another tab Used a JScrollPane component to add vertical and horizontal scrollbars to your friends’ timeline list Used the getFriendsTimeline() method from the Twitter4J API to get the 20 most recent tweets from your friend’s timeline Applied font styles to your JLabel components via the Font class Added a black border to separate each individual tweet by using the BorderFactory and Color classes Added the date and time of creation of each individual tweet by using the getCreatedAt() method from the twitter4j.Status interface, along with the Date class. All those things, we learnt in the third part of the article series were a big improvement for our Twitter client, but wouldn’t it be cool if you could click on the URL links from your friends’ time line and then a web browser window would open automatically to show you the related webpage? Well, after reading this part of the article series, you’ll be able to integrate this functionality into your own Twitter client among other things. Here are the links to the earlier articles of this article series: Read Build your own Application to access Twitter using Java and NetBeans: Part 1 Read Build your own Application to access Twitter using Java and NetBeans: Part 2 Read Build your own Application to access Twitter using Java and NetBeans: Part 3 Using a JEditorPane component Till now, we’ve been working with JPanel objects to show your Twitter information inside the JTabbedPane component. But as you can see from your friends’ tweets, the URL links that show up aren’t clickable. And how can we make them clickable? Well, fortunately for us there’s a Swing component called JEditorPane that will let us use HTML markup, so the URL hyperlinks will show up as if you were on a web page. Cool, huh? Now let’s start with the dirty job… Open your NetBeans IDE along with your SwingAndTweet project, and make sure you’re in the Source View. Scroll up to the import declarations section and type import javax.swing.JEditorPane; right below the last import declaration, so your code looks as shown below: Now scroll down to the last line of code, JLabel statusUser;, and type JEditorPane statusPane; just below that line, as shown in the following screenshot: The next step is to add statusPane to the JTabbedPane1 component in your application. Scroll through the code until you locate the //code for the Friends timelineline and the try-block code below that line; then type the following code block just above the for statement: String paneContent = new String();statusPane = new JEditorPane();statusPane.setContentType("text/html");statusPane.setEditable(false); The following screenshot shows how your code must look like after inserting the above block of code (the red square indicates the lines you must add): Now scroll down through the code inside the for statement and type paneContent = paneContent + statusUser.getText() + "<br>" + statusText.getText() + "<hr>"; right after the jPanel1.add(individualStatus); line, as shown below: Then add the following two lines of code after the closing brace of the try block: statusPane.setText(paneContent);jTabbedPane1.add("Friends - Enhanced", statusPane); The following screenshot shows how your code must look like after the insertion: Run your application and log into your Twitter account. 
A new tab will appear in your Twitter client, and if you click on it you’ll see your friends’ latest tweets, as in the following screenshot: If you take a good look at the screen, you’ll notice the new tab you added with the JEditorPane component doesn’t show a vertical scroll bar so you can scroll up and down to see the complete list. That’s pretty easy to fix: First add the import javax.swing.JScrollPane; line to the import declarations section and then replace the jTabbedPane1.add("Friends - Enhanced", statusPane); line you added on step 9 with the following lines: JScrollPane editorScrollPane = new JScrollPane(statusPane, JScrollPane.VERTICAL_SCROLLBAR_AS_NEEDED, // vertical bar policy JScrollPane.HORIZONTAL_SCROLLBAR_NEVER ); // horizontal bar policyjTabbedPane1.add("Friends - Enhanced", editorScrollPane); Your code should now look like this: Run your Twitter application again and this time you’ll see the vertical scrollbar: Let’s stop for a while to review our progress so far. In the first step above the exercise, you added an import declaration to tell the Java compiler that we need to use an object from the JEditorPane class. In step 3, you added a JEditorPane object called statusPane to your application. This object acts as a container for your friends’ tweets. And in case you’re wondering why we didn’t use a regular JPanel object, just remember that we want to make the URL links in your friends’ tweets clickable, so when you click on one of them, a web browser window will pop up to show you the web page associated to that hyperlink. Now let’s get back to our exercise. In step 4, you added four lines to your application’s code. The first line: String paneContent = new String(); creates a String variable called paneContent to store the username and text of each individual tweet from your friends’ timeline. The next three lines: statusPane = new JEditorPane();statusPane.setContentType("text/html");statusPane.setEditable(false); create a JEditorPane object called statusPane, set its content type to text/html so we can include HTML markup and make the statusPane non-editable, so nothing gets messed up when showing your friends’ timeline. Now that we have the statusPane ready to roll, we need to fill it up with the information related to each individual tweet from your friends. That’s why we need the paneContent variable. In step 6, you inserted the following line: paneContent = paneContent + statusUser.getText() + "<br>" + statusText.getText() + "<hr>"; inside the for block to add the username and the text of each individual tweet to the paneContent variable. The <br> HTML tag inserts a line break so the username appears in one line and the text of each tweet appears in another line. The <hr> HTML tag inserts a horizontal line to separate one tweet from the other. Once the for loop ends, we need to add the information from the paneContent variable to the JEditorPane object called statusPane. That’s why in step 7, you added the following line: statusPane.setText(paneContent); and then the jTabbedPane1.add("Friends - Enhanced", statusPane); line creates a new tab in the jTabbedPane1 component and adds the statusPane component to it, so you can see the friends timeline with HTML markup. In step 10, you learned how to create a JScrollPane object called editorScrollPane to add scrollbars to your statusPane component and integrate them into the jTabbedPane1 container. 
In this example, the JScrollPane constructor requires arguments: the statusPane component, the vertical scrollbar policy and the horizontal scrollbar policy. There are three options you can choose for your vertical and horizontal scrollbars: show them as needed, never show them or always show them. In this specific case, we need the vertical scrollbar to show up as needed, in case the list of your friends’ tweets doesn’t fit the screen, so we use the JScrollPane.VERTICAL_SCROLLBAR_AS_NEEDED policy. And since we don’t need the horizontal bar to show up because the statusPane component can adjust its horizontal size to fit your application’s window, we use the JScrollPane.HORIZONTAL_SCROLLBAR_NEVER policy. The last line of code from step 10 adds the editorScrollPane component to the jTabbedPane1 container instead of adding the statusPane component directly, because now the JEditorPane component is contained within the JScrollPane component. Now let’s see how to convert the URL links to real hyperlinks.
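The conversion itself is not shown in this excerpt, but the standard Swing approach is to register a HyperlinkListener on the JEditorPane and hand activated links to the system browser. The sketch below is an assumption about how that next step could look; statusPane comes from the article, everything else is illustrative, and Desktop requires Java 6 or later.

```java
import java.awt.Desktop;
import javax.swing.event.HyperlinkEvent;
import javax.swing.event.HyperlinkListener;

// Register once, right after creating statusPane in step 4.
// The listener only fires for real <a href="..."> anchors, so the tweet text
// appended to paneContent would also need its URLs wrapped in anchor tags.
statusPane.addHyperlinkListener(new HyperlinkListener() {
    public void hyperlinkUpdate(HyperlinkEvent event) {
        if (event.getEventType() == HyperlinkEvent.EventType.ACTIVATED) {
            try {
                // Open the activated link in the default web browser
                Desktop.getDesktop().browse(event.getURL().toURI());
            } catch (Exception ex) {
                ex.printStackTrace();
            }
        }
    }
});
```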


Oracle Enterprise Manager Grid Control

Packt
28 Jun 2010
9 min read
Evolution of IT systems: As architectural patterns moved from monolithic systems to distributed systems, not all IT systems were moved to the newest patterns. Some new systems were built with new technologies and patterns whereas existing systems that were performing well enough continued on earlier technologies. Best of breed approach: With multi-tiered architectures, enterprises had the choice of building each tier using best of breed technology for that tier. For example, one system could be built using a J2EE container from vendor A, but a database from vendor B. Avoiding single vendors and technologies: Enterprises wanted to avoid dependence on any single vendor and technology. This led to systems being built using different technologies. For example, an order-booking system built using .NET technologies on Windows servers, but an order shipment system built using J2EE platform on Linux servers. Acquisitions and Mergers: Through acquisitions and mergers, enterprises have inherited IT systems that were built using different technologies. Frequently, new systems were added to integrate the systems of two enterprises but the new systems were totally different from the existing systems. For example, using BPEL process manager to integrate a CRM system with a transportation management system. We see that each factor for diversity in the data center has some business or strategic value. At the same time, such diversity makes management of the data center more complex. To manage such data centers we need a special product like Oracle's Enterprise Manager Grid Control that can provide a unified and centralized management solution for the wide array of products. In any given data center, there are lots of repetitive operations that need to be executed on multiple servers (like applying security patches on all Oracle Databases). As data centers move away from high-end servers to a grid of inexpensive servers, the number of IT resources increases in the data center and so does the cost of executing repetitive operations on the grid. Enterprise Manager Grid Control provides solutions to reduce the cost of any grid by automating repetitive operations that can be simultaneously executed on multiple servers. Enterprise Manager Grid Control works as a force multiplier by providing support for executing the same operations on multiple servers at the cost of one operation. As organizations put more emphasis on business and IT alignment, that requires a view of IT resources overlaid with business processes and applications is required. Enterprise Manager Grid Control provides such a view and improves the visibility of IT and business processes in a given data center. By using Enterprise Manager Grid Control, administrators can see IT issues in the context of business processes and they can understand how business processes are affected by IT performance. In this article, we will get to know more about Oracle's Enterprise Manager Grid Control by covering the following aspects: Key features of Enterprise Manager Grid Control: Comprehensive view of data center Unmanned monitoring Historical data analysis Configuration management Managing multiple entities as one Service level management Scheduling Automating provisioning Information publishing Synthetic transaction Manage from anywhere Enterprise Manager Product family Range of products managed by Enterprise Manager: Range of products EM extensibility Enterprise Manager Grid Control architecture. 
Multi-tier architecture Major components High availability Summary of learning Key features of Enterprise Manager Grid Control Typical applications in today's world are built with multi-tiered architecture; to manage such applications a system administrator has to navigate through multiple management tools and consoles that come along with each product. Some of the tools have a browser interface, some have a thick client interface, or even a command line interface. Navigating through multiple management tools often involves doing some actions from a browser or running some scripts or launching a thick client from the command line. For example, to find bottlenecks in a J2EE application in the production environment, an administrator has to navigate through the management console for the HTTP server, the management console for the J2EE container, and the management console for the database. Enterprise Manager Grid Control is a systems management product for the monitoring and management of all of the products in the data center. For the scenario explained above, Enterprise Manager provides a common management interface to manage an HTTP server, J2EE server and database. Enterprise Manager provides this unified solution for all products in a data center. In addition to basic monitoring, Enterprise Manager provides a unified interface for many other administration tasks like patching, configuration compliance, backup-recovery, and so on. Some key features of Enterprise Manager are explained here. Comprehensive view of the data center Enterprise Manager provides a comprehensive view of the data center, where an administrator can see all of the applications, servers, databases, network devices, storage devices, and so on, along with performance and configuration data. As the number of all such resources is very high, this Enterprise Manager highlights the resources that need immediate attention or that may need attention in near future. For example, a critical security patch is available that needs to be applied on some Fusion Middleware servers, or a server that has 90% CPU utilization. The following figure shows one such view of a data center, where users can see all entities that are monitored, that are up, that are down, that have performance alerts, that have configuration violations and so on. The user can drill down to fine-grained views from this top-level view. The data in the top-level view and the fine-grained drill-down view can be broadly summarized in the following categories: Performance data Data that shows how an IT resource is performing, that includes the current status, overall availability over a period of time, and other performance indicators that are specific to the resource like the average response time for a J2EE server. Any violation of acceptable performance thresholds is highlighted in this view. Configuration data Configuration data is the configuration parameters or, configuration files captured from an IT resource. Besides the current configuration, changes in configuration are also tracked and available from Enterprise Manager. Any violation of configuration conformance is also available. For example, if a data center policy mandates that only port 80 should be open on all servers, Enterprise Manager captures any violation of that policy. 
Status of scheduled operations

In any data center there are scheduled operations. These could be system administration tasks, such as taking a backup of a database server, or batch processes that move data across systems, for example, moving orders from fulfillment to shipping. Enterprise Manager provides a consolidated view of the status of all such scheduled operations.

Inventory

Enterprise Manager provides a listing of all hardware and software resources with details like version numbers. All of these resources are categorized into different buckets; for example, Oracle Application Server, WebLogic Application Server, and WebSphere Application Server all fall into the middleware bucket. This categorization helps the user find resources of the same or similar type. Enterprise Manager also captures the finer details of software resources, like the patches applied. The following figure shows one such view, where the user can see all middleware entities like Oracle WebLogic Server, IBM WebSphere Server, Oracle Application Server, and so on.

Unmanned monitoring

Enterprise Manager monitors IT resources around the clock and gathers all performance indicators at a fixed interval. Whenever a performance indicator goes beyond the defined acceptable limit, Enterprise Manager records that occurrence. For example, if the acceptable limit of CPU utilization for a server is 70%, then whenever CPU utilization of the server goes above 70%, that occurrence is recorded. Enterprise Manager can also send notification of any such occurrence through common notification mechanisms like email, pager, SNMP trap, and so on.

Historical data analysis

All of the performance indicators captured by Enterprise Manager are saved in the repository. Enterprise Manager provides useful views of this data with which the system administrator can analyze behavior over a period of time. Besides the fine-grained data that is collected at every fixed interval, it also provides coarser views by rolling up the data every hour and every 24 hours.

Configuration management

Enterprise Manager gathers configuration data for IT resources at regular intervals and checks for any configuration compliance violation. Any such violation is captured and can be sent out as a notification. Enterprise Manager comes with many out-of-the-box configuration compliance rules that represent best practices; in addition, system administrators can configure their own rules. All of the configuration data is also saved in the Enterprise Manager repository. Using this data, the system administrator can compare the configuration of two similar IT resources, or compare the configuration of the same IT resource at two different points in time. The system administrator can also see the configuration change history.

Managing multiple entities as one

Most recent applications are built with a multi-tiered architecture, and each tier may run on different IT resources. For example, an order booking application can have all of its presentation and business logic running on a J2EE server, all business data persisted in a database, all authentication and authorization performed through an LDAP server, and all of the traffic to the application routed through an HTTP server. To monitor such applications, all of the underlying resources need to be monitored.
Enterprise Manager provides support for grouping such related IT resources. Using this support, the system administrator can monitor all related resources as one entity, and all performance indicators for the related entities can be watched from one interface.

Service level management

Enterprise Manager provides the necessary constructs and interfaces for managing service level agreements that are based on the performance of IT resources. Using these constructs, the user can define indicators to measure service levels, as well as the expected service levels. For example, a service representing a web application can have the average JSP response time as a service indicator, and the expected service level for this service might be to keep that indicator below three seconds for 90% of the time during business hours. Enterprise Manager keeps track of all such indicators and violations in the context of a service, and at any time the user can see the status of such service level agreements over a defined time period.


Displaying Posts and Pages Using the WordPress Loop

Packt
28 Jun 2010
12 min read
(For more resources on WordPress, see here.)

The Loop is the basic building block of WordPress template files. You'll use The Loop when displaying posts and pages, whether you're showing multiple items or a single one. Inside The Loop you use WordPress template tags to render information in whatever manner your design requires. WordPress provides the data required for a default Loop on every single page load. In addition, you're able to create your own custom Loops that display the post and page information you need. This power allows you to create advanced designs that require a variety of information to be displayed. This article covers both basic and advanced Loop usage, and you'll see exactly how to use this most basic WordPress structure.

Creating a basic Loop

The Loop nearly always takes the same basic structure. In this recipe, you'll become acquainted with this structure, find out how The Loop works, and get up and running in no time.

How to do it...

First, open the file in which you wish to iterate through the available posts. In general, you use The Loop in every template file that is designed to show posts; some examples include index.php, category.php, single.php, and page.php. Place your cursor where you want The Loop to appear, and then insert the following code:

<?php
if( have_posts() ) {
  while( have_posts() ) {
    the_post();
    ?>
    <h2><?php the_title(); ?></h2>
    <?php
  }
}
?>

Using the WordPress theme test data with the above Loop construct, you end up with something that looks similar to the example shown in the following screenshot. Depending on your theme's styles, this output could obviously look very different. However, the important thing to note is that you've used The Loop to iterate over available data from the system and then display pieces of that data to the user in the way that you want to. From here, you can use a wide variety of template tags to display different information depending on the specific requirements of your theme.

How it works...

A deep understanding of The Loop is paramount to becoming a great WordPress designer and developer, so you should understand each of the items in the above code snippet fairly well. First, recognize that this is just a standard while loop with a surrounding if conditional. There are some special WordPress functions used in these two constructs, but if you've done any PHP programming at all, you should be intimately familiar with the syntax here. If you haven't programmed in PHP before, you might want to check out the syntax rules for if and while constructs at http://php.net/if and http://php.net/while, respectively.

The next thing to understand about this generic Loop is that it depends directly on the global $wp_query object. $wp_query is created when the request is parsed, request variables are found, and WordPress figures out the posts that should be displayed for the URL that a visitor has arrived from. $wp_query is an instance of the WP_Query class, and the have_posts and the_post functions delegate to methods on that object. The $wp_query object holds information about the posts to be displayed and the type of page being displayed (normal listing, category archive, date archive, and so on). When have_posts is called in the if conditional above, the $wp_query object determines whether any posts matched the request that was made, and if so, whether there are any posts that haven't been iterated over.
If there are posts to display, a while construct is used that again checks the value of have_posts. During each iteration of the while loop, the the_post function is called. the_post sets an index on $wp_query that indicates which posts have been iterated over. It also sets up several global variables, most notably $post. Inside The Loop, the the_title function uses the global $post variable that was set up in the_post to produce the appropriate output based on the currently active post item. This is basically the way that all template tags work. If you're interested in further information on how the WP_Query class works, read the documentation about it in the WordPress Codex at http://codex.wordpress.org/Function_Reference/WP_Query. You can find more information about The Loop at http://codex.wordpress.org/The_Loop.

Displaying ads after every third post

If you're looking to display ads on your site, one of the best places to do it is mixed in with your main content. This causes visitors to view your ads while they're engaged with your work, often resulting in higher click-through rates and better paydays for you.

How to do it...

First, open the template in which you wish to display advertisements while iterating over the available posts. This will most likely be a listing template file like index.php or category.php. Decide on the number of posts that you wish to display between advertisements. Place your cursor where you want your loop to appear, and then insert the following code:

<?php
if( have_posts() ) {
  $ad_counter = 0;
  $after_every = 3;
  while( have_posts() ) {
    $ad_counter++;
    the_post();
    ?>
    <h2><?php the_title(); ?></h2>
    <?php
    // Display ads
    $ad_counter = $ad_counter % $after_every;
    if( 0 == $ad_counter ) {
      echo '<h2 style="color:red;">Advertisement</h2>';
    }
  }
}
?>

If you've done everything correctly, and are using the WordPress theme test data, you should see something similar to the example shown in the following screenshot. Obviously, the power here comes when you mix in paying ads or images that link to products that you're promoting. Instead of a simple heading element for the Advertisement text, you could dynamically insert JavaScript or Flash elements that pull in advertisements for you.

How it works...

As with the basic Loop, this code snippet iterates over all available posts. In this recipe, however, a counter variable is declared that counts the number of posts that have been iterated over. Every time a post is about to be displayed, the counter is incremented to track that another post has been rendered. After every third post, the advertisement code is displayed because the value of the $ad_counter variable is equal to 0. It is very important to put the conditional check and display code after the post has been displayed. Also, notice that the $ad_counter variable will never be greater than 3 because the modulus operator (%) is applied every time through The Loop. Finally, if you wish to change the frequency of the ad display, simply modify the $after_every variable from 3 to however many posts you want to display between ads.

Removing posts in a particular category

Sometimes you'll want to make sure that posts from a certain category never implicitly show up in the Loops that you're displaying in your template. The category could be a special one that you use to denote portfolio pieces, photo posts, or whatever else you wish to remove from regular Loops.

How to do it...
First, you have to decide which category you want to exclude from your Loops. Note the name of the category, and then open or create your theme's functions.php file. Your functions.php file resides inside your theme's directory and may contain some other code. Inside functions.php, insert the following code:

add_action('pre_get_posts', 'remove_cat_from_loops');
function remove_cat_from_loops( $query ) {
  if( !$query->get('suppress_filters') ) {
    $cat_id = get_cat_ID('Category Name');
    $excluded_cats = $query->get('category__not_in');
    if( is_array($excluded_cats) ) {
      $excluded_cats[] = $cat_id;
    } else {
      $excluded_cats = array($cat_id);
    }
    $query->set('category__not_in', $excluded_cats);
  }
  return $query;
}

How it works...

In the above code snippet, you are excluding the category with the name Category Name. To exclude a different category, change the Category Name string to the name of the category you wish to remove from Loops. You are filtering the WP_Query object that drives every Loop. Before any posts are fetched from the database, you dynamically change the value of the category__not_in variable in the WP_Query object. You append an additional category ID to the existing array of excluded category IDs to ensure that you're not undoing the work of some other developer. Alternatively, if the category__not_in variable is not an array, you assign it an array with a single item. Every category ID in the category__not_in array will be excluded from The Loop, because when the WP_Query object eventually makes a request to the database, it structures the query such that no posts contained in any of the categories identified in the category__not_in variable are fetched.

Please note that the denoted category will be excluded by default from all Loops that you create in your theme. If you want to display posts from the category that you've marked for exclusion, then you need to set the suppress_filters parameter to true when querying for posts, as follows:

query_posts( array( 'cat' => get_cat_ID('Category Name'), 'suppress_filters' => true ) );

Removing posts with a particular tag

Similar to categories, it can be desirable to remove posts with a certain tag from The Loop. You may wish to do this if you are tagging certain posts as asides, or if you are saving posts that contain some text that needs to be displayed in a special context elsewhere on your site.

How to do it...

First, you have to decide which tag you want to exclude from your Loops. Note the name of the tag, and then open or create your theme's functions.php file. Inside functions.php, insert the following code:

add_action('pre_get_posts', 'remove_tag_from_loops');
function remove_tag_from_loops( $query ) {
  if( !$query->get('suppress_filters') ) {
    $tag_id = get_term_by('name', 'tag1', 'post_tag')->term_id;
    $excluded_tags = $query->get('tag__not_in');
    if( is_array( $excluded_tags ) ) {
      $excluded_tags[] = $tag_id;
    } else {
      $excluded_tags = array($tag_id);
    }
    $query->set('tag__not_in', $excluded_tags);
  }
  return $query;
}

How it works...

In the above code snippet, you are excluding the tag named tag1. To exclude a different tag, change the string tag1 to the name of the tag that you wish to remove from all Loops. When deciding which tags to exclude, WordPress looks at a query parameter named tag__not_in, which is an array. In the above code snippet, the function appends the ID of the tag that should be excluded directly to the tag__not_in array.
Alternatively, if tag__not_in isn't already initialized as an array, it is assigned an array with a single item, consisting of the ID of the tag that you wish to exclude. After that, all posts with that tag will be excluded from WordPress Loops. Please note that the chosen tag will be excluded, by default, from all Loops that you create in your theme. If you want to display posts from the tag that you've marked for exclusion, then you need to set the suppress_filters parameter to true when querying for posts, as follows:

query_posts( array( 'tag' => get_term_by('name', 'tag1', 'post_tag')->term_id, 'suppress_filters' => true ) );

Highlighting sticky posts

Sticky posts are a feature added in version 2.7 of WordPress and can be used for a variety of purposes. The most frequent use is to mark posts that should be "featured" for an extended period of time. These posts often contain important information or highlight things (like a product announcement) that the blog author wants to display in a prominent position for a long period of time.

How to do it...

First, place your cursor inside a Loop where you're displaying posts and want to single out your sticky content. Inside The Loop, after a call to the_post, insert the following code:

<?php
if( is_sticky() ) {
  ?>
  <div class="sticky-announcer">
    <p>This post is sticky.</p>
  </div>
  <?php
}
?>

Create a sticky post on your test blog and take a look at your site's front page. You should see text appended to the sticky post, and the post should be moved to the top of The Loop. You can see this in the following screenshot:

How it works...

The is_sticky function checks the currently active post to see if it is a sticky post. It does this by examining the value retrieved by calling get_option('sticky_posts'), which is an array, and trying to find the active post's ID in that array. In this case, if the post is sticky then the sticky-announcer div is output with a message. However, there is no limit to what you can do once you've determined whether a post is sticky. Some ideas include:

- Displaying a special icon for sticky posts
- Changing the background color of sticky posts
- Adding content dynamically to sticky posts
- Displaying post content differently for sticky posts
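To make the first idea concrete, here is a minimal sketch (not taken from the article) of how the announcement block could be replaced with an icon. The images/pin.png path and the alt text are hypothetical placeholders for whatever your theme actually ships with:

```php
<?php if( is_sticky() ) { ?>
  <div class="sticky-announcer">
    <!-- Hypothetical icon file bundled with the theme -->
    <img src="<?php bloginfo('template_directory'); ?>/images/pin.png"
         alt="Featured post" />
  </div>
<?php } ?>
```

Pairing this with a CSS rule on the sticky-announcer class (for example, a tinted background) would cover the second idea as well.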


JasperReports 3.6: Creating a Report from Model Beans of Java Applications

Packt
26 Jun 2010
3 min read
(For more resources on JasperReports, see here.) Getting ready You need a Java JAR file that contains class files for the JavaBeans required for this recipe. A custInvoices.jar file is contained in the source code (chap4). Unzip the source code file for this article and copy the Task5 folder from the unzipped source code to a location of your choice. How to do it... Let's start using Java objects as data storage units. Open the ModelBeansReport.jrxml file from the Task5 folder of the source code for this article (chapt 4). The Designer tab of iReport shows a report containing data in the Title, Column Header, Customer Group Header1 and Detail 1 sections, as shown in the following screenshot: If you have not made any database connection so far in your iReport installation, you will see an Empty datasource shown selected in a drop-down list just below the main menu. Click on the Report Datasources icon, shown encircled to the right of the drop-down list in the following screenshot: A new window named Connections / Datasources will open, as shown next. This window lists an Empty data source as well as the datasources you have made so far. Click the New button at the top-right of the Connections / Datasources window. This will open a new Datasource selection window, as shown in the following screenshot: Select JavaBeans set datasource from the datasource types, as shown next. Click the Next button. A new window named JavaBeans set datasource will open, as shown in the following screenshot: Enter CustomerInvoicesJavaBeans as the name of your new connection in the text box beside the Name field, as shown in the following screenshot: Enter com.CustomerInvoicesFactory as the name of the factory class in the text box beside the Factory class field, as shown in the following screenshot: This com.CustomerInvoicesFactory class provides iReport with access to JavaBeans that contain your data. Enter getBeanCollection as the name of the static method in the text box beside The static method... field, as shown in the following screenshot: Leave the rest of the fields at their default values. Click the Test button to test your new connection to the JavaBeans datasource. You will see an Exception message dialog. This exception message occurs because iReport can't find your factory class. Dismiss the message box by clicking OK. Click the Save button at the bottom of the JavaBeans set datasource window and close the Connections / Datasources window as well.
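The factory class itself is not reproduced in this excerpt; it lives in custInvoices.jar. Based on the class name and static method configured above, it presumably looks roughly like the following sketch. The bean and its properties (customerName and so on) are assumptions for illustration only:

```java
package com;

import java.util.ArrayList;
import java.util.Collection;

public class CustomerInvoicesFactory {

    // iReport calls this static method (the one named in the datasource
    // dialog) to obtain the collection of beans that backs the report.
    public static Collection getBeanCollection() {
        Collection invoices = new ArrayList();
        invoices.add(new CustomerInvoice("Packt", "June 2010", 250.0));
        invoices.add(new CustomerInvoice("Acme Corp", "June 2010", 120.0));
        return invoices;
    }
}

// Hypothetical bean; the real one ships inside custInvoices.jar
class CustomerInvoice {
    private final String customerName;
    private final String invoicePeriod;
    private final double invoiceValue;

    public CustomerInvoice(String customerName, String invoicePeriod, double invoiceValue) {
        this.customerName = customerName;
        this.invoicePeriod = invoicePeriod;
        this.invoiceValue = invoiceValue;
    }

    public String getCustomerName() { return customerName; }
    public String getInvoicePeriod() { return invoicePeriod; }
    public double getInvoiceValue()  { return invoiceValue; }
}
```

The exception shown by the Test button goes away once the JAR containing these classes has been added to iReport's classpath (typically under Tools | Options | Classpath), which is what the rest of the recipe presumably goes on to do.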


JasperReports 3.6: Using Multiple Relational Databases to Generate a Report

Packt
26 Jun 2010
4 min read
(For more resources on JasperReports, see here.) Refer to the installPostgreSQL.txt file included in the source code download (chap4) to install and run PostgreSQL, which should be up and running before you proceed. The source code also includes two files named copySampleDataIntoPGS.txt and copySamplePaymentStatusDataIntoPGS.txt. The copySampleDataIntoPGS.txt file will help you to create a database named jasperdb5 and create a table named CustomerInvoices with five columns (InvoiceID, CustomerName, InvoicePeriod, ProductName, and InvoiceValue) and copy sample data for this article. Similarly, the copySamplePaymentStatusDataIntoPGS.txt file will help you to create a database named jasperdb5a and create a table named PaymentDetails with two columns (InvoiceID and PaymentStatus) and copy sample data. You will be using two JRXML files MultiDBReport.jrxml and PaymentStatusSubreport.jrxml in this recipe. You will find these files in the Task4 folder of the source code download for this chapter. The MultiDBReport.jrxml file is the master report, which uses the other file as a subreport. The master report has to refer to its subreport using a complete path (you cannot use relative paths). This means you have to copy the two JRXML files to the c:JasperReportsCookBookSamples folder on your PC. I have hardcoded this complete path in the master report (MultiDBReport.jrxml). How to do it... You are about to discover tricks for using multiple databases in a single report in the following simple steps: Open the PaymentStatusSubreport.jrxml file from the c:JasperReportsCookBookSamples folder. The Designer tab of iReport shows an empty report, as shown in the following screenshot: Right-click on the Parameters node in the Report Inspector window on the left of the Designer tab, as shown next. Choose the Add Parameter option from the pop-up menu. The Parameters node will expand to show the newly added parameter named parameter1 at the end of the parameters list. Click on parameter1, its properties will appear in the Properties window below the palette of components on the right of your iReport main window. Click on the Name property of the parameter and type InvoiceID as its value. The name of the parameter1 parameter will change to InvoiceID. Click on the Parameter Class property and select java.lang.Integer as its value. Click on the Default Value Expression property and enter 0 as its value, as shown in the following screenshot. Leave the rest of the parameter properties at their default values. Click the Report query button on the right of the Preview tab; a Report query dialog will appear, as shown in the following screenshot: Type SELECT * FROM paymentdetails WHERE invoiceid = $P{InvoiceID} in the Query editor. The fields of the paymentdetails table will be shown in the lower-half of the Report query dialog. Click the OK button, as shown in the following screenshot: Double-click the Fields node in the Report Inspector window. You will see that it contains invoiceid and paymentstatus fields, as shown below. Drag-and-drop the paymentstatus field from the Fields node into the top-left corner of the Detail 1 section, as shown in the following screenshot: Select PaymentDetails in the datasources drop-down list, as shown in the left image given below. Then switch to the Preview tab; a Parameter prompt dialog will appear, which will ask you for the invoice ID, as shown in the right image given below. Enter 1001 as the value of the InvoiceID parameter. 
You will see a report containing a single record showing the payment status of the invoice having the ID 1001. Switch back to the Designer tab. Click anywhere in the Page Header section; its properties will appear in the Properties window below the palette. Select the Band height property and set 0 as its value, as shown in the following screenshot:
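The copy scripts themselves are not included in this excerpt, but based on the databases and columns described above, the two tables presumably look something like this sketch (the exact column types are assumptions):

```sql
-- In database jasperdb5
CREATE TABLE CustomerInvoices (
    InvoiceID     integer PRIMARY KEY,
    CustomerName  varchar(64),
    InvoicePeriod varchar(32),
    ProductName   varchar(64),
    InvoiceValue  numeric(10,2)
);

-- In database jasperdb5a
CREATE TABLE PaymentDetails (
    InvoiceID     integer PRIMARY KEY,
    PaymentStatus varchar(32)
);
```

Because PostgreSQL folds unquoted identifiers to lowercase, the subreport query above can refer to the second table simply as paymentdetails.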

JasperReports 3.6: Creating a Report from Relational Data

Packt
24 Jun 2010
2 min read
(For more resources on JasperReports, see here.) Getting ready You will need PostgreSQL to follow this recipe. Refer to the installPostgreSQL.txt file included in the source code download (chap4), which shows how you will install and run PostgreSQL. Note that your installation of PostgreSQL should be up and running before you proceed. The source code for this article also includes a file named CreateDbIntoPGS.txt, which will help you to create a database named jasperdb5. How to do it... The following simple steps will show you how to connect iReport to a database: Run iReport; it will open with a Welcome Window, as shown in the following screenshot: If you have not made any database connection so far in your iReport installation, you will see an Empty datasource shown selected in a drop-down list just below the main menu. Click on the Report Datasources icon shown encircled to the right of the drop-down list, as shown in the following screenshot: A new window named Connections / Datasources will open, as shown in the following screenshot. This window lists an Empty datasource as well as the datasources you have made so far. Click the New button shown at the top right of the Connections / Datasources window. This will open a new Datasource selection window, as shown in the following screenshot: You will see Database JDBC connection is selected by default. Click the Next button at the bottom of the Datasource window. A new window named Database JDBC connection will open, as shown in the following screenshot: Enter PG as the name for your new database connection in the input box beside the Name field. PG is just a name for the database connection you are creating. You can give any name and create any number of database connections. Click on the JDBC Driver drop-down list; it will drop-down to show a list of available JDBC drivers. As you are connecting to the PostgreSQL database, select the Postgre SQL (org.postgresql.Driver) option from the drop-down list, as shown in the following screenshot:
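The excerpt stops at the driver selection, but the remaining fields of the Database JDBC connection window follow the standard PostgreSQL JDBC URL format, jdbc:postgresql://host:port/database. A typical set of values for a default local installation might look like the sketch below; the host, port, and account are assumptions, not values given by the book:

```
Name:        PG
JDBC Driver: PostgreSQL (org.postgresql.Driver)
JDBC URL:    jdbc:postgresql://localhost:5432/jasperdb5
Username:    postgres
Password:    (the password chosen during PostgreSQL installation)
```

With values like these, clicking Test should report a successful connection, after which the connection can be saved just like the other datasource types.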


JasperReports 3.6: Creating a Report from XML Data using XPath

Packt
24 Jun 2010
3 min read
XML is a popular data source used in many applications. JasperReports allows you to generate reports directly from XML data. This first section of the article teaches you how to connect iReport to an XML file stored on your PC. In the second section of this article by Bilal Siddiqui, author of JasperReports 3.6 Development Cookbook, you will create a report from data stored in an XML file. In order to process an XML file and extract information from it, JasperReports uses XPath, which is a popular query language to filter XML data. So you will also learn how to use XPath expressions for report generation. (For more resources on JasperReports, see here.) Connecting to an XML datasource XML is a popular data source used in many applications. JasperReports allows you to generate reports directly from XML data. This section teaches you how to connect iReport to an XML file stored on your PC. Getting ready You need an XML file that contains report data. The EventsData.xml file is contained in the source code download (chap4). Unzip the source code file for this article (chap:4) and copy the Task2 folder from the unzipped source code to a location of your choice. How to do it... Run iReport; it will open showing a Welcome Window, as shown in the following screenshot: If you have not made any database connection so far in your iReport installation, you will see an Empty datasource shown selected in a drop-down list just below the main menu. Click on the Report Datasources icon shown encircled to the right of the drop-down list in the screen-shot shown below: A new window named Connections / Datasources will open, as shown below. This window lists an Empty datasource as well as the data sources you have made so far. Click the New button at the top-right of the Connections / Datasources window. This will open a new Datasource selection window, as shown in the following screenshot: Select XML file datasource from the datasources type list. Click Next. A new window named XML file datasource will open, as in the following screenshot: Enter XMLDatasource as the name for your new connection for the XML datasource in the text box beside the Name text field, as shown in the following screenshot: Click the Browse button beside the XML file text box to browse to the EventsData.xml file located in the Task2 folder that you copied in the Getting ready section. Click the Open button, as shown in the following screenshot: Select the Use the report XPath expression when filling the report option in the XML file datasource window, as shown in the following screenshot: Leave the other fields at their default values. Click the Test button to test the new XML datasource connection. You will see a Connection test successful message dialog. Click the Save button to save the newly created connection. A Connections / Datasources window will open showing your new XML datasource connection set as the default connection in the connections list, as shown highlighted in the following screenshot:
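The structure of EventsData.xml is not shown in this excerpt, so the following file is purely hypothetical, but it illustrates the kind of document and XPath expression involved:

```xml
<!-- Hypothetical shape; the real EventsData.xml ships with the book's code bundle -->
<events>
  <event>
    <name>JavaOne</name>
    <city>San Francisco</city>
    <date>2010-09-19</date>
  </event>
  <event>
    <name>OSCON</name>
    <city>Portland</city>
    <date>2010-07-19</date>
  </event>
</events>
```

With a file shaped like this, the report's record-selection expression in the XPath query language would be /events/event (one record per event element), and each report field would then be mapped with a relative expression such as name or city.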

Debugging Java Programs using JDB

Packt
23 Jun 2010
6 min read
In this article by Nataraju Neeluru, we will learn how to debug a Java program using a simple command-line debugging tool called JDB. JDB is one of several debuggers available for debugging Java programs, and it comes as part of Sun's JDK. A lot of people use JDB for one main reason: it is very simple to use, lightweight, and, being a command-line tool, very fast. Those who are familiar with debugging C programs with gdb will be inclined to use JDB for debugging Java programs. We will cover most of the commonly used JDB commands needed for debugging Java programs. Not much is assumed to read this article, other than some familiarity with Java programming and general debugging concepts such as breakpoints, stepping through code, and examining variables. Beginners may learn quite a few things here, and experts may revise their knowledge.

(For more resources on Java, see here.)

Introduction

JDB is a debugging tool that comes along with Sun's JDK. The executable exists in JAVA_HOME/bin as 'jdb' on Linux and 'jdb.exe' on Windows (where JAVA_HOME is the root directory of the JDK installation).

A few notes about the tools and notation used in this article:

- We will use 'jdb' on Linux for illustration throughout this article, though the JDB command set is more or less the same on all platforms.
- All the tools (like jdb, java) used in this article are from JDK 5, though most of the material presented here holds true and works in other versions also.
- '$' is the command prompt on the Linux machine on which the illustration is carried out.
- We will use 'JDB' to denote the tool in general, and 'jdb' to denote the particular executable in the JDK on Linux.
- JDB commands are explained in a particular sequence. If that sequence is changed, then the output obtained may differ from what is shown in this article.

Throughout this article, we will use the following simple Java program for debugging:

public class A {
    private int x;
    private int y;

    public A(int a, int b) {
        x = a;
        y = b;
    }

    public static void main(String[] args) {
        System.out.println("Hi, I'm main.. and I'm going to call f1");
        f1();
        f2(3, 4);
        f3(4, 5);
        f4();
        f5();
    }

    public static void f1() {
        System.out.println("I'm f1...");
        System.out.println("I'm still f1...");
        System.out.println("I'm still f1...");
    }

    public static int f2(int a, int b) {
        return a + b;
    }

    public static A f3(int a, int b) {
        A obj = new A(a, b);
        obj.reset();
        return obj;
    }

    public static void f4() {
        System.out.println("I'm f4 ");
    }

    public static void f5() {
        A a = new A(5, 6);
        synchronized (a) {
            System.out.println("I'm f5, accessing a's fields " + a.x + " " + a.y);
        }
    }

    private void reset() {
        x = 0;
        y = 0;
    }
}

Let us put this code in a file called A.java in the current working directory, compile it using 'javac -g A.java' (note the '-g' option, which makes the Java compiler generate some extra debugging information in the class file), and even run it once using 'java A' to see what the output is. Apparently, there is no bug in this program to debug, but we will see, using JDB, how the control flows through this program. Recall that this program, being a Java program, runs on a Java Virtual Machine (JVM). Before we actually debug the Java program, we need to see that a connection is established between JDB and the JVM on which the Java program is running. Depending on the way JDB connects to the JVM, there are a few ways in which we can use JDB.
No matter how the connection is established, once JDB is connected to the JVM, we can use the same set of commands for debugging. The JVM on which the Java program to be debugged is running is called the 'debuggee' here.

Establishing the connection between JDB and the JVM

In this section, we will see a few ways of establishing the connection between JDB and the JVM.

JDB launching the JVM: In this option, we do not see two separate things as the debugger (JDB) and the debuggee (JVM); rather, we just invoke JDB by giving the initial class (that is, the class that has the main() method) as an argument, and internally JDB 'launches' the JVM.

$ jdb A
Initializing jdb ...

At this point, the JVM is not yet started. We need to give the 'run' command at the JDB prompt for the JVM to be started.

JDB connecting to a running JVM: In this option, first start the JVM by using a command of the form:

$ java -Xdebug -Xrunjdwp:transport=dt_socket,server=y,address=6000 A
Listening for transport dt_socket at address: 6000

It says that the JVM is listening at port 6000 for a connection. Now, start JDB (in another terminal) as:

$ jdb -attach 6000
Set uncaught java.lang.Throwable
Set deferred uncaught java.lang.Throwable
Initializing jdb ...
> VM Started: No frames on the current call stack
main[1]

At this point, JDB is connected to the JVM. It is possible to do remote debugging with JDB. If the JVM is running on machine M1, and we want to run JDB on M2, then we can start JDB on M2 as:

$ jdb -attach M1:6000

JDB listening for a JVM to connect: In this option, JDB is started first, with a command of the form:

$ jdb -listen 6000
Listening at address: adc2180852:6000

This makes JDB listen at port 6000 for a connection from the JVM. Now, start the JVM (from another terminal) as:

$ java -Xdebug -Xrunjdwp:transport=dt_socket,server=n,address=6000 A

Once the above command is run, we see the following in the JDB terminal:

Set uncaught java.lang.Throwable
Set deferred uncaught java.lang.Throwable
Initializing jdb ...
> VM Started: No frames on the current call stack
main[1]

At this point, JDB has accepted the connection from the JVM. Here also, we can make the JVM running on machine M1 connect to a remote JDB running on machine M2, by starting the JVM as:

$ java -Xdebug -Xrunjdwp:transport=dt_socket,server=n,address=M2:6000 A
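To give a feel for what happens after the connection is made, here is a minimal sketch of a session debugging the A program above. Only the commands are shown, with a short note beside each; the prompts and output are omitted, and the choice of f2 as the breakpoint target is just an example.

[code]
stop in A.f2      (set a breakpoint on entry to the method f2)
run               (start the program; execution stops when the breakpoint is hit)
where             (print the call stack of the current thread)
locals            (list local variables; works because A.java was compiled with -g)
print a + b       (evaluate an expression in the current stack frame)
cont              (resume execution until the next breakpoint or program exit)
quit              (end the debugging session)
[/code]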

How to configure MSDTC and the firewall for the distributed WCF service

Packt
21 Jun 2010
4 min read
Understanding the distributed transaction support of a WCF service

As we have seen, distributed transaction support of a WCF service depends on the binding of the service, the operation contract attribute, the operation implementation behavior, and the client applications. The following table shows some possible combinations of WCF distributed transaction support:

Binding permits transaction flow | Client flows transaction | Service contract opts in transaction | Service operation requires transaction scope | Possible result
True          | Yes       | Allowed or Mandatory | True          | Service executes under the flowed-in transaction
True or False | No        | Allowed              | True          | Service creates and executes within a new transaction
True          | Yes or No | Allowed              | False         | Service executes without a transaction
True or False | No        | Mandatory            | True or False | SOAP exception
True          | Yes       | NotAllowed           | True or False | SOAP exception

Testing the distributed transaction support of the WCF service

Now that we have changed the service to support distributed transactions and let the client propagate the transaction to the service, we will test this. We will propagate a transaction from the client to the service, test the multiple-database support of the WCF service, and discuss the Distributed Transaction Coordinator and firewall settings for the distributed transaction support of the WCF service.

Configuring the Distributed Transaction Coordinator

In a subsequent section, we will call two services to update two databases on two different computers. As these two updates are wrapped within one distributed transaction, Microsoft Distributed Transaction Coordinator (MSDTC) will be activated to manage this distributed transaction. If MSDTC is not started or configured properly, the distributed transaction will not be successful. In this section, we will explain how to configure MSDTC on both machines.

You can follow these steps to configure MSDTC on your local and remote machines:

1. Open Component Services from Control Panel | Administrative Tools.
2. In the Component Services window, expand Component Services, then Computers, and then right-click on My Computer. Select Properties from the context menu.
3. On the My Computer Properties window, click on the MSDTC tab. If this machine is running Windows XP, click on the Security Configuration button. If this machine is running Windows 7, verify that Use local coordinator is checked and then close the My Computer Properties window.
4. Expand the Distributed Transaction Coordinator node under My Computer, right-click on Local DTC, select Properties from the context menu, and then, from the Local DTC Properties window, click on the Security tab. You should now see the Security Configuration for DTC on this machine. Set it as in the following screenshot.

Remember you have to make these changes on both your local and remote machines. You have to restart the MSDTC service after you have changed your MSDTC settings for the changes to take effect. Also, to simplify our example, we have chosen the No Authentication Required option. You should be aware that not requiring authentication is a serious security issue in production. For more information about WCF security, you can go to the MSDN WCF security website at this address: MSDN Library.

Configuring the firewall

Even though the Distributed Transaction Coordinator has been enabled, the distributed transaction may still fail if the firewall is turned on and hasn't been set up properly for MSDTC. To set up the firewall for MSDTC, follow these steps:

1. Open the Windows Firewall window from the Control Panel.
2. If the firewall is not turned on, you can skip this section.
3. Go to the Allow a program or feature through Windows Firewall window (for Windows XP, you need to allow exceptions and go to the Exceptions tab on the Windows Firewall window).
4. Add Distributed Transaction Coordinator to the program list (windows\system32\msdtc.exe) if it is not already on the list.
5. Make sure the checkbox before this item is checked.

Again, you need to change your firewall setting on both your local and remote machines. Now the firewall will allow msdtc.exe through, so our next test won't fail due to firewall restrictions. You may have to restart IIS after you have changed your firewall settings. In some cases you may also have to stop and then restart your firewall for the changes to take effect.
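The steps above configure the exception through the GUI; if you would rather script it, something like the following should work on Windows 7 from an elevated command prompt. This is only a sketch, not taken from the article: the rule name "MSDTC" is arbitrary and the msdtc.exe path assumes a default Windows installation, so adjust both as needed.

[code]
REM Allow inbound connections for the MSDTC executable (rule name is arbitrary)
netsh advfirewall firewall add rule name="MSDTC" dir=in action=allow program="%windir%\system32\msdtc.exe" enable=yes

REM Restart the MSDTC service so configuration changes take effect
net stop msdtc
net start msdtc
[/code]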

Getting Started with Facebook Application Development using ColdFusion/Railo

Packt
19 Jun 2010
5 min read
There are other CFML Facebook articles on the internet, such as Ray Camden's tutorial with ColdFusion 8; however, Facebook continues to innovate and change, and a majority of those resources are out of date after Facebook's 2010 updates. Things such as "profile boxes" are passé, and now you have to work with "Application Tabs". In addition, I have found that there are some general concepts of how Facebook applications work that have not been covered well in other resources.

Why Facebook?

According to statistics, Facebook is the 3rd highest traffic site in the US right now (statistics for the rest of the world weren't readily available). The nature of Facebook is that people socialize and look at what other people are doing, which means that if your friends post that they are using certain applications or visiting certain sites, you know about it, and for most of us, that's a good enough reason to check it out. That's what's called grass-roots marketing, and it works. "The average U.S. Internet user spends more time on Facebook than on Google, Yahoo, YouTube, Microsoft, Wikipedia and Amazon combined." That should tell you something. There is a big market to tap into, which should answer the question of why Facebook. Even if you think Facebook isn't a useful tool for you, you can't argue with the numbers when it comes to reaching potential customers.

Why CFML with Facebook?

Hopefully your interest in ColdFusion and/or Railo answers this. Since CFML is such an easy-to-learn, powerful, and extensible programming language, it only makes sense that we should be able to build Facebook applications with it. There are always some cautions when making websites talk to each other. Using CFML with Facebook is no different; however, most of these issues have already been overcome by others, and you can easily zip through this by copying and pasting their work. The basic framework for my applications is the same, and you can use it as your jumping-off point to work on your own applications.

Understanding Data Flow

Facebook is rather unique in how it is structured, and understanding this structure is critical to being able to build applications properly. You will save yourself a lot of frustration by reviewing this section before you begin writing code.

On most websites or web applications, people type in a web address and connect directly to your web server, where your application handles the business logic, database interaction, and any other work, and then gives web content back to the requesting user. This is not the case with Facebook. With Facebook applications, users open up their web browsers to a Facebook web address (the "Canvas URL"), Facebook's servers make a "behind the scenes" request to your web server (the "Callback URL"), your application then responds to Facebook's request, and then Facebook does the final markup and sends the web page content back to the user's browser. If you followed that, you see that users always interact with Facebook, while Facebook's server is the one that talks to your application. You can also connect back to Facebook via their RESTful API to get information about users, friends, photos, posts, and more.

So here are two important concepts to understand:

- Your Facebook application code lives on your web server, separate from Facebook. You will get web requests from Facebook on behalf of Facebook users.
- Users should always be interacting with Facebook's web site; they should never go directly to your web server.

The Canvas URL is a Facebook address, which you will set up in the next section. The Callback URL is the root where you put your application files (*.cfc and *.cfm). It is also where you will put your CSS files, images, and anything else your application needs. The Callback URL can be a directory on any web hosting account, so there is no need to set up a separate web host for your Facebook application.

Setting up a new Facebook application

Generally speaking, setting up a new Facebook application is pretty easy. There are a few things that can trip you up, and I will highlight them. The first thing to do is log into your Facebook account and authorize the Facebook Developer application by going to this URL: http://apps.facebook.com/developer/

Once you have authorized this application, you will see a link to create a new application.

1. Create a new application and give it a name.
2. Fill in the description if you want; give it an icon and logo if you wish.
3. Click on the Canvas menu option.
4. Enter the canvas page URL (this becomes the URL on Facebook's site that you and your users will go to: apps.facebook.com/yourapp).
5. Enter the callback URL (the full URL to YOUR web server directory where your CFML code will reside).
6. Very important: set the Render method to "FBML" (which stands for Facebook Markup Language).
7. The other options you can leave at their default values.
8. When you are done, save your changes.

The application summary page will show you some important information, specifically the API Key and Application Secret, which you will need in your application later. Consider using Facebook's "sandbox" mode, which makes your application invisible to the world while you are developing it. Likewise, when you are done with your application, consider using Facebook's application directory to promote your application.

Build iPhone, Android and iPad Applications using jQTouch

Packt
18 Jun 2010
12 min read
jQuery Plugin Development Beginner's Guide
  • Build powerful, interactive plugins to implement jQuery in the best way possible
  • Utilize jQuery's plugin framework to create a wide range of useful jQuery plugins from scratch
  • Understand development patterns and best practices and move up the ladder to master plugin development
  • Discover the ins and outs of some of the most popular jQuery plugins in action
  • A Beginner's Guide packed with examples and step-by-step instructions to quickly get your hands dirty in developing high quality jQuery plugins

jQuery is a JavaScript framework that simplifies the JavaScript development life cycle for web applications. Its greatest strength comes from its ease of use and the huge number of plugins available. As a result, JavaScript developers have access to a large number of enterprise components, such as sortable tables and editable tables with Ajax, as well as web application components for animation and data manipulation. One plugin with very powerful effects is jQTouch; it can be used by any web application developer with a little jQuery experience to build applications for iPhone, iPad, and Android devices. For now, just to get a feel, you can point your internet-enabled iPad, iPhone, or Android device to http://www.afrovisiongroup.com/twigle and test the application. Other examples of applications that could be developed using jQTouch include Gmail for the iPad or Facebook Touch.

Getting Started

Before we start using jQTouch, I would love to put across a few facts about it. jQTouch is a plugin for jQuery, which means it only enhances jQuery to build smartphone applications that support swiping and all the other touch gestures. Before you begin development with jQTouch, I suggest you get comfortable with jQuery.

jQTouch applications are not developed like regular web applications, where an index page is loaded with links that lead to other pages and each page is loaded from the server every time a visitor clicks on a link. With jQTouch, all the pages are loaded once inside index.html, and each page is represented as a separate div element in the index page. For example, the following HTML snippet (<div id='page_name'>content</div>) represents a page in your jQTouch application, and a link to that page looks like this (<a href='#page_name'>link to page name</a>). You can have as many pages as you want, with all the pages having links to other pages inside the index.html file, but remember that all of this is stored in one single file, index.html. The link clicks and navigation actions are implemented using JavaScript built into jQTouch. You will get to understand this as we implement twigle.

Let's first get to know more about twigle. It is a twitter search application for smartphones loaded from the web. We will use jQTouch for client-side development, jQuery Ajax for the server-side communication, and PHP in the backend to get the search results from the Twitter Search API.

jQTouch comes with JavaScript files and CSS files with themes. These define the look and feel of the application. You won't have to bother about the design, as the plugin already comes with predefined styles and graphics which you can use as a base and extend further to create your own unique looks. There are two themes that come with the plugin: the apple theme and the jqt theme. Just like the name implies, the apple theme looks and feels like native iPhone OS apps. The plugin styles are predefined for the toolbar, rounded buttons, and so on.
You will discover this as we move on. jQTouch applications are basically developed in a single file, usually index.html. It contains the HTML code, the JavaScript code, and the styling. Everything in your application happens inside this file, which gets loaded into your smartphone once, like Gmail and the other Google applications. For example:

[code]
<html>
  <head>
  </head>
  <body>
    <div id='home'>
      <div class='toolbar'>
        <h1>Home Page</h1>
      </div>
      <div>
        this is the home page
      </div>
    </div>
  </body>
</html>
[/code]

The above HTML code should produce the following:

After installing and initializing the jQTouch plugin with the apple theme, you should have the following:

Notice how the <div class='toolbar'><h1>Home Page</h1></div> gets styled into the iPhone or iPad toolbar's look and feel. Now, on the whole, the page looks more or less like a native iPhone application.

Developing with jQTouch

To develop your iPhone OS or Android OS applications with jQTouch, you need the jQuery and jQTouch libraries, which you can download from http://www.jqtouch.com/. Next, get your favorite code editor (Dreamweaver, Notepad++, and so on) and we can get started. Remember, we are going to look at how to develop an application like twigle here. You can check out the demo of the application at http://www.afrovisiongroup.com/twigle. This is a twitter search application for smartphones loaded from the web. We will use jQTouch for client-side development, jQuery Ajax for the server-side communication, and PHP in the backend to get the search results from the Twitter Search API.

Let's get to work:

1. Create a folder on your local web server directory called twigle.
2. Download the jQTouch package and unzip it into the folder twigle; this will give you the following structure:
   twigle/demos (this folder contains the sample applications; you can look at the source to learn more about these)
   /extensions (this folder contains jQTouch extensions, which are like its own plugins)
   /jqtouch (this folder contains the JavaScript and CSS files needed for jQTouch to work)
   /themes (this folder contains the theme files, and you can create your own themes too)
   /license.txt
   /readme.txt
   /sample.htaccess
3. Now we create two files in the twigle folder: index.html and twigle.php. The index.html will hold our application views (pages represented as HTML div tags), and twigle.php will be our business-logic backend that connects the Twitter API to our index.html front end. JavaScript and Ajax communication sits between index.html and twigle.php to load twitter search results for any given search request.
Paste the following code into the index.html file:

[code]
<!doctype html>
<html>
  <head>
    <meta charset="UTF-8" />
  </head>
  <body>
    <div id="home">
      <div class="toolbar">
        <h1>TWIGLE</h1>
        <a href="#info" class="button leftButton flip">Info</a>
        <!-- <a href="#search_results" class="button add slideup">+</a> -->
      </div>
      <form id="search">
        <ul class="rounded">
          <li id="notice">Type your search term below and hit search twitter</li>
          <li>
            <input type="text" id="keyword" name="keyword" placeholder="type your search term here">
          </li>
        </ul>
        <a href="#" class="whiteButton submit">SEARCH TWITTER</a>
      </form>
    </div>
    <div id="info">
      <div class="toolbar">
        <a href="#home" class="button leftButton flip">back</a>
        <h1>TWIGLE BY mambenanje</h1>
      </div>
      <div>
        <ul class='rounded'>
          <li>mambenanje is CEO of AfroVisioN Group - www.afrovisiongroup.com<br />
          And TWIGLE is a tutorial he did for packtpub.com</li>
          <li>TWIGLE runs on iPhone and Android because its powered by jqtouch and it helps users search twitter from their internet connected handhelds</li>
        </ul>
      </div>
    </div>
    <div id="search_results">
      <div class="toolbar">
        <a href="#home" class="button leftButton flip">back</a>
        <h1 id="search_title">Search results</h1>
      </div>
      <div>
        <ul class="rounded" id="results">
        </ul>
      </div>
    </div>
  </body>
</html>
[/code]

That's the DOM structure for our application. Taking a close look at it, you will see three main div siblings of the <body> tag. These divs represent the pages our application will have, and only one of these divs appears at a time in a jQTouch application. Note the toolbar class that is used inside each of those divs to represent the app view's toolbar (title bar + menu) on every page. The <ul class='rounded'> is also needed to represent rounded list items, typical for iPhone applications. So, in summary, our application has three pages: home, info, and search_results. Let's explain the DOM for every page.

Home:

[code]
<div id="home">
  <div class="toolbar">
    <h1>TWIGLE</h1>
    <a href="#info" class="button leftButton flip">Info</a>
    <!-- <a href="#search_results" class="button add slideup">+</a> -->
  </div>
  <form id="search">
    <ul class="rounded">
      <li id="notice">Type your search term below and hit search twitter</li>
      <li>
        <input type="text" id="keyword" name="keyword" placeholder="type your search term here">
      </li>
    </ul>
    <a href="#" class="whiteButton submit">SEARCH TWITTER</a>
  </form>
</div>
[/code]

The home page has a toolbar that contains the TWIGLE heading, along with a jQTouch button that is left aligned and, when clicked, flips to the next page, which is Info. The other button, which leads to the search_results page, is commented out using HTML comments. It's there to show that you can add more buttons to the toolbar. Next is the form, which has the id search.
Notice that the form has no action or method; this is how jQTouch works with forms. The form submission is done via JavaScript, which will be explained later. The rest is the instruction text and the keyword input field. Look closely at the search twitter button. It's not a typical input button, but an anchor tag styled with jQTouch theme classes that tell jQTouch this is a white button. It is responsible for initiating the form submission. The home page is the most important page in this application, as it contains the form, and like every home page it is also the welcome page of the application.

The Info Page:

[code]
<div id="info">
  <div class="toolbar">
    <a href="#home" class="button leftButton flip">back</a>
    <h1>TWIGLE BY mambenanje</h1>
  </div>
  <div>
    <ul class='rounded'>
      <li>mambenanje is CEO of AfroVisioN Group - www.afrovisiongroup.com<br />
      And TWIGLE is a tutorial he did for packtpub.com</li>
      <li>TWIGLE runs on iPhone and Android because its powered by jqtouch and it helps users search twitter from their internet connected handhelds</li>
    </ul>
  </div>
</div>
[/code]

It's a tradition in software development to always have an about page, and iPhone/Android apps are no exception. The info page was created to give users of the twigle application an idea of how this application came about. Look closely at the toolbar. It contains a link that leads to the home page and is styled to appear like a button; it flips to the home page when clicked. The rest is just literature that is presented in rounded lists.

Getting Started with jQuery

Packt
18 Jun 2010
3 min read
(For more resources on jQuery, see here.)

jQuery - How it works

To understand how jQuery can ease web client (JavaScript-based) development, one has to understand two aspects of jQuery:

- Functionalities
- Modules

Understanding the functionalities/services provided by jQuery will tell you what jQuery provides, and understanding the modules that constitute jQuery will tell you how to access those services. Here are the details.

Functionalities

The functionalities provided by jQuery can be classified as follows:

- Selection
- Attributes handling
- Element manipulation
- Ajax
- Callbacks
- Event handling

Among the functionalities listed above, selection, element manipulation, and event handling make common tasks easy or even trivial to implement.

Selection

Using this functionality one can select one or multiple HTML elements. The raw JavaScript equivalent of the selection functionality is document.getElementById('<element id>') or document.getElementsByTagName('<tag name>').

Attributes handling

One of the most common tasks in JavaScript is changing the value of an attribute of a tag. The conventional way is to use getElementById to get the element and then use an index to get to the required attribute. jQuery eases this by using the selection and attributes handling functionality in conjunction.

Element handling

There are scenarios where the values of tags need to be modified. One such scenario is rewriting the text of a <p> tag based on a selection from a combo box. That is where the element handling functionality of jQuery comes in handy. Using element handling, or DOM scripting as it is popularly known, one can not only access a tag but also perform manipulations such as appending child tags to multiple occurrences of a specific tag without using a for loop.

Ajax

Ajax is the concept and implementation that brought the usefulness of JavaScript to the fore. However, it also brought complexity and the boilerplate code required to use Ajax to its full potential. The Ajax-related functionality of jQuery encapsulates the boilerplate code and lets one concentrate on the result of the Ajax call. The main point to keep in mind is that encapsulation of the setup code does not mean that one cannot access the Ajax-related events. jQuery takes care of that too, and one can register for the Ajax events and handle them.

Callbacks

There are many scenarios in web development where you want to initiate another task on the basis of the completion of one task. An example of such a scenario involves animation: if you want to execute a task after the completion of an animation, you will need a callback function. The core of jQuery is implemented in such a way that most of the API supports callbacks.

Event handling

One of the main aspects of JavaScript and its relationship with HTML is that the events triggered by form elements can be handled using JavaScript. However, when multiple elements and multiple events come into the picture, the code complexity becomes very hard to handle. The core of jQuery is geared towards handling events in such a way that complexity can be kept at manageable levels.

Now that we have discussed the main functionalities of jQuery, let us move on to the main modules of jQuery and how the functionalities map onto those modules.

Security and Disaster Recovery in PrestaShop 1.3

Packt
18 Jun 2010
6 min read
We will do everything possible to make sure our store is not the victim of a successful attack. Fortunately, the PrestaShop team takes security very seriously and issues updates and fixes as soon as possible after any problems are discovered. We just have to make sure we do everything we can and also implement the PrestaShop upgrades as soon as they are available. It is also vital that we always have a recent copy of our store, because one day it is probably inevitable that our shop will die on us. It might be a hacker, or maybe we will accidentally muck it up ourselves. With a recent backup, this type of event is a minor inconvenience; without one, it is an expensive catastrophe. So let's get on with it...

Types of security attacks

There are different types of security attacks. Here is a very brief explanation of some of the most common ones. Hopefully, this will make it clear why security is an ongoing and evolving issue and not something that can ever be 100 percent solved out of the box.

Common sense issues

These are often overlooked. Make sure your passwords are impossible to guess. Use number sequences that are memorable to you but unguessable and meaningless to everyone else. Combine number sequences with regular letters in a variety of upper and lower case. Don't share your passwords with anyone. This applies to anyone who has access to your shop or hosting account.

Brute force

This is when an attacker uses software to repeatedly attempt to gain access or discover a password by guessing. Clearly, the simplest defence against this is a secure password. A good password is one with upper and lower case characters, apparently random numbers, and words that are not names or in the dictionary. Does your administrator password stand up to these criteria?

SQL injection attack

A malicious person amends, deletes, or retrieves information from your database by cleverly manipulating the forms or database requests contained in the code of PrestaShop. By appending to legitimate PrestaShop database code, harm can be done or breaches of security can be achieved.

Cross-site scripting

Attackers add instructions to access code on another site. They do this by appending a URL pointing to malicious code to a PHP URL of a legitimate page on your site.

User error

This is straightforward. It is likely that while developing or amending your website, you will mess up some or perhaps all of your PrestaShop. I did it once while writing this article. I will give you the full details of my slightly embarrassing confession later.

So with so many ways that things can go wrong, we had better start looking at some solutions.

Employees and user security

If you plan to employ someone, or if you have a partner who is going to help in your new shop, it makes good sense to create a new user account so that they have their own login details. Even if it will be only you who needs to use the PrestaShop control panel, there is still a good argument for creating two or more accounts. Here is why. First we will consider a scenario, though a slightly exaggerated one:

Guns4u.com: Guns4u wants to offer articles about how to use its products. The management, probably correctly, believe that in-depth how-tos about all its products will boost sales and increase customer retention. The diverse nature of their products makes employing a single writer impossible. For example, an expert on small arms is rarely an expert on ground-to-air ordnance.
And a user of laser targeting equipment probably doesn't know the first thing about ship-based artillery. This is quite a problem. The management decides they need a way to allow a whole team of freelance writers to log in directly to the PrestaShop CMS. But bearing in mind the highly dubious backgrounds some of these writers will have, how can they be trusted in the PrestaShop control panel?

Users of Guns4u.com: Suppose you employ somebody to write articles for you. You don't really want them being able to play with product prices or payment modules. You would want to restrict them to the CMS area of the control panel. Similarly, your partner might be helping you wrap and pack your products. To avoid accidents you might like to restrict them to the Customers and Orders tabs.

Now consider this scenario: even you, after reading this article, can make a mistake. It is a really good idea to create at least one extra user account for yourself. I always make myself a wrapping and packing account. I use it all the time, and it is reassuring to know that I can't accidentally click anything that can cause a problem.

This type of user security is common in large organisations. On a company intranet, employees will almost always be restricted to the areas of the company system they need and nothing more. Below is how to create a new user account, and after that we will look at profiles and permissions to enforce the restrictions and permissions suitable to us. Okay, let's create a new user.

Time for action – creating users

As you have come to expect, this is really easy.

1. Click on the Employees tab and then click on the Add new link.
2. Enter the Last name, First name, and E-mail address of your new employee or user.
3. The status box enables you to allow or disallow access to the new employee. Unless you have a reason for creating an account and not letting them use it, select the check mark (Allow). If you have reason to want to stop your new employee or user accessing your control panel, simply come back here and click on the cross.
4. In the Profile drop-down box, choose Administrator. This will give the new user full access. We will investigate when this is a good idea and when you might like to change it, for example if you would like to add our freelance writer next.
5. Click the Save button to create the new user account.

Distributed transaction using WCF

Packt
17 Jun 2010
12 min read
(Read more interesting articles on WCF 4.0 here.)

Creating the DistNorthwind solution

In this article, we will create a new solution based on the LINQNorthwind solution. We will copy all of the source code from the LINQNorthwind directory to a new directory and then customize it to suit our needs. The steps here are very similar to the steps in the previous chapter, when we created the LINQNorthwind solution. Please refer to the previous chapter for diagrams.

Follow these steps to create the new solution:

1. Create a new directory named DistNorthwind under the existing C:\SOAwithWCFandLINQ\Projects directory.
2. Copy all of the files under the C:\SOAwithWCFandLINQ\Projects\LINQNorthwind directory to the C:\SOAwithWCFandLINQ\Projects\DistNorthwind directory.
3. Remove the folder LINQNorthwindClient. We will create a new client for this solution.
4. Change all the folder names under the new folder, DistNorthwind, from LINQNorthwindxxx to DistNorthwindxxx.
5. Change the solution files' names from LINQNorthwind.sln to DistNorthwind.sln, and also from LINQNorthwind.suo to DistNorthwind.suo.

Now we have the file structure ready for the new solution, but all the file contents and the solution structure are still for the old solution. Next we need to change them to work for the new solution. We will first change all the related WCF service files. Once we have the service up and running, we will create a new client to test this new service.

1. Start Visual Studio 2010 and open this solution: C:\SOAWithWCFandLINQ\Projects\DistNorthwind\DistNorthwind.sln. Click on the OK button to close the "projects were not loaded correctly" warning dialog.
2. From Solution Explorer, remove all five projects (they should all be unavailable).
3. Right-click on the solution item and select Add | Existing Projects… to add these four projects to the solution. Note that these are the projects under the DistNorthwind folder, not under the LINQNorthwind folder: LINQNorthwindEntities.csproj, LINQNorthwindDAL.csproj, LINQNorthwindLogic.csproj, and LINQNorthwindService.csproj.
4. In Solution Explorer, change all four projects' names from LINQNorthwindxxx to DistNorthwindxxx.
5. In Solution Explorer, right-click on each project, select Properties (or select menu Project | DistNorthwindxxx Properties), then change the Assembly name from LINQNorthwindxxx to DistNorthwindxxx, and change the Default namespace from MyWCFServices.LINQNorthwindxxx to MyWCFServices.DistNorthwindxxx.
6. Open the following files and change the word LINQNorthwind to DistNorthwind wherever it occurs: ProductEntity.cs, ProductDAO.cs, ProductLogic.cs, IProductService.cs, and ProductService.cs.
7. Open the file app.config in the DistNorthwindService project and change the word LINQNorthwind to DistNorthwind in this file.

The screenshot below shows the final structure of the new solution, DistNorthwind.

Now we have finished modifying the service projects. If you build the solution now, you should see no errors. You can set the service project as the startup project and run the program.

Hosting the WCF service in IIS

The WCF service is now hosted within WCF Service Host. We had to start the WCF Service Host before we ran our test client. Not only do you have to start the WCF Service Host, you also have to start the WCF Test Client and leave it open. This is not that nice.
In addition, we will add another service later in this article to test distributed transaction support with two databases, and it is not that easy to host two services with one WCF Service Host. So, in this article, we will first decouple our WCF service from Visual Studio and host it in IIS.

You can follow these steps to host this WCF service in IIS:

1. In Windows Explorer, go to the directory C:\SOAWithWCFandLINQ\Projects\DistNorthwind\DistNorthwindService.
2. Within this folder create a new text file, ProductService.svc, containing the following one line of code:

<%@ServiceHost Service="MyWCFServices.DistNorthwindService.ProductService"%>

3. Again within this folder, copy the file App.config to Web.config and remove the following lines from the new Web.config file:

<host>
  <baseAddresses>
    <add baseAddress="http://localhost:8080/Design_Time_Addresses/MyWCFServices/DistNorthwindService/ProductService/" />
  </baseAddresses>
</host>

4. Now open IIS Manager, add a new application, DistNorthwindService, and set its physical path to C:\SOAWithWCFandLINQ\Projects\DistNorthwind\DistNorthwindService. If you choose to use the default application pool, DefaultAppPool, make sure it is a .NET 4.0 application pool. If you are using Windows XP, you can create a new virtual directory, DistNorthwindService, set its local path to the above directory, and make sure its ASP.NET version is 4.0.
5. From Visual Studio, in Solution Explorer, right-click on the project item DistNorthwindService, select Properties, then click on the Build Events tab, and enter the following code in the Post-build event command line box:

copy .\*.* ..\

With this post-build event command line, whenever DistNorthwindService is rebuilt the service binary files will be copied to the C:\SOAWithWCFandLINQ\Projects\DistNorthwind\DistNorthwindService\bin directory, so that the service hosted in IIS will always be up to date.

6. From Visual Studio, in Solution Explorer, right-click on the project item DistNorthwindService and select Rebuild.

Now you have finished setting up the service to be hosted in IIS. Open Internet Explorer, go to the following address, and you should see the ProductService description in the browser:

http://localhost/DistNorthwindService/ProductService.svc

Testing the transaction behavior of the WCF service

Before explaining how to enhance this WCF service to support distributed transactions, we will first confirm that the existing WCF service doesn't support distributed transactions. In this article, we will test the following scenarios:

1. Create a WPF client to call the service twice in one method. The first service call should succeed and the second service call should fail. Verify that the update in the first service call has been committed to the database, which means that the WCF service does not support distributed transactions.
2. Wrap the two service calls in one TransactionScope and redo the test. Verify that the update in the first service call has still been committed to the database, which means the WCF service does not support distributed transactions even if both service calls are within one transaction scope.
3. Add support for a second database to the WCF service. Modify the client to update both databases in one method. The first update should succeed and the second update should fail. Verify that the first update has been committed to the database, which means the WCF service does not support distributed transactions with multiple databases.
Creating a client to call the WCF service sequentially

The first scenario to test is that, within one method of the client application, two service calls will be made and one of them will fail. We then verify whether the update in the successful service call has been committed to the database. If it has been, it will mean that the two service calls are not within a single atomic transaction, and will indicate that the WCF service doesn't support distributed transactions.

You can follow these steps to create a WPF client for this test case:

1. In Solution Explorer, right-click on the solution item and select Add | New Project… from the context menu.
2. Select Visual C# | WPF Application as the template.
3. Enter DistNorthwindWPF as the Name.
4. Click on the OK button to create the new client project.

Now the new test client should have been created and added to the solution. Let's follow these steps to customize this client so that we can call ProductService twice within one method and test the distributed transaction support of this WCF service:

1. On the WPF MainWindow designer surface, add the following controls (you can double-click on the MainWindow.xaml item to open this window; make sure you are in design mode, not XAML mode):
   - A label with Content Product ID
   - Two textboxes named txtProductID1 and txtProductID2
   - A button named btnGetProduct with Content Get Product Details
   - A separator to separate the above controls from the ones below
   - Two labels with Content Product1 Details and Product2 Details
   - Two textboxes named txtProduct1Details and txtProduct2Details, with the following properties: AcceptsReturn checked, Background Beige, HorizontalScrollbarVisibility Auto, VerticalScrollbarVisibility Auto, IsReadOnly checked
   - A separator to separate the above controls from the ones below
   - A label with Content New Price
   - Two textboxes named txtNewPrice1 and txtNewPrice2
   - A button named btnUpdatePrice with Content Update Price
   - A separator to separate the above controls from the ones below
   - Two labels with Content Update1 Results and Update2 Results
   - Two textboxes named txtUpdate1Results and txtUpdate2Results, with the following properties: AcceptsReturn checked, Background Beige, HorizontalScrollbarVisibility Auto, VerticalScrollbarVisibility Auto, IsReadOnly checked
   Your MainWindow design surface should look like the following screenshot:
2. In Solution Explorer, right-click on the DistNorthwindWPF project item, select Add Service Reference…, and add a service reference of the product service to the project. The namespace of this service reference should be ProductServiceProxy, and the URL of the product service should be like this: http://localhost/DistNorthwindService/ProductService.svc
3. On the MainWindow.xaml designer surface, double-click on the Get Product Details button to create an event handler for this button.
4. In the MainWindow.xaml.cs file, add the following using statement:

using DistNorthwindWPF.ProductServiceProxy;

5. Again in the MainWindow.xaml.cs file, add the following two class members:

Product product1, product2;

6. Now add the following method to the MainWindow.xaml.cs file:

private string GetProduct(TextBox txtProductID, ref Product product)
{
    string result = "";
    try
    {
        int productID = Int32.Parse(txtProductID.Text.ToString());
        ProductServiceClient client = new ProductServiceClient();
        product = client.GetProduct(productID);
        StringBuilder sb = new StringBuilder();
        sb.Append("ProductID:" + product.ProductID.ToString() + "\n");
        sb.Append("ProductName:" + product.ProductName + "\n");
        sb.Append("UnitPrice:" + product.UnitPrice.ToString() + "\n");
        sb.Append("RowVersion:");
        foreach (var x in product.RowVersion.AsEnumerable())
        {
            sb.Append(x.ToString());
            sb.Append(" ");
        }
        result = sb.ToString();
    }
    catch (Exception ex)
    {
        result = "Exception: " + ex.Message.ToString();
    }
    return result;
}

This method will call the product service to retrieve a product from the database, format the product details into a string, and return the string. This string will be displayed on the screen. The product object will also be returned so that later on we can reuse this object to update the price of the product.

7. Inside the event handler of the Get Product Details button, add the following two lines of code to get and display the product details:

txtProduct1Details.Text = GetProduct(txtProductID1, ref product1);
txtProduct2Details.Text = GetProduct(txtProductID2, ref product2);

Now we have finished adding code to retrieve products from the database through the Product WCF service. Set DistNorthwindWPF as the startup project, press Ctrl + F5 to start the WPF test client, enter 30 and 31 as the product IDs, and then click on the Get Product Details button. You should get a window like this image:

To update the prices of these two products, follow these steps to add the code to the project:

1. On the MainWindow.xaml design surface, double-click on the Update Price button to add an event handler for this button.
2. Add the following method to the MainWindow.xaml.cs file:

private string UpdatePrice(TextBox txtNewPrice, ref Product product, ref bool updateResult)
{
    string result = "";
    try
    {
        product.UnitPrice = Decimal.Parse(txtNewPrice.Text.ToString());
        ProductServiceClient client = new ProductServiceClient();
        updateResult = client.UpdateProduct(ref product);
        StringBuilder sb = new StringBuilder();
        if (updateResult == true)
        {
            sb.Append("Price updated to ");
            sb.Append(txtNewPrice.Text.ToString());
            sb.Append("\n");
            sb.Append("Update result:");
            sb.Append(updateResult.ToString());
            sb.Append("\n");
            sb.Append("New RowVersion:");
        }
        else
        {
            sb.Append("Price not updated to ");
            sb.Append(txtNewPrice.Text.ToString());
            sb.Append("\n");
            sb.Append("Update result:");
            sb.Append(updateResult.ToString());
            sb.Append("\n");
            sb.Append("Old RowVersion:");
        }
        foreach (var x in product.RowVersion.AsEnumerable())
        {
            sb.Append(x.ToString());
            sb.Append(" ");
        }
        result = sb.ToString();
    }
    catch (Exception ex)
    {
        result = "Exception: " + ex.Message;
    }
    return result;
}

This method will call the product service to update the price of a product in the database. The update result will be formatted and returned so that later on we can display it. The updated product object, with its new RowVersion, will also be returned so that later on we can update the price of the same product again and again.
3. Inside the event handler of the Update Price button, add the following code to update the product prices:

if (product1 == null)
{
    txtUpdate1Results.Text = "Get product details first";
}
else if (product2 == null)
{
    txtUpdate2Results.Text = "Get product details first";
}
else
{
    bool update1Result = false, update2Result = false;
    txtUpdate1Results.Text = UpdatePrice(txtNewPrice1, ref product1, ref update1Result);
    txtUpdate2Results.Text = UpdatePrice(txtNewPrice2, ref product2, ref update2Result);
}

Testing the sequential calls to the WCF service

Let's run the program now to test the distributed transaction support of the WCF service. We will first update two products with two valid prices to make sure our code works with normal use cases. Then we will update one product with a valid price and another with an invalid price. We will verify that the update with the valid price has been committed to the database, regardless of the failure of the other update. Let's follow these steps for this test:

1. Press Ctrl + F5 to start the program.
2. Enter 30 and 31 as product IDs in the top two textboxes and click on the Get Product Details button to retrieve the two products. Note that the prices for these two products are 25.89 and 12.5 respectively.
3. Enter 26.89 and 13.5 as new prices in the middle two textboxes and click on the Update Price button to update these two products. The update results are true for both updates, as shown in the following screenshot:
4. Now enter 27.89 and -14.5 as new prices in the middle two textboxes and click on the Update Price button to update these two products. This time the update result for product 30 is still True, but for the second update the result is False.
5. Click on the Get Product Details button again to refresh the product prices so that we can verify the update results.

We know that the second service call should fail, so the second update should not be committed to the database. From the test result we know this is true (the second product price didn't change). However, from the test result we also know that the first update, in the first service call, has been committed to the database (the first product price has been changed). This means that the first call to the service is not rolled back even when a subsequent service call has failed. Therefore each service call is in a separate standalone transaction. In other words, the two sequential service calls are not within one distributed transaction.