
How-To Tutorials - Programming

Tapestry 5 Advanced Components

Packt
23 Oct 2009
10 min read
Following are some of the components we'll examine in Tapestry 5:

The Grid component allows us to display data in a fairly sophisticated table. We are going to use it to display our collection of celebrities.

The BeanEditForm component greatly simplifies creating forms for accepting user input. We shall use it for adding a new celebrity to our collection.

The DateField component provides an easy and attractive way to enter or edit a date.

The FCKEditor component is a rich text editor that is as easy to incorporate into a Tapestry 5 web application as a basic TextField. It is a third-party component, and the main point here is to show that using a library of custom components in a Tapestry 5 application requires no extra effort. A similar core component is likely to appear in a future version of the framework.

Grid Component

It is possible to display our collection of celebrities with the help of the Loop component. It isn't difficult, and in many cases that will be exactly the solution you need for the task at hand. But as the number of displayed items grows, different problems arise. We might not want to display the whole collection on one page, so we'll need some kind of pagination mechanism and controls for navigating from page to page. It would also be convenient to be able to sort celebrities by first name, last name, occupation, and so on. All of this can be achieved by adding more controls and more code, but a table with pagination and sortable columns is a very common part of a user interface, and recreating it each time wouldn't be efficient. Thankfully, the Grid component brings with it plenty of ready-to-use functionality, and it is very easy to deal with.
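The pagination the Grid component provides rests on simple arithmetic. Here is a hedged sketch in plain Java (the class and method names are illustrative, not Tapestry's API): the number of pages is the ceiling of total rows over rows per page, and each page starts at a fixed offset.

```java
// Sketch of the paging arithmetic behind a paginated table:
// 15 celebrities at 5 rows per page gives ceil(15 / 5) = 3 pages.
// Illustrative only; this is not Tapestry's internal code.
public class Paging {

    public static int pageCount(int totalRows, int rowsPerPage) {
        return (totalRows + rowsPerPage - 1) / rowsPerPage; // ceiling division
    }

    // Zero-based index of the first row shown on a given 1-based page.
    public static int firstRowOnPage(int page, int rowsPerPage) {
        return (page - 1) * rowsPerPage;
    }

    public static void main(String[] args) {
        System.out.println(pageCount(15, 5));     // 3
        System.out.println(firstRowOnPage(2, 5)); // 5
    }
}
```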
Open the ShowAll.tml template in an IDE of your choice and remove the Loop component and all its content, together with the surrounding table:

    <table width="100%">
        <tr t:type="loop" t:source="allCelebrities" t:value="celebrity">
            <td>
                <a href="#" t:type="PageLink" t:page="Details" t:context="celebrity.id">
                    ${celebrity.lastName}
                </a>
            </td>
            <td>${celebrity.firstName}</td>
            <td>
                <t:output t:format="dateFormat" t:value="celebrity.dateOfBirth"/>
            </td>
            <td>${celebrity.occupation}</td>
        </tr>
    </table>

In place of this code, add the following line:

    <t:grid t:source="allCelebrities"/>

Run the application and log in to view the collection. Quite an impressive result for a single short line of code, isn't it? Not only are our celebrities now displayed in a neatly formatted table, but we can also sort the collection by clicking on the column headers. Also note that occupation now has only the first character capitalized, which is much better than the fully capitalized version we had before. Here we see the results of some clever guesses on Tapestry's side. The only required parameter of the Grid component is source, the same as the required parameter of the Loop component. Through this parameter, Grid receives a number of objects of the same class. It takes the first object of the collection and finds out its properties. It creates a column for each property, transforming the property's name into the column's header (for example, the lastName property gives the Last Name column header), and makes some additional sensible adjustments, like changing the case of the occupation property values in our example. All this is quite impressive, but the table, as it is displayed now, has a number of deficiencies. All celebrities are displayed on one page, while we wanted to see how pagination works; this is because the default number of records per page for the Grid component is 25, more than we have in our collection at the moment.
The last name of each celebrity no longer provides a link to the Details page. It doesn't make sense to show the Id column. And the order of the columns is wrong: it would be more sensible to have Last Name in the first column, then First Name, and finally Date of Birth. By default, to decide the display order of columns in the table, Tapestry uses the order in which getter methods are defined in the displayed class. In the Celebrity class, getFirstName is the first of the getters, so the First Name column goes first, and so on. There are also some other issues we might want to take care of, but let's first deal with these four.

Tweaking the Grid

First of all, let's change the number of records per page. Just add the following parameter to the component's declaration:

    <t:grid t:source="allCelebrities" rowsPerPage="5"/>

Run the application, and you can now easily page through the records using the attractive pager control that appears at the bottom of the table. If you would rather have the pager at the top, add another parameter to the Grid declaration:

    <t:grid t:source="allCelebrities" rowsPerPage="5" pagerPosition="top"/>

You can even have two pagers, at the top and at the bottom, by specifying pagerPosition="both", or no pagers at all (pagerPosition="none"). In the latter case, however, you will have to provide some custom way of paging through records. The next enhancement will be a link surrounding the celebrity's last name, leading to the Details page. We'll be adding a PageLink, and we'll need to know which Celebrity to link to, so we have Grid store the current row's object using the row parameter. This is how the Grid declaration will look:

    <t:grid t:source="allCelebrities" rowsPerPage="5" row="celebrity"/>

As for the page class, we already have the celebrity property in it; it should have been left over from our experiments with the Loop component.
It will also be used in exactly the same way as with Loop: while iterating through the objects provided by its source parameter, Grid will assign the object used to display the current row to the celebrity property. The next thing to do is to tell Tapestry that, when it comes to the contents of the Last Name column, we do not want Grid to display it in the default way. Instead, we shall provide our own way of displaying the cells that contain the last name. Here is how we do this:

    <t:grid t:source="allCelebrities" rowsPerPage="5" row="celebrity">
        <t:parameter name="lastNameCell">
            <t:pagelink t:page="details" t:context="celebrity.id">
                ${celebrity.lastName}
            </t:pagelink>
        </t:parameter>
    </t:grid>

Here, the Grid component contains a special Tapestry element, <t:parameter>, similar to the one that we used in the previous chapter inside the If component. As before, it serves to provide alternative content to display: in this case, the content which will fill the cells of the Last Name column. How does Tapestry know this? By the name of the parameter, lastNameCell. The first part of this name, lastName, is the name of one of the properties of the displayed objects. The last part, Cell, tells Tapestry that this content is for the table cells displaying the specified property. Finally, inside <t:parameter>, you can see an expansion displaying the name of the current celebrity, surrounded by a PageLink component that has the ID of the current celebrity as its context. Run the application, and you should see that we have achieved what we wanted. Click on the last name of a celebrity, and you should see the Details page with the appropriate details on it. All that is left now is to remove the unwanted Id column and to change the order of the remaining columns. For this, we'll use two parameters of the Grid: remove and reorder.
Modify the component's definition in the page template to look like this:

    <t:grid t:source="allCelebrities" rowsPerPage="5" row="celebrity"
            remove="id" reorder="lastName,firstName,occupation,dateOfBirth">
        <t:parameter name="lastNameCell">
            <t:pagelink t:page="details" t:context="celebrity.id">
                ${celebrity.lastName}
            </t:pagelink>
        </t:parameter>
    </t:grid>

Please note that reordering doesn't delete columns. If you omit some columns while specifying their order, they will simply end up last in the table. Now, if you run the application, you should see that the table with the collection of celebrities is displayed exactly as we wanted.

Changing the Column Titles

Column titles are currently generated by Tapestry automatically. What if we want different titles? Say we want the title Birth Date instead of Date Of Birth. The easiest and most efficient way to do this is to use the message catalog, the same one that we used while working with the Select component in the previous chapter. Add the following line to the app.properties file:

    dateOfBirth-label=Birth Date

Run the application, and you will see that the column title has changed appropriately. In this way, by appending -label to the name of the property displayed by a column, you can create the key for a message catalog entry and thus change the title of any column. Now you should be able to adjust the Grid component to most requirements and use it to display many different kinds of objects. However, one scenario can still raise a problem. Add an output statement to the getAllCelebrities method in the ShowAll page class, like this:

    public List<Celebrity> getAllCelebrities() {
        System.out.println("Getting all celebrities...");
        return dataSource.getAllCelebrities();
    }

The purpose of this is simply to make us aware of when the method is called. Run the application, log in, and as soon as the table with celebrities is shown, you will see the following output:

    Getting all celebrities...
The Grid component has the allCelebrities property defined as its source, so it invokes the getAllCelebrities method to obtain the content to display. Note, however, that after invoking this method, Grid receives a list containing all 15 celebrities in the collection but displays only the first five. Click on the pager to view the second page, and the same output will appear again: Grid requested the whole collection again, and this time displayed only the second portion of five celebrities from it. Whenever we view another page, the whole collection is requested from the data source, but only one page of data is displayed. This is not too efficient, but it works for our purpose. Imagine, however, that our collection contains as many as 10,000 celebrities and is stored in a remote database. Requesting the whole collection for each of 2,000 pages would put a lot of strain on our resources. We need the ability to request the celebrities page by page: only the first five for the first page, only the second five for the second page, and so on. This ability is supported by Tapestry. All we need to do is provide an implementation of the GridDataSource interface. Here is a somewhat simplified example of such an implementation.
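A minimal stand-in for the idea might look like this. This is an illustration, not the book's actual implementation: the real org.apache.tapestry5.grid.GridDataSource interface additionally handles sort constraints and per-row retrieval, while the simplified class below keeps only the page-by-page fetching.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for the GridDataSource idea: the grid first asks
// for the total row count (to size the pager), then asks for only the
// rows of the current page. Illustrative names, not Tapestry's API.
public class PagedCelebritySource {

    private final List<String> database; // stands in for the remote store

    public PagedCelebritySource(List<String> database) {
        this.database = database;
    }

    // Called once, to size the pager.
    public int getAvailableRows() {
        return database.size();
    }

    // Called per page: only rows [startIndex, endIndex] are fetched.
    public List<String> prepare(int startIndex, int endIndex) {
        List<String> page = new ArrayList<>();
        for (int i = startIndex; i <= endIndex && i < database.size(); i++) {
            page.add(database.get(i));
        }
        return page;
    }

    public static void main(String[] args) {
        List<String> all = new ArrayList<>();
        for (int i = 1; i <= 15; i++) all.add("Celebrity " + i);
        PagedCelebritySource source = new PagedCelebritySource(all);
        System.out.println(source.getAvailableRows()); // 15
        System.out.println(source.prepare(5, 9));      // second page of five
    }
}
```

With such a source, viewing the second page fetches only rows 5 through 9 instead of the whole collection.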
Functional Testing with JMeter

Packt
22 Oct 2009
5 min read
JMeter is a 100% pure Java desktop application. Although JMeter is known mainly as a performance testing tool, it is also very useful and convenient for functional testing: functional-testing elements can be integrated within a Test Plan, which was originally designed to support load testing. Many other load-testing tools provide little or none of this capability, restricting themselves to performance-testing purposes. Besides integrating functional-testing elements alongside load-testing elements in a Test Plan, you can also create a Test Plan that runs functional tests exclusively. In other words, aside from creating a Load Test Plan, JMeter also allows you to create a Functional Test Plan. This flexibility is certainly resource-efficient for the testing project. In this article by Emily H. Halili, we will give you a walkthrough on how to create a Test Plan as we incorporate and configure JMeter elements to support functional testing.

Preparing for Functional Testing

JMeter does not have a built-in browser, unlike many functional-test tools. It tests on the protocol layer, not the client layer (JavaScript, applets, and so on), and it does not render the page for viewing. Although embedded resources can be downloaded by default, rendering them in the Listener | View Results Tree may not yield a 100% browser-like rendering; in fact, it may not be able to render large HTML files at all. This makes it difficult to test the GUI of an application under test. To compensate for these shortcomings, however, JMeter allows the tester to create assertions based on the tags and text of the page as the HTML file is received by the client. With some knowledge of HTML tags, you can test and verify any elements as you would expect them in the browser. It is unnecessary to select a specific workload time to perform a functional test.
In fact, the application you want to test may even reside locally, with your own machine acting as the "localhost" server for your web application. For this article, we will limit ourselves to selected functional aspects of the page that we seek to verify or assert.

Using JMeter Components

We will create a Test Plan in order to demonstrate how we can configure it to include functional-testing capabilities. The modified Test Plan will include these scenarios:

1. Create Account: a new visitor creating an account
2. Login User: a user logging in to an account

Following these scenarios, we will simulate various entries and form submissions as a request to a page is made, while checking the page's response to these user entries. We will add assertions to the samples following these scenarios to verify the 'correctness' of a requested page. In this manner, we can see whether the pages respond correctly to invalid data. For example, we would like to check that the page responds with the correct warning message when a user enters an invalid password, or whether a request returns the correct page. First of all, we will create a series of test cases following the various user actions in each scenario. The test cases may be designed as follows:

CREATE ACCOUNT

Step 1. Go to Home page.
   Data: www.packtpub.com
   Expected: Home page loads and renders with no page error.

Step 2. Click Your Account link (top right).
   Data: user action
   Expected: 1. Your Account page loads and renders with no page error. 2. Logout link is not found.

Step 3. No password: enter email address in Email text field; click the Create Account and Continue button.
   Data: email=EMAIL
   Expected: 1. Your Account page resets with warning message "Please enter password". 2. Logout link is not found.

Step 4. Short password: enter email address in Email text field; enter password in Password and Confirm Password text fields; click the Create Account and Continue button.
   Data: email=EMAIL, password=SHORT_PWD, confirm password=SHORT_PWD
   Expected: 1. Your Account page resets with warning message "Your password must be 8 characters or longer". 2. Logout link is not found.

Step 5. Unconfirmed password: enter email address in Email text field; enter password in Password text field; enter a different password in Confirm Password text field; click the Create Account and Continue button.
   Data: email=EMAIL, password=VALID_PWD, confirm password=INVALID_PWD
   Expected: 1. Your Account page resets with warning message "Password does not match". 2. Logout link is not found.

Step 6. Register valid user: enter email address in Email text field; enter password in Password and Confirm Password text fields; click the Create Account and Continue button.
   Data: email=EMAIL, password=VALID_PWD, confirm password=VALID_PWD
   Expected: 1. Logout link is found. 2. Page redirects to User Account page. 3. Message found: "You are registered as: <EMAIL>".

Step 7. Click Logout link.
   Data: user action
   Expected: 1. Logout link is NOT found.

LOGIN USER

Step 1. Click Home page.
   Data: user action
   Expected: 1. WELCOME tab is active.

Step 2. Log in, wrong password: enter email in Email text field; enter password in Password text field; click Login button.
   Data: email=EMAIL, password=INVALID_PWD
   Expected: 1. Logout link is NOT found. 2. Page refreshes. 3. Warning message "Sorry your password was incorrect" appears.

Step 3. Log in, non-existent account: enter email in Email text field; enter password in Password text field; click Login button.
   Data: email=INVALID_EMAIL, password=INVALID_PWD
   Expected: 1. Logout link is NOT found. 2. Page refreshes. 3. Warning message "Sorry, this does not match any existing accounts. Please check your details and try again or open a new account below" appears.

Step 4. Log in, valid account: enter email in Email text field; enter password in Password text field; click Login button.
   Data: email=EMAIL, password=VALID_PWD
   Expected: 1. Logout link is found. 2. Page reloads. 3. Login successful message "You are logged in as:" appears.

Step 5. Click Logout link.
   Data: user action
   Expected: 1. Logout link is NOT found.
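Each "Expected" entry above is the kind of check a JMeter Response Assertion performs: does the returned HTML contain the expected text or not? Stripped of JMeter specifics, the core logic is just a substring match, as in this hedged Java illustration (the class and method names below are not JMeter's API).

```java
// Illustrative sketch of what a response assertion checks: the returned
// HTML either contains the expected text or it does not. Not JMeter code.
public class ResponseCheck {

    public static boolean containsText(String responseHtml, String expected) {
        return responseHtml.contains(expected);
    }

    public static void main(String[] args) {
        String html = "<html><body>Please enter password</body></html>";
        System.out.println(containsText(html, "Please enter password")); // true
        System.out.println(containsText(html, "Logout"));                // false
    }
}
```

In JMeter itself you would configure this via the Response Assertion element, which also supports pattern (regex) matching rather than plain substrings.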
Pop-up Image Widget using JavaScript, PHP and CSS

Packt
22 Oct 2009
7 min read
If you’re a regular blog reader, then it’s likely that you’ve encountered the Recent Visitors widget from MyBlogLog (http://mybloglog.com). This widget displays profile details, such as the name, picture, and authored sites, of MyBlogLog members who have recently visited your blog. In the MyBlogLog widget, when you move the mouse cursor over a member’s picture, you’ll see a popup displaying a brief description of that member.

A glance at the MyBlogLog widget

In the right part of the widget there is a list of the recent visitors to the blog who are members of MyBlogLog, and in the left part a popup shows the details and an image of a visitor. This popup is displayed when the mouse is moved over that visitor’s image in the widget. Now, let’s look at the code which we got from MyBlogLog to display the widget:

    <script src="http://pub.mybloglog.com/comm3.php?mblID=2007121300465126&r=widget&is=small&o=l&ro=5&cs=black&ww=220&wc=multiple"></script>

In the above script element, the language and type attributes are not specified. Although they are optional attributes in HTML, you must specify a value for the type attribute to make the syntax valid in an XHTML web page. If you look closely at the src attribute of the script element, you can see that the source of the script is a .php file. You can use JavaScript code from a file with any extension, such as .php, .asp, and so on, but whenever you use such a file in the src attribute, please note that the final output of the file (after being parsed by the server) should be valid JavaScript code.

Creating the pop-up image widget

This pop-up image widget is somewhat similar to the MyBlogLog widget, but it is a simplified version. It is a very simple widget which uses JavaScript, PHP and CSS. Here you’ll see four images in the widget, and a pop-up image (corresponding to the chosen image) will be displayed when you move the mouse over it.
After getting the core concept, you can extend the functionality to make this look fancier.

Writing Code for the Pop-up Image Widget

As already discussed, this widget is going to contain PHP code, JavaScript, and a little bit of CSS as well. For this, you need to write the code in a PHP file with the .php extension. First of all, declare the variables for storing the current mouse position and a string variable for storing the markup of the widget:

    var widget_posx=0;
    var widget_posy=0;
    var widget_html_css='';

The widget_posx variable holds the x co-ordinate of the mouse position on the screen, whereas the widget_posy variable stores the y co-ordinate. The widget_html_css variable stores the HTML and CSS strings which will be used later in the code. The (0,0) co-ordinate of an output device such as a monitor is located at the top left, so the mouse position (10,10) will be somewhere near the top left corner of the monitor. After declaring the variables, let’s define an event handler to track the mouse position on the web page:

    document.onmousemove=captureMouse;

As you can see above, we’ve assigned a function, captureMouse(), as the event handler: whenever the mouse is moved anywhere on the document (web page), captureMouse() is called on the onmousemove event. The Document object represents the entire HTML document and can be used to access and capture the events of all elements on a page. Each time a user moves the mouse one pixel, a mousemove event occurs. Processing all mousemove events engages system resources, hence, use this event carefully! Now, let’s look at the code of the captureMouse() function.
    function captureMouse(event) {
        if (!event) { var event = window.event; }
        if (event.pageX || event.pageY) {
            widget_posx = event.pageX;
            widget_posy = event.pageY;
        } else if (event.clientX || event.clientY) {
            widget_posx = event.clientX;
            widget_posy = event.clientY;
        }
    }

As you can see in the above function, the event variable is passed as a function parameter. This event variable is JavaScript’s Event object. The Event object keeps track of various events that occur on the page, such as the user moving the mouse or clicking on a link, and allows you to react to them by writing code which is relevant to the event.

    if (!event) { var event = window.event; }

The first line of the event handler ensures that if the browser doesn’t pass the event information to the function, then we obtain it from the window object’s explicit event registration. We can track different activity in the document through the event object with the help of its various defined properties. For example, if eventObj is the event object and we have to track whether the Ctrl key is pressed (or not), we can use the following code in JavaScript:

    eventObj.ctrlKey

Just as we can get the x and y position of the mouse in the page using the pageX and pageY properties, we can also get the mouse cursor position using the clientX and clientY properties. Most browsers provide both pageX/pageY and clientX/clientY. Internet Explorer is the only current browser that provides clientX/clientY but not pageX/pageY. To provide cross-browser support, we’ve used both pageX/pageY and clientX/clientY to get the mouse co-ordinates in the document, and assigned them to the widget_posx and widget_posy variables accordingly. Now, let’s look at the widget_html_css variable, where we’re going to store the string which is going to be displayed in the widget.
    widget_html_css+='<style type="text/css">';
    widget_html_css+='.widgetImageCss';
    widget_html_css+='{ margin:2px;border:1px solid #CCCCCC;cursor:pointer}';
    widget_html_css+='</style>';

As you can see in the string assembled above, we’ve added a style for HTML elements with the class name widgetImageCss within a style element. When applied, this class adds a 2-pixel margin and a 1-pixel #CCCCCC border to the element. Furthermore, the mouse cursor will be converted into a pointer (a hand), which is defined with the cursor property in CSS.

    widget_html_css+='<div id="widget_popup" style="position:absolute;z-index:10;display:none">&nbsp;</div>';

Using the above code, we’re adding a division element with the id widget_popup to the DOM. We’ve also styled this element inline. Its position property is set to absolute so that the element can move freely without disturbing the layout of the document. The z-index property controls the stacking order of the element; here it is set to 10 so that this element will be displayed above the other elements of the document. Finally, the display property is set to none to hide the element at first; later, this element will be shown with the pop-up image using JavaScript. Elements can have negative stack orders, i.e. you can set the z-index to -1 for an element, which will display it underneath the other elements on the page. Note that z-index only works on elements that have been positioned using CSS (such as position:absolute).

Now the PHP part of the code comes in. We’ve used PHP to add the images to the widget_html_css string variable of JavaScript. We’ve used PHP for this part, rather than JavaScript, to make the application flexible: JavaScript is a client-side scripting language and can’t access a database or do any kind of server activity.
Using PHP, you can extract and display the images from a database, which might be an integral part of your desired widget.


Aggregate Services in ServiceMix JBI ESB

Packt
22 Oct 2009
10 min read
EAI: The Broader Perspective

No one should (or will) ever dare to build a 'single system' that takes care of the entire business requirements of an enterprise. Instead, we build a few (or many) systems, each of which takes care of a set of functionalities in a single Line of Business (LOB). There is absolutely nothing wrong here, but the need of the hour is that these systems have to exchange information and interoperate in many new ways which could not have been foreseen earlier. Business grows, enterprise boundaries expand, and mergers and acquisitions are the norm of the day. If IT cannot scale up in these volatile environments, failure is not far away. Let me take a single, but not simple, problem that today's businesses and IT face: duplicate data. By duplicate data we mean data related to a single entity stored in multiple systems and storage mechanisms, in multiple formats and with differing content. I will take the 'Customer' entity as an example, so that I can borrow the 'Single Customer View' (SCV) jargon to explain the problem. We gather customer information when a customer places a web order, when he raises a complaint against the product or service purchased, or when we run a marketing campaign for a new product to be introduced... The list continues, and in each of these scenarios we use different systems to collect and store the same customer information. 'Same customer' - is it really the same? Who can answer this question? Is there a data steward who can provide you with the SCV from among the many information silos existing in your organization? To rephrase the question: does your organization at least have a 'Single View of Truth', if it doesn't have a 'Single Source of Truth'? Information locked away inside disparate, monolithic application silos has proven a stubborn obstacle to answering the queries the business requires, impeding opportunities for selling, not to mention cross-selling and up-selling.
Yes, it's time to cleanse and distill each customer's data into a single best-record view that can be used to improve source-system data quality. For that, we first need to integrate the many source systems available. Today, companies even make acquisitions just to get access to invaluable customer information! This is just one illustration of the importance of integration in controlling information entropy in an otherwise complicated IT landscape.

Figure 1. The 'Single Customer View' Dilemma

So integration is not an end, but a means to end a full list of problems faced by enterprises today. We have been doing integration for many years, and there exist many platforms, technologies, and frameworks doing the same thing. Built around them, we also have multiple integration architectures, among which Point to Point, Hub and Spoke, and the Message Bus are common. Figure 2 represents these integration topologies.

Figure 2. EAI Topologies

Let us now look at the salient features of these topologies to see whether we are self-sufficient or need something more.

Point to Point

In Point to Point, we define integration solutions for a pair of applications, so we have two endpoints to be integrated. We can build protocol and/or format adaptors/transformers at one or either end. This is the easiest way to integrate, as long as the volume of integration is low. We normally use technology-specific APIs like FTP, IIOP, Remoting, or batch interfaces to realize the integration. The advantage is that between these two points we have tight coupling, since both ends have knowledge about their peers. The downside is that if there are 6 nodes (systems) to be interconnected, we need at least 30 separate channels for both forward and reverse transport. So think of a mid-sized enterprise with some 1,000 systems to integrate!

Hub and Spoke

The Hub and Spoke architecture is also called the Message Broker. It provides a centralized hub (broker) to which all applications are connected.
Each application connects to the central hub through a lightweight connector. The lightweight connectors facilitate application integration with minimal or no changes to the existing applications. Message transformation and routing take place within the hub. The major drawback of the Hub and Spoke architecture is that if the hub fails, the entire integration topology fails.

Enterprise Message Bus

An Enterprise Message Bus provides a common communication infrastructure which acts as a platform-neutral and language-neutral adaptor between applications. This communication infrastructure may include a message router and/or publish-subscribe channels. Applications interact with each other through the message bus with the help of request-response queues. Sometimes the applications have to use adapters that handle scenarios like invoking CICS transactions. Such adapters may provide connectivity between the applications and the message bus using proprietary bus APIs and application APIs.

Service Oriented Integration (SOI)

Service Oriented Architecture (SOA) provides us with a set of principles, patterns, and practices for providing and consuming services which are orchestrated using open standards, so as to remove single-vendor lock-in and provide an agile infrastructure where services range from business definition to technical implementation. In SOA, we no longer deal with a single format and a single protocol; instead, we accept the fact that heterogeneity exists between applications, and our architecture still needs to ensure interoperability and thus information exchange. To do integration in the SOA manner, we require a pluggable service infrastructure where providers, consumers, and middleware services can collaborate in the famous 'Publish - Find - Bind' triangle. So, similar to the integration topologies described above, we need a backbone upon which we can build SOA and which can provide a collection of middleware services offering integration capabilities.
This is what we mean by Service Oriented Integration (SOI). Gartner originally identified the Enterprise Service Bus (ESB) architecture as a core component in the SOA landscape. An ESB provides a technical framework to align your SOA-based integration needs. In the rest of the article we will concentrate on the ESB.

Enterprise Service Bus (ESB)

Roy Schulte of Gartner defines an ESB as: "A Web-services-capable middleware infrastructure that supports intelligent program-to-program communication and mediates the relationships among loosely-coupled and uncoupled business components." In the ESB architecture (refer to Figure 2), applications communicate through an SOA middleware backbone. The most distinguishing feature of the ESB architecture is the distributed nature of the integration topology, which allows ESB capabilities to spread out across the bus in a distributed fashion, avoiding any single point of failure. Scalability is achieved by distributing the capabilities into separately deployable service containers. Smart, intelligent connectors connect the applications to the bus, and technical services like transformation, routing, and security are provided internally by these connectors. The bus federates services which are hosted locally or remotely, thus collaborating distributed capabilities. Many ESB solutions are based on Web Services Description Language (WSDL) technologies, and they use Extensible Markup Language (XML) formats for message translation and transformation. The best way to think about an ESB is to imagine the many features which it can provide to the message exchange at a mediation layer (the ESB layer), a few of which are listed below:

- Addressing and routing
- Synchronous and asynchronous style invocations
- Multiple transport and protocol bindings
- Content transformation and translation
- Business Process Orchestration (BPM)
- Event processing
- Adapters to multiple platforms
- and more
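One aside before moving on: the Point to Point section noted that 6 interconnected systems need at least 30 channels. In general, n systems need n × (n - 1) directed channels, one per ordered pair, which this small sketch (illustrative names, not any framework's API) makes concrete.

```java
// Directed channel count for a Point to Point integration topology:
// each ordered pair of systems needs its own channel, so n * (n - 1).
// For 6 nodes that is 30; for 1000 nodes it is 999,000, which is why
// a bus topology scales so much better. Illustrative sketch only.
public class ChannelCount {

    public static long pointToPointChannels(long nodes) {
        return nodes * (nodes - 1);
    }

    public static void main(String[] args) {
        System.out.println(pointToPointChannels(6));    // 30
        System.out.println(pointToPointChannels(1000)); // 999000
    }
}
```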
Service Aggregation in ESB

ESB provides you with the best ways of integrating services so that services are not only interoperable but also reusable, by aggregating them in multiple ways and scenarios. This means services can be mixed and matched to adapt to multiple protocols and consumer requirements. Let me explain this concept, as we will explore it further with the help of sample code too. In code and component reuse, we try to reduce 'copy and paste' reuse and encourage inheritance, composition, and instance pooling. A similar analogy exists in SOI, where services are hosted and pooled for multiple clients through multiple transport channels, and an ESB can do this in the best way the integration world has ever seen. We call this the notion of shared services. For example, if a financial organization provides a 'credit history check service', an ESB can facilitate reuse of this service by multiple business processes (like a Personal Loan approval process or a Home Mortgage approval process). So, once we create our 'core services', we can then arbitrarily compose these services in a declarative fashion so as to define and publish more and more composite services. Business Process Management (BPM) tools can be integrated over the ESB to leverage service aggregation and service collaboration. This facilitates reuse of basic or core (fine-grained) services at the business process level. So, the granularity of services is important, as it also decides the level of reusability. Coarse-grained or composite services consume fine-grained services. Applications that consume coarse-grained services are not exposed to the fine-grained services they use. Composite services can be assembled from coarse-grained as well as fine-grained services. To make the concept clear, let us take the example of provisioning a new VOIP (Voice over IP) service for a new customer.
This is a composite service which in turn calls multiple coarse-grained services like 'validateOrder', 'createOrVerifyCustomer', and 'checkProductAvailability'. The createOrVerifyCustomer coarse-grained service in turn calls multiple fine-grained services like 'validateCustomer', 'createCustomer', 'createBillingAddress', and 'createMailingAddress'.

Figure 3. Service Composition

Java Business Integration (JBI)

Java Business Integration (JBI) provides a collaboration framework with standard interfaces for integration components and protocols to plug into, thus allowing the assembly of Service Oriented Integration (SOI) frameworks. JSR 208 is an extension of Java 2 Enterprise Edition (J2EE), but it is specific to the Java Business Integration Service Provider Interfaces (SPI). SOA and SOI are the targets of JBI, and hence it is built around Web Services Description Language (WSDL). The nerve of the JBI architecture is the Normalized Message Router (NMR). This is a bus through which messages flow in either direction from a source to a destination. You can listen to Ron Ten-Hove, the co-spec lead for JSR 208, here, and he writes more about JBI components in the PDF download titled JBI Components: Part 1. JBI provides the best available, open foundation for structuring applications by composition of services, rather than the modularized, structured code that we have been writing in traditional programming paradigms. A JBI-compliant ESB implementation must support four different service invocations, leading to four corresponding Message Exchange Patterns (MEPs):

One-Way (In-Only MEP): The service consumer issues a request to the service provider. No error (fault) path is provided.
Reliable One-Way (Robust In-Only MEP): The service consumer issues a request to the service provider. The provider may respond with a fault if it fails to process the request.
Request-Response (In-Out MEP): The service consumer issues a request to the service provider, with the expectation of a response.
The provider may respond with a fault if it fails to process the request.
Request Optional-Response (In Optional-Out MEP): The service consumer issues a request to the service provider, which may result in a response. Both the consumer and the provider have the option of generating a fault in response to a message received during the interaction.
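The service aggregation described earlier (a composite VOIP-provisioning service fanning out to coarse-grained and fine-grained services, as in Figure 3) can be sketched in plain Java. The interface and class names below are invented for this illustration; they are not part of any JBI or ESB API.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative only: a step in a provisioning flow. Each step records its
// name so we can observe the order of delegation.
interface ProvisioningStep {
    void perform(List<String> auditTrail);
}

// A composite service: the consumer invokes only this service, which
// internally fans out to the coarse- or fine-grained services it wraps.
class CompositeService implements ProvisioningStep {
    private final String name;
    private final List<ProvisioningStep> steps;

    CompositeService(String name, List<ProvisioningStep> steps) {
        this.name = name;
        this.steps = steps;
    }

    public void perform(List<String> auditTrail) {
        auditTrail.add(name);                        // the consumer sees this service...
        for (ProvisioningStep s : steps) {
            s.perform(auditTrail);                   // ...which delegates internally
        }
    }
}

// A fine-grained service that does one thing.
class FineGrainedService implements ProvisioningStep {
    private final String name;

    FineGrainedService(String name) { this.name = name; }

    public void perform(List<String> auditTrail) {
        auditTrail.add(name);
    }
}
```

Composing 'provisionVoipService' from 'validateOrder' and a 'createOrVerifyCustomer' composite (which itself wraps 'validateCustomer' and 'createCustomer') reproduces the nesting in Figure 3: the application calls one service and is never exposed to the fine-grained services underneath.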
Oracle Web RowSet - Part1

Packt
22 Oct 2009
6 min read
The ResultSet interface requires a persistent connection with a database to invoke the insert, update, and delete row operations on the database table data. The RowSet interface extends the ResultSet interface and is a container for tabular data that may operate without being connected to the data source. Thus, the RowSet interface reduces the overhead of a persistent connection with the database. In J2SE 5.0, five new implementations of RowSet—JdbcRowSet, CachedRowSet, WebRowSet, FilteredRowSet, and JoinRowSet—were introduced. The WebRowSet interface extends the RowSet interface and is the XML document representation of a RowSet object. A WebRowSet object represents a set of fetched database table rows, which may be modified without being connected to the database.

Support for Oracle Web RowSet is a new feature in the Oracle Database 10g driver. Oracle Web RowSet precludes the requirement for a persistent connection with the database. A connection is required only for retrieving data from the database with a SELECT query and for updating data in the database after all the required row operations on the retrieved data have been performed. Oracle Web RowSet is used for queries and modifications on the data retrieved from the database. Oracle Web RowSet, as an XML document representation of a RowSet, facilitates the transfer of data. In the Oracle Database 10g and 11g JDBC drivers, Oracle Web RowSet is implemented in the oracle.jdbc.rowset package. The OracleWebRowSet class represents an Oracle Web RowSet. The data in the Web RowSet may be modified without connecting to the database. The database table may be updated with the OracleWebRowSet class after the modifications to the Web RowSet have been made. A database JDBC connection is required only for retrieving data from the database and for updating the database. An XML document representation of the data in a Web RowSet may be obtained for data exchange.
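To see the disconnected nature of a Web RowSet in isolation, the sketch below builds one entirely in memory and serializes it to XML with no database connection involved. Note the assumptions: it uses the standard javax.sql.rowset API (Java SE 7+ RowSetProvider), not Oracle's proprietary OracleWebRowSet covered in this article, and the column names are simply borrowed from the example Catalog table.

```java
import java.io.StringWriter;
import java.sql.Types;
import javax.sql.rowset.RowSetMetaDataImpl;
import javax.sql.rowset.RowSetProvider;
import javax.sql.rowset.WebRowSet;

// Builds a small disconnected WebRowSet in memory and renders it as XML.
class WebRowSetXmlDemo {

    static String buildXml() throws Exception {
        WebRowSet wrs = RowSetProvider.newFactory().createWebRowSet();

        // Describe the shape of the data by hand, since no ResultSet is used.
        RowSetMetaDataImpl md = new RowSetMetaDataImpl();
        md.setColumnCount(2);
        md.setColumnName(1, "JOURNAL");
        md.setColumnType(1, Types.VARCHAR);
        md.setColumnName(2, "TITLE");
        md.setColumnType(2, Types.VARCHAR);
        wrs.setMetaData(md);

        // Insert a row while disconnected from any database.
        wrs.moveToInsertRow();
        wrs.updateString(1, "Oracle Magazine");
        wrs.updateString(2, "Tuning Undo Tablespace");
        wrs.insertRow();
        wrs.moveToCurrentRow();

        // Serialize properties, metadata, and data to the WebRowSet XML form.
        StringWriter out = new StringWriter();
        wrs.writeXml(out);
        return out.toString();
    }
}
```

The resulting XML carries the row data together with its metadata, which is what makes a Web RowSet suitable for exchanging tabular data between tiers without holding a connection open.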
In this article, the Web RowSet feature in the Oracle 10g database JDBC driver is implemented in JDeveloper 10g. An example Web RowSet will be created from a database. The Web RowSet will be modified and stored in the database table. In this article, we will learn the following:

Creating an Oracle Web RowSet object
Adding a row to an Oracle Web RowSet
Modifying the database table with a Web RowSet

In the second half of the article, we will cover the following:

Reading a row from an Oracle Web RowSet
Updating a row in an Oracle Web RowSet
Deleting a row from an Oracle Web RowSet
Updating the database table with a modified Oracle Web RowSet

Setting the Environment

We will use an Oracle database to generate an updatable OracleWebRowSet object. Therefore, install Oracle database 10g including the sample schemas. Connect to the database with the OE schema:

SQL> CONNECT OE/<password>

Create an example database table, Catalog, with the following SQL script:

CREATE TABLE OE.Catalog(Journal VARCHAR(25), Publisher VARCHAR(25),
Edition VARCHAR(25), Title VARCHAR(45), Author VARCHAR(25));
INSERT INTO OE.Catalog VALUES('Oracle Magazine', 'Oracle Publishing',
'July-August 2005', 'Tuning Undo Tablespace', 'Kimberly Floss');
INSERT INTO OE.Catalog VALUES('Oracle Magazine', 'Oracle Publishing',
'March-April 2005', 'Starting with Oracle ADF', 'Steve Muench');

Configure JDeveloper 10g for the Web RowSet implementation. Create a project in JDeveloper: select File | New | General | Application. In the Create Application window, specify an Application Name and click on Next. In the Create Project window, specify a Project Name and click on Next. A project is added in the Applications Navigator. Next, we will set the project libraries. Select Tools | Project Properties and, in the Project Properties window, select Libraries | Add Library to add a library. Add the Oracle JDBC library to the project libraries.
If a version of the Oracle JDBC drivers prior to the Oracle database 10g (R2) JDBC drivers is used, create a library from the Oracle Web RowSet implementation classes JAR file: C:\JDeveloper\10.1.3\jdbc\lib\ocrs12.jar. The ocrs12.jar is required only for JDBC drivers prior to the Oracle database 10g (R2) JDBC drivers. In the Oracle database 10g (R2) JDBC drivers, the Oracle RowSet implementation classes are packaged in ojdbc14.jar. In the Oracle database 11g JDBC drivers, the Oracle RowSet implementation classes are packaged in ojdbc5.jar and ojdbc6.jar.

In the Add Library window, select the User node and click on New. In the Create Library window, specify a Library Name, select the Class Path node, and click on Add Entry. Add an entry for ocrs12.jar. As Web RowSet was introduced in J2SE 5.0, if J2SE 1.4 is being used we also need to add an entry for the RowSet implementations JAR file, rowset.jar. Download the JDBC RowSet Implementations 1.0.1 zip file, jdbc_rowset_tiger-1_0_1-mrel-ri.zip, from http://java.sun.com/products/jdbc/download.html#rowset1_0_1 and extract the JDBC RowSet zip file to a directory. Click on OK in the Create Library window. Click on OK in the Add Library window. A library for the Web RowSet application is added.

Now configure an OC4J data source. Select Tools | Embedded OC4J Server Preferences. A data source may be configured globally or for the current workspace. If a global data source is created using Global | Data Sources, the data source is configured in the C:\JDeveloper\10.1.3\jdev\system\oracle.j2ee.10.1.3.36.73\embedded-oc4j\config\data-sources.xml file. If a data source is configured for the current workspace using Current Workspace | Data Sources, the data source is configured in the workspace's data-sources.xml file. For example, the data source file for the WebRowSetApp application is WebRowSetApp-data-sources.xml. In the Embedded OC4J Server Preferences window, configure either a global data source or a data source in the current workspace.
A global data source definition is available to all applications deployed in the OC4J server instance. A managed-data-source element is added to the data-sources.xml file:

<managed-data-source name='OracleDataSource'
  connection-pool-name='Oracle Connection Pool'
  jndi-name='jdbc/OracleDataSource'/>
<connection-pool name='Oracle Connection Pool'>
  <connection-factory factory-class='oracle.jdbc.pool.OracleDataSource'
    user='OE' password='pw'
    url="jdbc:oracle:thin:@localhost:1521:ORCL">
  </connection-factory>
</connection-pool>

Add a JSP, GenerateWebRowSet.jsp, to the WebRowSet project:

Select File | New | Web Tier | JSP | JSP. Click on OK.
Select J2EE 1.3 or J2EE 1.4 in the Web Application window and click on Next.
In the JSP File window, specify a File Name and click on Next.
Select the default settings in the Error Page Options page and click on Next.
Select the default settings in the Tag Libraries window and click on Next.
Select the default options in the HTML Options window and click on Next.
Click on Finish in the Finish window.

Next, configure the web.xml deployment descriptor to include a reference to the data source resource configured in the data-sources.xml file, as shown in the following listing:

<resource-ref>
  <res-ref-name>jdbc/OracleDataSource</res-ref-name>
  <res-type>javax.sql.DataSource</res-type>
  <res-auth>Container</res-auth>
</resource-ref>
Setting up and Configuring a Liferay Portal

Packt
22 Oct 2009
5 min read
Setting up Liferay Portal

As an administrator at the enterprise, you need to undertake a lot of administration tasks, such as installing the Liferay portal, installing and setting up databases, and so on. You can install Liferay Portal in different ways, based on your specific needs. Normally, there are three main installation options:

Using an open source bundle—This is the easiest and fastest method to install the Liferay portal. By using a Java SE runtime environment with an embedded database, you simply unzip and run the bundle.
Detailed installation procedure—You can install the portal in an existing application server. This option is available for all the supported application servers.
Using the extension environment—You can use a full development environment to extend the functionality.

We will take up the third installation option, "Using the extension environment", in a coming section.

Using Liferay Portal Bundled with Tomcat 5.5 in Windows

First let's consider a scenario in which you, as an administrator, need to install the Liferay portal on Windows with a MySQL database, and your local Java version is Java SE 5.0. Let's install Liferay portal bundled with Tomcat 5.5 in Windows as follows:

Download Liferay Portal bundled with Tomcat for JDK 5.0 from the Liferay official web site.
Unzip the bundled file.
Set up the MySQL database. Create a database and account in MySQL:

create database liferay;
grant all on liferay.* to 'liferay'@'localhost' identified by 'liferay' with grant option;
grant all on liferay.* to 'liferay'@'localhost.localdomain' identified by 'liferay' with grant option;

Copy the MySQL JDBC driver mysql.jar to $TOMCAT_DIR/lib/ext.
Comment out the Hypersonic data source (HSQL) configuration and uncomment the MySQL configuration in $TOMCAT_DIR/conf/Catalina/localhost/ROOT.xml:

<!-- Hypersonic -->
<!--
<Resource name="jdbc/LiferayPool" auth="Container"
  type="javax.sql.DataSource"
  driverClassName="org.hsqldb.jdbcDriver"
  url="jdbc:hsqldb:lportal"
  username="sa"
  password=""
  maxActive="20" />
-->
<!-- MySQL -->
<Resource name="jdbc/LiferayPool" auth="Container"
  type="javax.sql.DataSource"
  driverClassName="com.mysql.jdbc.Driver"
  url="jdbc:mysql://localhost/liferay?useUnicode=true&amp;characterEncoding=UTF-8"
  username="liferay"
  password="liferay"
  maxActive="20" />

Run $TOMCAT_DIR/bin/startup.bat.
Open your browser and go to http://localhost:8080 (here we assume that it is a local installation; otherwise use the real host name or IP).
Log in as an administrator—User: [email protected] and Password: test.

Note that the bundle comes with an embedded HSQL database loaded with sample data from the public website of Liferay. Do not use Hypersonic in production.

Using Liferay Portal Bundled with Tomcat 6.x in Linux

Let's consider another scenario in which you, as an administrator, need to install the Liferay portal on Linux with a MySQL database, and your local Java version is Java 6.0. Let's install Liferay portal bundled with Tomcat 6.0 in Linux as follows:

Download Liferay Portal bundled with Tomcat 6.0 from the Liferay official web site.
Unzip the bundled file.
Create a database and account in MySQL (as stated before).
Run $TOMCAT_DIR/bin/startup.sh.
Open your browser and go to http://localhost:8080 (assuming a local installation; otherwise use the real host name or IP).
Log in as an administrator—User: [email protected] and Password: test.

Note that Liferay Portal creates the tables it needs, along with example data, the first time it starts. Furthermore, it is necessary to make the script executable by running chmod +x filename.sh. It is often necessary to run the executable from the directory where it resides.

Using More Options for Liferay Portal Installation

You can use one of the following options for Servlet containers and full Java EE application servers to install Liferay Portal:

Geronimo + Tomcat
Glassfish for AIX
Glassfish for Linux
Glassfish for OSX
Glassfish for Solaris
Glassfish for Solaris (x86)
Glassfish for Windows
JBoss + Jetty 4.0
JBoss + Tomcat 4.0
JBoss + Tomcat 4.2
Jetty
JOnAS + Jetty
JOnAS + Tomcat
Pramati
Resin
Tomcat 5.5 for JDK 1.4
Tomcat 5.5 for JDK 5.0
Tomcat 6.0

You can choose a preferred bundle according to your requirements and download it from the official download page directly. Simply go to the website http://www.liferay.com and click on the Downloads page.

Flexible Deployment Matrix

As an administrator, you can install Liferay Portal on all major application servers, databases, and operating systems. There are over 700 ways to deploy Liferay Portal. Thus, you can reuse your existing resources, stick to your budget, and get an immediate return on your investment that everyone can be happy with. In general, you can install the Liferay portal on Linux, UNIX, and Windows with any one of the following application servers (or Servlet containers) and by selecting any one of the following database systems.
The application servers (or Servlet containers) that Liferay Portal can run on include:

Borland ES 6.5
Apache Geronimo 2.x
Sun GlassFish 2 UR1
JBoss 4.0.x, 4.2.x
JOnAS 4.8.x
JRun 4 Updater 3
OracleAS 10.1.3.x
Orion 2.0.7
Pramati 5.0
RexIP 2.5
SUN JSAS 9.1
WebLogic 8.1 SP4, 9.2, 10
WebSphere 5.1, 6.0.x, 6.1.x
Jetty 5.1.10
Resin 3.0.19
Tomcat 5.0.x/5.5.x/6.0.x

Databases that Liferay Portal can run on include:

Apache Derby
IBM DB2
Firebird
Hypersonic
Informix
InterBase
JDataStore
MySQL
Oracle
PostgreSQL
SAP
SQL Server
Sybase

Operating systems that Liferay Portal can run on include:

LINUX (Debian, RedHat, SUSE, Ubuntu, and so on)
UNIX (AIX, FreeBSD, HP-UX, OS X, Solaris, and so on)
WINDOWS
MAC OS X
JBoss jBPM Concepts and jBPM Process Definition Language (jPDL)

Packt
22 Oct 2009
6 min read
JBoss jBPM Concepts

JBoss jBPM is built around the concept of waiting. That may sound strange, given that software is usually about getting things done, but in this case there is a very good reason for waiting. Real-life business processes cut across an organization, involve numerous humans and multiple systems, and happen over a period of time. In regular software, the code that makes up the system is normally built to "do all these tasks as soon as possible". This wouldn't work for a business process, as the people who need to take part in the process won't always want or be able to "do their task now". The software needs some way of waiting until the process actor is ready to do their activity. Then, once they have done their activity, the software needs to know what the next activity in the chain is, and then wait for the next process actor to get round to doing their bit. The orchestration of this sequence of "wait, work, wait, work" is handled by the JBoss jBPM engine. The jBPM engine looks up our process definition and works out which way it should direct us through the process. We know the "process definition" better as our graphical process map.

jBPM Process Definition Language—jPDL

We will introduce the key terms and concepts here to get the ball rolling. We won't linger too long over the definitions, as the best way to fix the terminology in the brain is to see it used in context. At this point, we will introduce some core terminology for a better understanding. The visual process map in the Designer is an example of what the JBoss jBPM project calls "Graph Oriented Programming". Instead of programming our software in code, we are programming our software using a visual process map, referred to as a "directed graph". This directed graph is also defined in the XML representation of the process we saw in the Source view. The graph plus the XML is a notation set, which is properly called jPDL, the "jBPM Process Definition Language".
A process definition specified in jPDL is composed of "nodes", "transitions", and "actions", which together describe how an "instance" of the process should traverse the directed graph. During execution of the process, as the instance moves through the directed graph, it carries a "token", which is a pointer to the node of the graph at which the instance is currently waiting. A "signal" tells the token which "transition" it should take from the node: signals specify which path to take through the process. Let's break this down a little bit with some more detail.

Nodes

A node in jPDL is modeled visually as a box, and hence looks very similar to the activity box we are used to from our workflow and activity flow diagrams. The concept of "nodes" does subtly differ from that of activities, however. In designing jPDL, the jBPM team have logically separated the idea of waiting for the result of an action from that of doing an action. They believe that the term "activity" blurs the line between these two ideas, which causes problems when trying to implement the logic behind a business process management system. For example, both "Seek approval" and "Record approval" would be modeled as activities on an activity flow diagram, but the former would be described as a "state" and the latter as an "action" in jPDL: the state element represents the concept of waiting for the action to happen, moving the graph to the next state. "Node" is therefore synonymous with "state" in jPDL. "Actions" are bits of code that can be added by a developer to tell the business process management system to perform an action that needs to be done by the system: for example, recording the approval of a holiday request in a database. Actions aren't mapped visually, but are recorded in the XML view of the process definition. We'll cover actions a bit later. There are different types of node, and they are used to accomplish different things.
Let's quickly go through them so we know how they are used.

Tasks

A task node represents a task that is to be performed by humans. If we model a task node on our graph, it will result in a task being added to the task list of the person assigned to that task when the process is executed. The process instance will wait for the person to complete that task and hand back the outcome of the task to the node.

State

A state node simply tells the process instance to wait, and in contrast to a task node, it doesn't create a task in anybody's task list. A state node would normally be used to model the behavior of waiting for an external system to provide a response. This would typically be done in combination with an Action, which we'll talk about soon. The process instance will resume execution when a signal comes back from the external system.

Forks and Joins

We can model concurrent paths of execution in jPDL using forks and joins. For example, the changes we made to our model to design our To Be process can be modeled using forks and joins to represent the parallel running of activities. We use a Fork to split the path of execution up, and then join it back together using a Join: the process instance will wait at the Join until the parallel tasks on both sides are completed. The instance can't move on until both chains of activities are finished. jBPM creates multiple child tokens, related to the parent token, for each path of execution.

Decision

In modeling our process in jBPM, there are two distinct types of decision with which we need to concern ourselves. Firstly, there is the case where the process definition itself needs to make a decision, based on data at its disposal, and secondly, where a decision made by a human or an external system is an input to the process definition. Where the process definition itself will make the decision, we can use a decision node in the model.
Where the outcome of the decision is simply input into the process definition at run time, we should use a state node with multiple exiting transitions representing the possible outcomes of the decision.
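Putting these node types together, a small jPDL process definition might look like the sketch below. It is illustrative only and has not been run against a jBPM engine; the element names follow jPDL conventions, and the node and transition names are invented from the holiday-request example used earlier.

```xml
<process-definition name="holiday request">
  <start-state name="start">
    <transition to="seek approval"/>
  </start-state>

  <!-- Task node: adds an entry to a human actor's task list and waits -->
  <task-node name="seek approval">
    <task name="approve holiday request"/>
    <transition name="approved" to="record approval"/>
    <transition name="rejected" to="end"/>
  </task-node>

  <!-- State node: waits for an external system to signal completion -->
  <state name="record approval">
    <transition to="end"/>
  </state>

  <end-state name="end"/>
</process-definition>
```

The named transitions out of the task node correspond to the possible outcomes handed back by the human actor, which is exactly the "decision as input" case described above.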
Building JSF/EJB3 Applications

Packt
22 Oct 2009
15 min read
Building JSF/EJB3 Applications

This practical article shows you how to create a simple data-driven application using JSF and EJB3 technologies. The article also shows you how to use NetBeans IDE effectively when building enterprise applications.

What We Are Going to Build

The sample application we are building throughout the article is very straightforward. It offers just a few pages. When you click the Ask us a question link on the welcomeJSF.jsp page, you will be taken to a page on which you can submit a question. Once you're done with your question, you click the Submit button. As a result, the application persists your question along with your email in the database, and then displays a confirmation page. The web tier of the application is built using the JavaServer Faces technology, while EJB is used to implement the database-related code.

Software You Need to Follow the Article Exercise

To build the sample discussed here, you will need the following software components installed on your computer:

Java Standard Development Kit (JDK) 5.0 or higher
Sun Java System Application Server Platform Edition 9
MySQL
NetBeans IDE 5.5

Setting Up the Database

The first step in building our application is to set up the database to interact with. In fact, you could choose any database you like to be the application's backend database. For the purpose of this article, though, we will discuss how to use MySQL. To keep things simple, let's create a questions table that contains just three columns, outlined in the following table:

Column: trackno | Type: INTEGER AUTO_INCREMENT PRIMARY KEY | Description: Stores a track number generated automatically when a row is inserted.
Column: user_email | Type: VARCHAR(50) NOT NULL
Column: question | Type: VARCHAR(2000) NOT NULL

Of course, a real-world questions table would contain a few more columns, for example, dateOfSubmission containing the date and time of submitting the question.
To create the questions table, you first have to create a database and grant the required privileges to the user with which you are going to connect to that database. For example, you might create database my_db and user usr identified by password pswd. To do this, you should issue the following SQL commands from the MySQL Command Line Client:

CREATE DATABASE my_db;
GRANT CREATE, DROP, SELECT, INSERT, UPDATE, DELETE ON my_db.* TO 'usr'@'localhost' IDENTIFIED BY 'pswd';

In order to use the newly created database for subsequent statements, you should issue the following statement:

USE my_db

Finally, create the questions table in the database as follows:

CREATE TABLE questions(
    trackno INTEGER AUTO_INCREMENT PRIMARY KEY,
    user_email VARCHAR(50) NOT NULL,
    question VARCHAR(2000) NOT NULL
) ENGINE = InnoDB;

Once you're done, you have the database with the questions table required to store incoming users' questions.

Setting Up a Data Source for Your MySQL Database

Since the application we are going to build will interact with MySQL, you need to have an appropriate MySQL driver installed on your application server. For example, you might want to install MySQL Connector/J, which is the official JDBC driver for MySQL. You can pick up this software from the "downloads" page of the MySQL AB website at http://mysql.org/downloads/.
Install the driver on your GlassFish application server as follows:

Unpack the downloaded archive containing the driver to any directory on your machine.
Add mysql-connector-java-xxx-bin.jar to the CLASSPATH environment variable.
Make sure that your GlassFish application server is up and running.
Launch the Application Server Admin Console by pointing your browser at http://localhost:4848/.
Within the Common Tasks frame, find and double-click the Resources/JDBC/New Connection Pool node.
On the New Connection Pool page, click the New… button.
On the first step of the New Connection Pool master, set the fields as shown in the following table:

Name: jdbc/mysqlPool
Resource type: javax.sql.DataSource
Database Vendor: mysql

Click Next to move on to the second page of the master.
On the second page of New Connection Pool, set the properties to reflect your database settings, as shown in the following table:

databaseName: my_db
serverName: localhost
port: 3306
user: usr
password: pswd

Once you are done with setting the properties, click Finish. The newly created jdbc/mysqlPool connection pool should appear on the list. To check it, you should click its link to open it in a window, and then click the Ping button. If everything is okay, you should see a message telling you: Ping succeeded.

Creating the Project

The next step is to create an application project with NetBeans. To do this, follow the steps below:

Choose File/New project and then choose the Enterprise/Enterprise Application template for the project. Click Next.
On the Name and Location page of the master, specify the name for the project: JSF_EJB_App. Also make sure that Create EJB Module and Create Web Application Module are checked, and click Finish.

As a result, NetBeans generates a new enterprise application in a standard project, actually containing two projects: an EJB module project and a Web application project.
In this particular example, you will use the first project for EJBs and the second one for JSF pages.

Creating Entity Beans and Persistent Unit

You create entity beans and the persistent unit in the EJB module project—in this example, the JSF_EJB_App-ejb project. In fact, the sample discussed here will contain only one entity bean: Question. You might generate it automatically and then edit it as needed. To generate it with NetBeans, follow the steps below:

Make sure that your Sun Java System Application Server is up and running.
In the Project window, right-click the JSF_EJB_App-ejb project, and then choose New/Entity Classes From Database. As a result, you'll be prompted to connect to your Sun Java System Application Server. Do it by entering the appropriate credentials.
In the New Entity Classes from Database window, select jdbc/mysqlPool from the Data Source combobox. If you recall from the Setting up a Data Source for your MySQL database section discussed earlier in this article, jdbc/mysqlPool is a JDBC connection pool created on your application server.
In the Connect dialog that appears, you'll be prompted to connect to your MySQL database. Enter password pswd, set the Remember password during this session checkbox, and then click OK.
In the Available Tables listbox, choose questions, and click the Add button to move it to the Selected Tables listbox. After that, click Next.
On the next page of the New Entity Classes from Database master, fill in the Package field. For example, you might choose the following name: myappejb.entities. Then change the class name from Questions to Question in the Class Names box.
Next, click the Create Persistent Unit button.
In the Create Persistent Unit window, just click the Create button, leaving the default values of the fields.
In the New Entity Classes from Database dialog, click Finish.

As a result, NetBeans will generate the Question entity class, which you should edit so that the resultant class looks like the following:

package myappejb.entities;

import java.io.Serializable;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table(name = "questions")
public class Question implements Serializable {

    @Id
    @Column(name = "trackno")
    private Integer trackno;

    @Column(name = "user_email", nullable = false)
    private String userEmail;

    @Column(name = "question", nullable = false)
    private String question;

    public Question() {
    }

    public Integer getTrackno() {
        return this.trackno;
    }

    public void setTrackno(Integer trackno) {
        this.trackno = trackno;
    }

    public String getUserEmail() {
        return this.userEmail;
    }

    public void setUserEmail(String userEmail) {
        this.userEmail = userEmail;
    }

    public String getQuestion() {
        return this.question;
    }

    public void setQuestion(String question) {
        this.question = question;
    }
}

Once you're done, make sure to save all the changes made by choosing File/Save All. Having the above code in hand, you might of course do without first generating the Question entity from the database: simply create an empty Java file in the myappejb.entities package, and then insert the above code there. Then you could separately create the persistent unit. However, the idea behind building the Question entity with the master here is to show how you can quickly get a required piece of code to be then edited as needed, rather than creating it from scratch.

Creating Session Beans

To finish with the JSF_EJB_App-ejb project, let's proceed to creating the session bean that will be used by the web tier.
In particular, you need to create the QuestionSessionBean session bean, which will be responsible for persisting the data a user enters on the askquestion page. To generate the bean's frame with a wizard, follow the steps below:

1. In the Project window, right-click the JSF_EJB_App-ejb project, and then choose New/Session Bean.
2. In the New Session Bean window, enter the EJB name: QuestionSessionBean. Then specify the package: myappejb.ejb. Make sure that the Session Type is set to Stateless and Create Interface is set to Remote. Click Finish.

As a result, NetBeans should generate two Java files: QuestionSessionBean.java and QuestionSessionRemote.java. You should modify QuestionSessionBean.java so that it contains the following code:

package myappejb.ejb;

import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.ejb.TransactionManagement;
import javax.ejb.TransactionManagementType;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.PersistenceUnit;
import javax.transaction.UserTransaction;
import myappejb.entities.Question;

@Stateless
@TransactionManagement(TransactionManagementType.BEAN)
public class QuestionSessionBean implements myappejb.ejb.QuestionSessionRemote {

    /** Creates a new instance of QuestionSessionBean */
    public QuestionSessionBean() {
    }

    @Resource
    private UserTransaction utx;

    @PersistenceUnit(unitName = "JSF_EJB_App-ejbPU")
    private EntityManagerFactory emf;

    private EntityManager getEntityManager() {
        return emf.createEntityManager();
    }

    public void save(Question question) throws Exception {
        EntityManager em = getEntityManager();
        try {
            utx.begin();
            em.joinTransaction();
            em.persist(question);
            utx.commit();
        } catch (Exception ex) {
            try {
                utx.rollback();
                throw new Exception(ex.getLocalizedMessage());
            } catch (Exception e) {
                throw new Exception(e.getLocalizedMessage());
            }
        } finally {
            em.close();
        }
    }
}

Next, modify QuestionSessionRemote.java so that it looks like this:

package myappejb.ejb;

import javax.ejb.Remote;
import myappejb.entities.Question;

@Remote
public interface QuestionSessionRemote {

    void save(Question question) throws Exception;
}

Choose File/Save All to save the changes made. That's it. You just finished with your EJB module project.

Adding JSF Framework to the Project

Now that you have the entity and session beans created, let's switch to the JSF_EJB_App-war project, where you are building the web tier for the application. Before you can proceed to building JSF pages, you need to add the JavaServer Faces framework to the JSF_EJB_App-war project. To do this, follow the steps below:

1. In the Project window, right-click the JSF_EJB_App-war project, and then choose Properties.
2. In the Project Properties window, select Frameworks from Categories, and click the Add button. The Add a Framework dialog should appear.
3. In the Add a Framework dialog, choose JavaServer Faces and click OK.
4. Then click OK in the Project Properties dialog.

As a result, NetBeans adds the JavaServer Faces framework to the JSF_EJB_App-war project. Now, if you expand the Configuration Files folder under the JSF_EJB_App-war project node in the Project window, you should see, among other configuration files, faces-config.xml. Also notice the appearance of the welcomeJSF.jsp page in the Web Pages folder.

Creating JSF Managed Beans

The next step is to create the managed beans whose methods will be called from within the JSF pages. In this particular example, you need to create only one such bean: let's call it QuestionController. This can be achieved by following the steps below:

1. In the Project window, right-click the JSF_EJB_App-war project, and then choose New/Empty Java File.
2. In the New Empty Java File window, enter QuestionController as the class name and enter myappjsf.jsf in the Package field.
3. Then, click Finish.

In the generated empty Java file, insert the following code:

package myappjsf.jsf;

import javax.ejb.EJB;
import javax.faces.application.FacesMessage;
import javax.faces.context.FacesContext;
import myappejb.entities.Question;
import myappejb.ejb.QuestionSessionRemote;

public class QuestionController {

    @EJB
    private QuestionSessionRemote sbean;

    private Question question;

    public QuestionController() {
    }

    public Question getQuestion() {
        return question;
    }

    public void setQuestion(Question question) {
        this.question = question;
    }

    public String createSetup() {
        this.question = new Question();
        this.question.setTrackno(null);
        return "question_create";
    }

    public String create() {
        try {
            sbean.save(question);
            addSuccessMessage("Your question was successfully submitted.");
        } catch (Exception ex) {
            addErrorMessage(ex.getLocalizedMessage());
        }
        return "created";
    }

    public static void addErrorMessage(String msg) {
        FacesMessage facesMsg =
            new FacesMessage(FacesMessage.SEVERITY_ERROR, msg, msg);
        FacesContext fc = FacesContext.getCurrentInstance();
        fc.addMessage(null, facesMsg);
    }

    public static void addSuccessMessage(String msg) {
        FacesMessage facesMsg =
            new FacesMessage(FacesMessage.SEVERITY_INFO, msg, msg);
        FacesContext fc = FacesContext.getCurrentInstance();
        fc.addMessage("successInfo", facesMsg);
    }
}

Note that the bean injects the QuestionSessionRemote business interface, and that save() is declared to return void, so there is no return value to capture when it is called. Next, you need to add information about the newly created JSF managed bean to the faces-config.xml configuration file that was automatically generated when you added the JSF framework to the project.
Find this file in the JSF_EJB_App-war/Web Pages/WEB-INF folder in the Project window, and then insert the following tag between the <faces-config> and </faces-config> tags:

<managed-bean>
    <managed-bean-name>questionJSFBean</managed-bean-name>
    <managed-bean-class>myappjsf.jsf.QuestionController</managed-bean-class>
    <managed-bean-scope>session</managed-bean-scope>
</managed-bean>

Finally, make sure to choose File/Save All to save the changes made in faces-config.xml as well as in QuestionController.java.

Creating JSF Pages

To keep things simple, you create just one more JSF page: askquestion.jsp, where a user can submit a question. First, though, let's modify the welcomeJSF.jsp page so that you can use it to move on to askquestion.jsp and then return to it once a question has been submitted. To achieve this, modify welcomeJSF.jsp as follows:

<%@page contentType="text/html"%>
<%@page pageEncoding="UTF-8"%>
<%@taglib prefix="f" uri="http://java.sun.com/jsf/core"%>
<%@taglib prefix="h" uri="http://java.sun.com/jsf/html"%>
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
    <head>
        <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
        <title>JSP Page</title>
    </head>
    <body>
        <f:view>
            <h:messages errorStyle="color: red" infoStyle="color: green" layout="table"/>
            <h:form>
                <h1><h:outputText value="Ask us a question" /></h1>
                <h:commandLink action="#{questionJSFBean.createSetup}" value="New question"/>
                <br>
            </h:form>
        </f:view>
    </body>
</html>

Now you can move on and create askquestion.jsp.
To do this, follow the steps below:

1. In the Project window, right-click the JSF_EJB_App-war project, and then choose New/JSP.
2. In the New JSP File window, enter askquestion as the name for the page, and click Finish.
3. Modify the newly created askquestion.jsp so that it finally looks like this:

<%@page contentType="text/html"%>
<%@page pageEncoding="UTF-8"%>
<%@taglib uri="http://java.sun.com/jsf/core" prefix="f" %>
<%@taglib uri="http://java.sun.com/jsf/html" prefix="h" %>
<html>
    <head>
        <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
        <title>New Question</title>
    </head>
    <body>
        <f:view>
            <h:messages errorStyle="color: red" infoStyle="color: green" layout="table"/>
            <h1>Your question, please!</h1>
            <h:form>
                <h:panelGrid columns="2">
                    <h:outputText value="Your Email:"/>
                    <h:inputText id="userEmail" value="#{questionJSFBean.question.userEmail}" title="Your Email" />
                    <h:outputText value="Your Question:"/>
                    <h:inputTextarea id="question" value="#{questionJSFBean.question.question}" title="Your Question" rows="5" cols="35" />
                </h:panelGrid>
                <h:commandLink action="#{questionJSFBean.create}" value="Create"/>
                <br>
                <a href="/JSF_EJB_App-war/index.jsp">Back to index</a>
            </h:form>
        </f:view>
    </body>
</html>

The next step is to set up page navigation. Turning back to the faces-config.xml configuration file, insert the following code there.
<navigation-rule>
    <navigation-case>
        <from-outcome>question_create</from-outcome>
        <to-view-id>/askquestion.jsp</to-view-id>
    </navigation-case>
</navigation-rule>
<navigation-rule>
    <navigation-case>
        <from-outcome>created</from-outcome>
        <to-view-id>/welcomeJSF.jsp</to-view-id>
    </navigation-case>
</navigation-rule>

Make sure that the above tags are within the <faces-config> and </faces-config> root tags.

Check It

You are now ready to check the application you just created. To do this, right-click the JSF_EJB_App-ejb project in the Project window and choose Deploy Project. After the JSF_EJB_App-ejb project is successfully deployed, right-click the JSF_EJB_App-war project and choose Run Project. As a result, the newly created application will run in a browser. As mentioned earlier, the application contains very few pages, actually three. For testing purposes, you can submit a question, and then check the questions database table to make sure that everything went as planned.

Summary

Both JSF and EJB 3 are popular technologies when it comes to building enterprise applications. This simple example illustrates how you can use these technologies together in a complementary way.
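The save() method in QuestionSessionBean shown earlier follows a standard bean-managed transaction shape: begin, do the work, commit, roll back on failure, and always release resources in a finally block. Stripped of the Java EE APIs, that shape can be sketched in plain Java; Tx and saveWithTx below are hypothetical stand-ins for illustration, not real Java EE types:

```java
import java.util.ArrayList;
import java.util.List;

public class TxPatternSketch {
    // Minimal stand-in for a transaction: it just records what happened to it
    static class Tx {
        List<String> log = new ArrayList<>();
        void begin()    { log.add("begin"); }
        void commit()   { log.add("commit"); }
        void rollback() { log.add("rollback"); }
    }

    // Same shape as QuestionSessionBean.save(): commit on success,
    // roll back on failure, and always release resources afterwards
    static List<String> saveWithTx(Tx tx, Runnable work) {
        try {
            tx.begin();
            work.run();
            tx.commit();
        } catch (RuntimeException ex) {
            tx.rollback();
        } finally {
            tx.log.add("closed"); // plays the role of em.close() in the real bean
        }
        return tx.log;
    }

    public static void main(String[] args) {
        System.out.println(saveWithTx(new Tx(), () -> {}));
        // [begin, commit, closed]
        System.out.println(saveWithTx(new Tx(), () -> { throw new RuntimeException("boom"); }));
        // [begin, rollback, closed]
    }
}
```

The point of the finally block is that the EntityManager is closed whether or not the transaction committed, which is why the real bean calls em.close() there.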
Packt
22 Oct 2009
4 min read

Developing a Simple Workflow within SugarCRM

A Very Simple Workflow

In our simple workflow we'll assume that each task is carried out by one person at a time, and that all tasks are done sequentially (i.e. none are done in parallel). So, we'll look at the PPI Preliminary Investigation which, as you remember, maps to the standard SugarCRM Opportunity. Also, in this example, we're going to have a different person carrying out each one of the Investigation stages.

Setting up the Process Stages

If you look at SugarCRM then you'll see that by default none of the stages are related to investigations; they're all named using standard CRM terms. Obviously the first thing to do is to decide what the preliminary investigation stages actually are, and then map these to the SugarCRM stages. You'll realize that you'll need to edit the custom/include/language/en_us.lang.php file:

$app_list_strings['sales_stage_dom'] = array (
    'Prospecting' => 'Fact Gathering',
    'Qualification' => 'Witness and Subject Location',
    'Needs Analysis' => 'Witness and Subject Interviews',
    'Value Proposition' => 'Scene Investigation',
    'Id. Decision Makers' => 'Financial and background Investigation',
    'Perception Analysis' => 'Document and evidence retrieval',
    'Proposal/Price Quote' => 'Covert Camera surveillance',
    'Negotiation/Review' => 'Wiretapping',
    'Closed Won' => 'Full Investigation required',
    'Closed Lost' => 'Insufficient Evidence',
);

Don't forget that you can also do this via Studio. However, once you've added your mapping to the custom/include/language/en_us.lang.php file and refreshed your browser, you'll see the new stages. Now that our stages are set up, we need to know who'll be carrying out each one.

Deciding Who Does What

In our simple workflow there may not be the need to do anything further.
Each person just needs to know who does what next. For example, once Kurt finishes the 'Covert Camera surveillance' stage, he just needs to update the Preliminary Investigation so that the stage is set to 'Wiretapping' and the assigned user to 'dobbsm'. However, things are rarely as simple as that. It's much more likely that:

Investigations may be based on geographical locations, so that the above table may only apply to investigations based in London, while investigations based in New York follow the same process but with a different set of staff.

On Mondays Fran does 'Witness and Subject Location' and William does 'Fact Gathering'.

This means, of course, that we need to be using some business rules.

Introducing Business Rules

There are six 'triggers' that will cause the logic hooks to fire:

after_retrieve
before_save
before_delete
after_delete
before_undelete
after_undelete

The logic hooks are stored in custom/modules/<module name>/logic_hook.php, so for 'Preliminary Inquiries' this will be custom/modules/Opportunities/logic_hook.php.
You'll also remember, of course, that the logic hook file needs to contain:

The priority of the business rule
The name of the business rule
The file containing the business rule
The business rule class
The business rule function

So, custom/modules/Opportunities/logic_hook.php needs to contain something like:

<?php
# As always, ensure that the file can only be accessed through SugarCRM
if (!defined('sugarEntry') || !sugarEntry) die('Not A Valid Entry Point');

$hook_array = Array(); # Create an array
$hook_array['before_save'] = Array();
$hook_array['before_save'][] = Array(1, 'ppi_workflow',
    'custom/include/ppi_workflow.php', 'ppi_workflow', 'ppi_workflow');
?>

Next we'll need the file that the logic hook will be calling, but to start with this can be very basic, so custom/include/ppi_workflow.php just needs to contain something like:

<?php
# Define the entry point
if (!defined('sugarEntry') || !sugarEntry) die('Not A Valid Entry Point');

# Load any required files
require_once('data/SugarBean.php');
require_once('modules/Opportunities/Opportunity.php');

# Define the class
class ppi_workflow
{
    function ppi_workflow(&$bean, $event, $arguments)
    {
    }
}
?>

With those two files set up as above, nothing obvious will change in the operation of SugarCRM: the logic hook will fire, but we haven't told it to do anything, and so that is what we'll do now. When the logic hook does run (i.e. when any Preliminary Investigation is saved) we would want it to:

Check to see what stage we're now at
Define the assigned user accordingly
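The 'check the stage, set the next assignee' logic the hook body will eventually contain boils down to a table lookup. Here is a minimal sketch of that shape, written in Java purely as a language-neutral illustration; only the 'Covert Camera surveillance' to 'Wiretapping'/'dobbsm' transition comes from the text above, and a real table would be filled in from your own business rules:

```java
import java.util.HashMap;
import java.util.Map;

public class StageRouter {
    // For each completed stage, the next stage and the user it is assigned to.
    // Only the 'Covert Camera surveillance' row is taken from the example text;
    // the remaining rows are left for the actual business rules.
    static final Map<String, String[]> NEXT = new HashMap<>();
    static {
        NEXT.put("Covert Camera surveillance",
                 new String[]{"Wiretapping", "dobbsm"});
    }

    // Returns {nextStage, assignedUser}, or null when no rule matches
    static String[] route(String currentStage) {
        return NEXT.get(currentStage);
    }

    public static void main(String[] args) {
        String[] next = route("Covert Camera surveillance");
        System.out.println("Next stage: " + next[0] + ", assigned to: " + next[1]);
    }
}
```

In the real hook, route() would read $bean->sales_stage, and its result would be written back to $bean->sales_stage and $bean->assigned_user_id before the save completes.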
Packt
22 Oct 2009
10 min read

LINQ to Objects

Without LINQ, we would have to go through the values one by one and then find the required details. However, using LINQ we can directly query collections and filter the required values without any looping. LINQ provides powerful filtering, ordering, and grouping capabilities that require minimal coding. For example, if we want to find out the types stored in an assembly and then filter the required details, we can use LINQ to query the assembly details using the System.Reflection classes. The System.Reflection namespace contains types that retrieve information about assemblies, modules, members, parameters, and other entities in managed code by examining their metadata. Also, the files under a directory are a collection of objects that can be queried using LINQ. We shall see some examples of querying such collections.

Array of Integers

The following example shows an integer array that contains a set of integers. We can apply LINQ queries on the array to fetch the required values.

int[] integers = { 1, 6, 2, 27, 10, 33, 12, 8, 14, 5 };

IEnumerable<int> twoDigits =
    from numbers in integers
    where numbers >= 10
    select numbers;

Console.WriteLine("Integers > 10:");
foreach (var number in twoDigits)
{
    Console.WriteLine(number);
}

The integers variable contains an array of integers with different values. The variable twoDigits, which is of type IEnumerable<int>, holds the query. To get the actual result, the query has to be executed. The actual query execution happens when the query variable is iterated through the foreach loop, which calls GetEnumerator() to enumerate the result. Any variable of type IEnumerable<T> can be enumerated using the foreach construct. Types that support IEnumerable<T> or a derived interface, such as the generic IQueryable<T>, are called queryable types. All collections such as list, dictionary, and other classes are queryable.
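As a point of comparison, the same filter-without-looping idea looks very similar in Java's Streams API. This is just an illustrative analogue of the C# example above, not part of the original article:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class FilterExample {
    // Keep only the values >= 10, mirroring the LINQ 'where numbers >= 10' clause
    static List<Integer> twoDigits(int[] integers) {
        return Arrays.stream(integers)
                     .filter(n -> n >= 10)
                     .boxed()
                     .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        int[] integers = {1, 6, 2, 27, 10, 33, 12, 8, 14, 5};
        System.out.println("Values >= 10: " + twoDigits(integers));
        // Values >= 10: [27, 10, 33, 12, 14]
    }
}
```

As with LINQ, the stream pipeline is lazy: nothing is evaluated until a terminal operation such as collect() is invoked, just as the LINQ query runs only when the foreach loop enumerates it.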
There are some non-generic IEnumerable collections, like ArrayList, that can also be queried using LINQ. For that, we have to explicitly declare the type of the range variable to match the specific type of the objects in the collection, as explained in the examples later in this article. The twoDigits variable holds the query to fetch the values that are greater than or equal to 10. The foreach loop executes the query, loops through the values retrieved from the integer array, and writes them to the console. This is an easy way of getting the required values from a collection.

If we want only the first four values from a collection, we can apply the Take() query operator on the collection object. Following is an example which takes the first four integers from the collection. The four integers in the resultant collection are displayed using the foreach loop.

IEnumerable<int> firstFourNumbers = integers.Take(4);
Console.WriteLine("First 4 numbers:");
foreach (var num in firstFourNumbers)
{
    Console.WriteLine(num);
}

The opposite of the Take() operator is the Skip() operator, which is used to skip the specified number of items in the collection and retrieve the rest. The following example skips the first four items in the list and retrieves the remaining ones.

IEnumerable<int> skipFirstFourNumbers = integers.Skip(4);
Console.WriteLine("Skip first 4 numbers:");
foreach (var num in skipFirstFourNumbers)
{
    Console.WriteLine(num);
}

This example shows the way to take or skip a specified number of items from the collection. So what if we want to skip or take items until we find a match in the list? We have operators for this too: TakeWhile() and SkipWhile(). For example, the following code shows how to get the list of numbers from the integers collection until 50 is found.
TakeWhile() uses an expression to include elements from the collection as long as the condition is true, ignoring the remaining elements in the list. The expression represents the condition used to test the elements in the collection for a match.

int[] integers = { 1, 9, 5, 3, 7, 2, 11, 23, 50, 41, 6, 8 };
IEnumerable<int> takeWhileNumber = integers.TakeWhile(num =>
    num.CompareTo(50) != 0);
Console.WriteLine("Take while number equals 50");
foreach (int num in takeWhileNumber)
{
    Console.WriteLine(num.ToString());
}

Similarly, we can skip items in the collection using SkipWhile(). It uses an expression to bypass elements in the collection as long as the condition is true; the expression evaluates the condition for each element and returns a boolean. The first element for which the expression returns false stops the evaluation: that element and all the remaining elements in the collection are returned, and the expression is not executed for any of them. These operators give better results when used against ordered lists, as the expression is skipped for the rest of the elements once the first match is found.

IEnumerable<int> skipWhileNumber = integers.SkipWhile(num =>
    num.CompareTo(50) != 0);
Console.WriteLine("Skip while number equals 50");
foreach (int num in skipWhileNumber)
{
    Console.WriteLine(num.ToString());
}

Collection of Objects

In this section we will see how we can query a custom-built collection of objects. Let us take the Icecream object, build a collection of Icecream objects, and then query that collection. The Icecream class in the following code contains different properties, such as Name, Ingredients, TotalFat, and Cholesterol.
public class Icecream
{
    public string Name { get; set; }
    public string Ingredients { get; set; }
    public string TotalFat { get; set; }
    public string Cholesterol { get; set; }
    public string TotalCarbohydrates { get; set; }
    public string Protein { get; set; }
    public double Price { get; set; }
}

Now build the icecreamsList collection using the class defined previously.

List<Icecream> icecreamsList = new List<Icecream>
{
    new Icecream { Name = "Chocolate Fudge Icecream",
        Ingredients = "cream, milk, mono and diglycerides...",
        Cholesterol = "50mg", Protein = "4g", TotalCarbohydrates = "35g",
        TotalFat = "20g", Price = 10.5 },
    new Icecream { Name = "Vanilla Icecream",
        Ingredients = "vanilla extract, guar gum, cream...",
        Cholesterol = "65mg", Protein = "4g", TotalCarbohydrates = "26g",
        TotalFat = "16g", Price = 9.80 },
    new Icecream { Name = "Banana Split Icecream",
        Ingredients = "Banana, guar gum, cream...",
        Cholesterol = "58mg", Protein = "6g", TotalCarbohydrates = "24g",
        TotalFat = "13g", Price = 7.5 }
};

We now have an icecreamsList collection which contains three objects of the Icecream type. Now let us say we have to retrieve all the ice-creams that cost less. We could use a looping approach, looking at the Price value of each object in the list one by one and retrieving the objects with a low price. Using LINQ, we can avoid looping through all the objects and their properties to find the required ones; a LINQ query finds this out easily. Following is a query that fetches the low-priced ice-creams from the collection. The query uses a where condition to do this, similar to relational database queries. The query gets executed when the variable of type IEnumerable is enumerated, that is, when it is referred to in the foreach loop.
List<Icecream> Icecreams = CreateIcecreamsList();
IEnumerable<Icecream> IcecreamsWithLessPrice =
    from ice in Icecreams
    where ice.Price < 10
    select ice;
Console.WriteLine("Ice Creams with price less than 10:");
foreach (Icecream ice in IcecreamsWithLessPrice)
{
    Console.WriteLine("{0} is {1}", ice.Name, ice.Price);
}

Just as we used a List<Icecream> of objects, we can also use an ArrayList to hold the objects, and a LINQ query can be used to retrieve specific objects from the collection according to our needs. For example, following is the code to add the same Icecream objects to an ArrayList, as we did in the previous example.

ArrayList arrListIcecreams = new ArrayList();
arrListIcecreams.Add(new Icecream { Name = "Chocolate Fudge Icecream",
    Ingredients = "cream, milk, mono and diglycerides...",
    Cholesterol = "50mg", Protein = "4g", TotalCarbohydrates = "35g",
    TotalFat = "20g", Price = 10.5 });
arrListIcecreams.Add(new Icecream { Name = "Vanilla Icecream",
    Ingredients = "vanilla extract, guar gum, cream...",
    Cholesterol = "65mg", Protein = "4g", TotalCarbohydrates = "26g",
    TotalFat = "16g", Price = 9.80 });
arrListIcecreams.Add(new Icecream { Name = "Banana Split Icecream",
    Ingredients = "Banana, guar gum, cream...", Cholesterol = "58mg",
    Protein = "6g", TotalCarbohydrates = "24g", TotalFat = "13g",
    Price = 7.5 });

Following is the query to fetch the low-priced ice-creams from the list. Note that the range variable is explicitly typed as Icecream, which is required when querying a non-generic collection such as ArrayList.

var queryIcecreamList = from Icecream icecream in arrListIcecreams
                        where icecream.Price < 10
                        select icecream;

Use the foreach loop, shown as follows, to display the price of the objects retrieved using the above query.

foreach (Icecream ice in queryIcecreamList)
    Console.WriteLine("Icecream Price : " + ice.Price);
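For comparison, the object-filtering query above has a close analogue in Java Streams. The trimmed-down Icecream class below keeps only the two members the query touches, purely for illustration:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class IcecreamQuery {
    // Only the two fields the query needs; the full class has several more
    static class Icecream {
        final String name;
        final double price;
        Icecream(String name, double price) { this.name = name; this.price = price; }
    }

    // Equivalent of: from ice in icecreamsList where ice.Price < 10 select ice
    static List<String> namesCheaperThan(List<Icecream> list, double limit) {
        return list.stream()
                   .filter(ice -> ice.price < limit)
                   .map(ice -> ice.name)
                   .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Icecream> menu = Arrays.asList(
            new Icecream("Chocolate Fudge Icecream", 10.5),
            new Icecream("Vanilla Icecream", 9.80),
            new Icecream("Banana Split Icecream", 7.5));
        System.out.println(namesCheaperThan(menu, 10));
        // [Vanilla Icecream, Banana Split Icecream]
    }
}
```

Just as the LINQ where clause avoids an explicit loop over the collection, the filter() step expresses the condition once and lets the pipeline apply it to every element.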
Packt
22 Oct 2009
6 min read

Manual, Generic, and Ordered Tests using Visual Studio 2008

The following screenshot describes a simple web application, which has a page for new user registration. The user has to provide the necessary field details. After entering the details, the user clicks on the Register button provided on the web page to submit all the details and register with the site. To confirm this to the user, the system will send a notification with a welcoming email to the registered user. The mail is sent to the email address provided by the user.

In the application shown in the above screenshot, the entire registration process cannot be automated for testing. For example, the email verification and checking the confirmation email sent by the system will not be automated, as the user has to go and check the email manually. This part of the manual testing process will be explained in detail in this article.

Manual tests

Manual testing, as described earlier, is the simplest type of testing, carried out by the testers without any automation tool. A manual test may contain a single test or multiple tests inside it. The manual test type is the best choice when the test is too difficult or complex to automate, or if the budget allotted for the application is not sufficient for automation. Visual Studio 2008 supports two manual test file types: one as a text file and the other as a Microsoft Word document.

Manual test using text format

This format helps us to create the test in text format within the Visual Studio IDE. A predefined template is available in Visual Studio for authoring this test. This template provides the structure for creating the tests. This format has the extension .mtx, and Visual Studio serves as the editor for this test format. For creating this test in Visual Studio, either create a new test project and then add the test, or select the menu option Test | New Test... and then choose the option to add the test to a new project.
Now create the test using the menu option and select Manual Test (Text Format) from the available list, as shown in the screenshot below. You can see the Add to Test Project drop-down, which lists the different options for adding the test to a test project. If you have not yet created a test project and have selected the option to create the test, the selected drop-down option will create a new test project for the test to be added to. If you already have a test project, then you will also see that project in the list, so the new test can be added to it. We can choose any option as per our need. For this sample, let us create a new test project in C#, so the first option from the Add to Test Project drop-down is selected in this case. After selecting the option, provide the name for the new test project when the system asks for it. Let us name it TestingAppTest project. Now you can see the project created under the solution, and the test template added to the test project, as shown next.

The template contains detailed information for each section. This will help the tester, or whoever is writing the test case, to write the steps required for this test. Now update the test case template created above with the test steps required for checking the email confirmation message after the registration process. The test document also contains the title for the test, a description, and the revision history of the changes made to the test case. Before executing the test and looking into the details of the run and the properties of the test, we will create the same test using Microsoft Word format, as described in the next section.

Manual test using Microsoft Word format

This is similar to the manual test that was created using text format, except that the file type is Microsoft Word, with the extension .mht.
While creating the manual test, choose the template Manual Test (Word format) instead of Manual Test (Text Format), as explained in the previous section. This option is available only if Microsoft Word is installed on the system. It will launch a Word template using the MS Word installation (version 2003 or later) on the system for writing the test details, as shown in the following screenshot.

The Word format gives us richer formatting capabilities, with different fonts, colors, and styles for the text, and with graphic images and tables embedded in the test. The document not only provides the template but also help information for each and every section, so that the tester can easily understand the sections and write the test cases. This help information is provided in both the Word and text formats of the manual tests.

In the test document seen in the previous screenshot, we can fill in the Test Details, Test Target, Test Steps, and Revision History, similar to what we did for the text format. The completed test case document will look like this:

Save the test details and close the document. Now we have both formats of manual tests in the project. Open the Test View window or the Test List Editor window to see the list of tests in the project. It should list two manual tests with their names and the project with which the tests are associated. The tests shown in the Test View window look like the one shown here:

The same test list shown by the Test List Editor would look like the one shown below. Additional properties, like the test list name and the name of the project the test belongs to, are also shown in the list editor. There are options for each test either to run it or to add it to a particular list. Manual tests also have other properties, which we can make use of during testing.
These properties can be seen in the Properties window, which can be opened by selecting the manual test either in the Test View or in the Test List Editor window, right-clicking the test, and selecting the Properties option. The same window can also be opened by choosing the menu option View | Properties Window. Both formats of manual testing have the same set of properties. Some of these properties are editable, while some are read-only and are set by the application based on the test type. Some properties are directly related to TFS. Visual Studio Team Foundation Server (VSTFS) is the integrated collaboration server, which combines team portal, work item tracking, build management, process guidance, and version control into a unified server.
Packt
22 Oct 2009
10 min read

Enterprise JavaBeans

Readers familiar with previous versions of J2EE will notice that Entity Beans were not mentioned in the above paragraph. In Java EE 5, Entity Beans have been deprecated in favor of the Java Persistence API (JPA). Entity Beans are still supported for backwards compatibility; however, the preferred way of doing object-relational mapping with Java EE 5 is through JPA. Refer to Chapter 4 in the book Java EE 5 Development using GlassFish Application Server for a detailed discussion on JPA.

Session Beans

As we previously mentioned, session beans typically encapsulate business logic. In Java EE 5, only two artifacts need to be created in order to create a session bean: the bean itself, and a business interface. These artifacts need to be decorated with the proper annotations to let the EJB container know they are session beans. Previous versions of J2EE required application developers to create several artifacts in order to create a session bean: the bean itself, a local or remote interface (or both), a local home or a remote home interface (or both), and a deployment descriptor. As we shall see in this article, EJB development has been greatly simplified in Java EE 5.

Simple Session Bean

The following example illustrates a very simple session bean:

package net.ensode.glassfishbook;

import javax.ejb.Stateless;

@Stateless
public class SimpleSessionBean implements SimpleSession
{
    private String message = "If you don't see this, it didn't work!";

    public String getMessage()
    {
        return message;
    }
}

The @Stateless annotation lets the EJB container know that this class is a stateless session bean. There are two types of session beans: stateless and stateful. Before we explain the difference between these two types of session beans, we need to clarify how an instance of an EJB is provided to an EJB client application. When EJBs (both session beans and message-driven beans) are deployed, the EJB container creates a series of instances of each EJB.
This is what is typically referred to as the EJB pool. When an EJB client application obtains an instance of an EJB, one of the instances in the pool is provided to it.

The difference between stateful and stateless session beans is that stateful session beans maintain conversational state with the client, whereas stateless session beans do not. In simple terms, this means that when an EJB client application obtains an instance of a stateful session bean, the same instance of the EJB is provided for each method invocation; therefore, it is safe to modify instance variables on a stateful session bean, as they will retain their values for the next method call. The EJB container may provide any instance of an EJB in the pool when an EJB client application requests an instance of a stateless session bean. As we are not guaranteed the same instance for every method call, values set on instance variables in a stateless session bean may appear to be "lost" (they are not really lost; the modification lives in another instance of the EJB in the pool).

Other than being decorated with the @Stateless annotation, there is nothing special about this class. Notice that it implements an interface called SimpleSession. This interface is the bean's business interface. The SimpleSession interface is shown next:

package net.ensode.glassfishbook;

import javax.ejb.Remote;

@Remote
public interface SimpleSession
{
  public String getMessage();
}

The only peculiar thing about this interface is that it is decorated with the @Remote annotation. This annotation indicates that this is a remote business interface. What this means is that the interface may be in a different JVM than the client application invoking it. Remote business interfaces may even be invoked across the network. Business interfaces may also be decorated with the @Local annotation. This annotation indicates that the business interface is a local business interface.
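The pooling behavior described above is easy to see outside of any EJB container. The following plain-Java sketch is not EJB code at all; the CounterBean and BeanPool classes are invented purely to illustrate why instance state survives across calls on a dedicated (stateful-style) instance but appears to be lost when calls are routed through a pool of interchangeable (stateless-style) instances:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class PoolDemo {
    // A bean that keeps conversational state in an instance variable.
    static class CounterBean {
        private int calls = 0;
        int increment() { return ++calls; }
    }

    // A toy "stateless" pool: each invocation may be served by a different instance.
    static class BeanPool {
        private final Deque<CounterBean> pool = new ArrayDeque<>();
        BeanPool(int size) {
            for (int i = 0; i < size; i++) pool.add(new CounterBean());
        }
        int invoke() {
            CounterBean bean = pool.poll();   // borrow any instance
            int result = bean.increment();
            pool.add(bean);                   // return it to the back of the pool
            return result;
        }
    }

    public static void main(String[] args) {
        // Stateful-style usage: the same instance sees every call.
        CounterBean stateful = new CounterBean();
        stateful.increment();
        System.out.println("stateful, 2nd call: " + stateful.increment()); // 2

        // Stateless-style usage: calls are spread across the pool,
        // so no single instance accumulates the count.
        BeanPool pool = new BeanPool(3);
        pool.invoke();
        System.out.println("pooled, 2nd call: " + pool.invoke()); // 1
    }
}
```

The second pooled invocation lands on a different instance than the first, which is exactly the "lost" update described above.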
Local business interface implementations must be in the same JVM as the client application invoking their methods. As remote business interfaces can be invoked either from the same JVM or from a different JVM than the client application, at first glance we might be tempted to make all of our business interfaces remote. Before doing so, we must be aware that the flexibility provided by remote business interfaces comes with a performance penalty, because method invocations are made under the assumption that they may travel across the network. As a matter of fact, most typical Java EE applications consist of web applications acting as client applications for EJBs; in this case, the client application and the EJB run on the same JVM. Therefore, local interfaces are used a lot more frequently than remote business interfaces.

Once we have compiled the session bean and its corresponding business interface, we need to place them in a JAR file and deploy them. Just as with WAR files, the easiest way to deploy an EJB JAR file is to copy it to [glassfish installation directory]/glassfish/domains/domain1/autodeploy.

Now that we have seen the session bean and its corresponding business interface, let's take a look at a sample client application:

package net.ensode.glassfishbook;

import javax.ejb.EJB;

public class SessionBeanClient
{
  @EJB
  private static SimpleSession simpleSession;

  private void invokeSessionBeanMethods()
  {
    System.out.println(simpleSession.getMessage());
    System.out.println("\nSimpleSession is of type: "
        + simpleSession.getClass().getName());
  }

  public static void main(String[] args)
  {
    new SessionBeanClient().invokeSessionBeanMethods();
  }
}

The above code simply declares an instance variable of type net.ensode.SimpleSession, which is the business interface for our session bean. The instance variable is decorated with the @EJB annotation; this annotation lets the EJB container know that this variable is a business interface for a session bean.
The EJB container then injects an implementation of the business interface for the client code to use. As our client is a stand-alone application (as opposed to a Java EE artifact such as a WAR file), in order for it to be able to access code deployed in the server, it must be placed in a JAR file and executed through the appclient utility. This utility can be found at [glassfish installation directory]/glassfish/bin/. Assuming this path is in the PATH environment variable, and assuming we placed our client code in a JAR file called simplesessionbeanclient.jar, we would execute the above client code by typing the following command in the command line:

appclient -client simplesessionbeanclient.jar

Executing the above command results in the following console output:

If you don't see this, it didn't work!
SimpleSession is of type: net.ensode.glassfishbook._SimpleSession_Wrapper

which is the output of the SessionBeanClient class. The first line of output is simply the return value of the getMessage() method we implemented in the session bean. The second line of output displays the fully qualified class name of the class implementing the business interface. Notice that the class name is not the fully qualified name of the session bean we wrote; instead, what is actually provided is an implementation of the business interface created behind the scenes by the EJB container.

A More Realistic Example

In the previous section, we saw a very simple, "Hello world" type of example. In this section, we will show a more realistic example. Session beans are frequently used as Data Access Objects (DAOs). Sometimes they are used as a wrapper for JDBC calls; other times they are used to wrap calls that obtain or modify JPA entities. In this section, we will take the latter approach. The following example illustrates how to implement the DAO design pattern in a session bean.
Before looking at the bean implementation, let's look at the business interface corresponding to it:

package net.ensode.glassfishbook;

import javax.ejb.Remote;

@Remote
public interface CustomerDao
{
  public void saveCustomer(Customer customer);
  public Customer getCustomer(Long customerId);
  public void deleteCustomer(Customer customer);
}

As we can see, the above is a remote interface declaring three methods; the saveCustomer() method saves customer data to the database, the getCustomer() method obtains data for a customer from the database, and the deleteCustomer() method deletes customer data from the database. Let's now take a look at the session bean implementing the above business interface. As we are about to see, there are some differences between the way JPA code is implemented in a session bean and in a plain old Java object.

package net.ensode.glassfishbook;

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.sql.DataSource;

@Stateless
public class CustomerDaoBean implements CustomerDao
{
  @PersistenceContext
  private EntityManager entityManager;

  @Resource(name = "jdbc/__CustomerDBPool")
  private DataSource dataSource;

  public void saveCustomer(Customer customer)
  {
    if (customer.getCustomerId() == null)
    {
      saveNewCustomer(customer);
    }
    else
    {
      updateCustomer(customer);
    }
  }

  private void saveNewCustomer(Customer customer)
  {
    customer.setCustomerId(getNewCustomerId());
    entityManager.persist(customer);
  }

  private void updateCustomer(Customer customer)
  {
    entityManager.merge(customer);
  }

  public Customer getCustomer(Long customerId)
  {
    Customer customer;
    customer = entityManager.find(Customer.class, customerId);
    return customer;
  }

  public void deleteCustomer(Customer customer)
  {
    entityManager.remove(customer);
  }

  private Long getNewCustomerId()
  {
    Connection connection;
    Long newCustomerId = null;

    try
    {
      connection = dataSource.getConnection();
      PreparedStatement preparedStatement = connection.prepareStatement(
          "select max(customer_id)+1 as new_customer_id "
              + "from customers");

      ResultSet resultSet = preparedStatement.executeQuery();

      if (resultSet != null && resultSet.next())
      {
        newCustomerId = resultSet.getLong("new_customer_id");
      }

      connection.close();
    }
    catch (SQLException e)
    {
      e.printStackTrace();
    }

    return newCustomerId;
  }
}

The first difference we should notice is that an instance of javax.persistence.EntityManager is injected directly into the session bean. In previous JPA examples, we had to inject an instance of javax.persistence.EntityManagerFactory, then use the injected EntityManagerFactory instance to obtain an instance of EntityManager. The reason we had to do this was that our previous examples were not thread safe, meaning that the same code could potentially be executed concurrently by more than one user. As EntityManager is not designed to be used concurrently by more than one thread, we used an EntityManagerFactory instance to provide each thread with its own instance of EntityManager. Since the EJB container assigns a session bean to a single client at a time, session beans are inherently thread safe; therefore, we can inject an instance of EntityManager directly into a session bean.

The next difference between this session bean and previous JPA examples is that in previous examples, JPA calls were wrapped between calls to UserTransaction.begin() and UserTransaction.commit(). We had to do this because JPA calls are required to be wrapped in a transaction; if they are not, most JPA calls will throw a TransactionRequiredException. We don't have to explicitly wrap JPA calls in a transaction here because session bean methods are implicitly transactional; there is nothing we need to do to make them that way.
This default behavior is what is known as Container-Managed Transactions. Container-Managed Transactions are discussed in detail later in this article. When a JPA entity is retrieved in one transaction and updated in a different transaction, the EntityManager.merge() method needs to be invoked to update the data in the database. Invoking EntityManager.persist() in this case will result in a "Cannot persist detached object" exception.
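One final note on the CustomerDaoBean listing: the select max(customer_id)+1 query in getNewCustomerId() can hand the same id to two clients that call it at nearly the same moment, since both may read the same maximum before either new row is committed. In production code, a database sequence or identity column is the usual answer. Purely to illustrate the race-free idea in plain Java (the class name and seed value below are invented for this sketch, and it is not a substitute for database-side id generation), an atomic counter seeded once from the database avoids re-querying max(...)+1 on every call:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.atomic.AtomicLong;

public class IdGeneratorDemo {
    // Seeded once (e.g. from "select max(customer_id) from customers"),
    // then incremented atomically instead of re-running max(...)+1.
    static final AtomicLong nextCustomerId = new AtomicLong(100L);

    static long getNewCustomerId() {
        return nextCustomerId.incrementAndGet();
    }

    public static void main(String[] args) throws Exception {
        // Even under concurrent access, every caller receives a distinct id.
        ExecutorService pool = Executors.newFixedThreadPool(4);
        Callable<Long> task = IdGeneratorDemo::getNewCustomerId;
        Future<Long> a = pool.submit(task);
        Future<Long> b = pool.submit(task);
        System.out.println(a.get().equals(b.get()) ? "collision" : "distinct ids");
        pool.shutdown();
    }
}
```

Each incrementAndGet() call is atomic, so two concurrent callers can never observe the same value, which is exactly the guarantee the max(...)+1 query lacks.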
Visual Studio 2008 Test Types

Packt
22 Oct 2009
15 min read
Software testing in Visual Studio Team System 2008

Before going into the details of actual testing with Visual Studio 2008, we need to understand the different tools provided by Visual Studio Team System (VSTS) and their usage. Once we understand the tools, we should be able to perform the different types of testing supported by VSTS. As we go along creating a number of different tests, we will run into the same difficulty we face with code: managing the tests and their different versions during application development. Features such as the Test List Editor, the Test View, and the Team Foundation Server (TFS) help in managing and maintaining all the tests created using VSTS. Using the Test List Editor, we can group similar tests, create any number of lists, and add or delete tests from a list.

The other aspect of this article is to look at the different file types created in Visual Studio during testing. Most of these files are in XML format and are created automatically whenever the corresponding test is created. Tools such as the Team Explorer, Code Coverage, Test View, and Test Results are not new to Visual Studio 2008; they have been available since Visual Studio 2005. As we go through the windows and their purposes, we can check the IDE and the tools' integration into Visual Studio 2008.

Testing as part of the Software Development Life Cycle

The main objective of testing is to find defects early in the SDLC. If a defect is found early, the cost of fixing it is low; if it is found during the production or implementation stage, the cost is much higher. Moreover, testing is carried out to assure the quality and reliability of the software. To find defects earlier, testing activities should start early, that is, in the Requirement phase of the SDLC, and continue till the end of the SDLC. Various testing activities take place in the Coding phase. Based on the design, the developers start coding the modules.
Static and dynamic testing is carried out by the developers. Code reviews and code walkthroughs are also conducted. Once coding is completed, the Validation phase begins, where different forms of testing are performed.

Unit Testing: This is the first stage of testing in the SDLC. It is performed by the developer to check whether the developed code meets the stated requirements. If there are any defects, the developer logs them against the code and fixes the code. The code is retested, and then moved to the testers once it is confirmed to be defect-free for that piece of functionality. This phase identifies a lot of defects and also reduces the cost and time involved in testing the application and fixing the code.

Integration Testing: This testing is carried out between two or more modules or functions together, with the intent of finding interface defects between them. It is sometimes completed as part of unit or functional testing, and sometimes becomes its own standalone test phase. On a larger level, integration testing can involve putting together groups of modules and functions with the goal of completing and verifying that the system meets the system requirements. Defects found are logged and later fixed by the developers. There are different ways of carrying out integration testing, such as top-down testing and bottom-up testing:

The Top-Down approach tests the highest-level components first, integrating them to verify the high-level logic and flow. The low-level components are tested later.

The Bottom-Up approach is the exact opposite: the low-level functionalities are tested and integrated first, and then the high-level functionalities are tested. The disadvantage of this approach is that the high-level, and often most complex, functionalities are tested last.

The Umbrella approach uses both the top-down and bottom-up patterns.
The inputs for functions are integrated in the bottom-up approach, and then the outputs of the functions are integrated in the top-down approach.

System Testing: This compares the system specifications against the actual system. The system test design is derived from the system design documents and is used in this phase. Sometimes system testing is automated using testing tools. Once all the modules are integrated, several errors may arise. Testing done at this stage is called system testing. Defects found in this testing are logged and fixed by the developers.

Regression Testing: This is not listed as a separate testing phase, but is carried out once the developers have fixed defects. The main objective of this type of testing is to determine whether bug fixes have been successful and have not created any new problems. It also ensures that no degradation of baseline functionality has occurred, and checks whether any new functionality was introduced in the software.

Types of testing

Visual Studio provides a range of testing types and tools for software applications. Following are some of those types:

Unit test
Manual test
Web test
Load test
Stress test
Performance test
Capacity Planning test
Generic test
Ordered test

In addition to these types, there are additional tools provided to manage, order, and execute the tests created in Visual Studio. Some of these are the Test View, the Test List Editor, and the Test Results window. We will look at these testing tools and the supporting tools for managing tests in Visual Studio 2008 in detail later.

Unit test

As soon as the developer finishes the code, the developer wants to know if it is producing the expected result, before getting into more detailed testing or handing the component over to a tester. The type of testing performed by developers to test their own code is called Unit testing. Visual Studio has great support for Unit testing.
The main goal of unit testing is to isolate each piece of code or individual functionality, and to test whether a method returns the expected result for different sets of parameter values. It is extremely important to run unit tests to catch defects at an early stage. The methods generated by the automated unit testing tool call the methods in the classes from the source code and test the output of each method by comparing it with the expected values. The unit test tool produces a separate set of test code for the source. Using the test code, we can pass parameter values to a method, take the value returned by the method, and compare it with the expected result.

Unit testing code can be created easily using the code generation feature, which creates the testing source code from the source application code. The generated unit testing code will contain several attributes identifying the Test Class, Test Method, and Test Project. These attributes are assigned when the unit test code is generated from the original source code. Using this code, the developer then supplies the parameter values and assert methods needed to compare the expected results.

The Unit test class is similar to the classes in any other project. A nice feature here is that we can create new test classes by inheriting from a base test class. The base test class contains the common or reusable testing methods. This new Unit testing feature helps us reduce code and reuse existing test classes.

Whenever a code change occurs, it is easy to figure out the fault with the help of unit tests: rerun those tests and check whether the code gives the intended output. This verifies the change the developer has made and confirms that it is not affecting other parts of the application.
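VSTS generates the test code in C# or Visual Basic, but the expected-versus-actual comparison at the heart of every generated test method is language-neutral. As a minimal illustration of the pattern (a plain Java sketch with an invented method under test, not VSTS output):

```java
public class UnitTestSketch {
    // The "code under test": a trivial method with a known contract.
    static int add(int a, int b) {
        return a + b;
    }

    // A generated test calls the method with chosen parameter values
    // and compares the returned value against the expected result.
    static void assertEquals(int expected, int actual) {
        if (expected != actual) {
            throw new AssertionError("expected " + expected + " but was " + actual);
        }
    }

    public static void main(String[] args) {
        assertEquals(5, add(2, 3));   // passes silently
        System.out.println("all tests passed");
    }
}
```

Every unit test, whatever the framework, reduces to this shape: pick inputs, invoke the method, compare the result with the expectation, and fail loudly on a mismatch.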
All the methods and classes generated for automated unit testing rely on types from the Microsoft.VisualStudio.TestTools.UnitTesting namespace.

Manual test

Manual testing is the oldest and simplest type of testing, yet still crucial for software testing. It requires a tester to run all the tests without any automation tool. It helps us validate whether the application meets the various standards defined for effective and efficient accessibility and usage. Manual testing comes into play in the following scenarios:

There is not enough budget for automation.
The tests are more complicated, or are too difficult to convert into automated tests.
The tests are going to be executed only once.
There is not enough time to automate the tests.
Automated tests would be time-consuming to create and run.

Manual tests can be created in either Word document or text format in Visual Studio 2008. A manual test describes the steps the tester should perform; each step should also state the expected result.

Web tests

Web tests are used for testing the functionality of web pages, web applications, web sites, web services, and combinations of all of these. Web tests can be created by recording the interactions performed in the browser. These can be played back to test the web application. Web tests are normally a series of HTTP requests (GET/POST). Web tests can be used for testing application performance as well as for stress testing. During HTTP requests, the web test takes care of testing web page redirects, validations, viewstate information, authentication, and JavaScript execution.

There are different validation rules and extraction rules used in web testing. The validation rules are used for validating form field names, texts, and tags in the requested web page. We can validate the results or values against the expected results as per business needs.
These validation rules are also used for checking the time taken by the HTTP request. At some point, we may need to extract the data returned by the web pages, either for future use or for testing purposes. In this case, we use extraction rules to extract the data returned by the requested page. Through this process, we can extract form fields, texts, or values in the web page and store them in the web test context or collection.

Web tests cannot be performed on a web page alone. We usually need data to be populated from a database or some other source to test the web page's functionality and performance. Web tests provide a data binding mechanism for supplying the data required by the requested page. We can bind data from a database or any other data source. For example, a reporting page might require query string parameters as well as the data to be shown in the page according to the parameters passed. To provide data for this kind of data-driven testing, we bind a data source to the test.

Web tests can be classified into Simple Web tests and Coded Web tests. Both are supported by VSTS. Simple Web tests are very simple to create and execute. A simple web test executes on its own, as recorded; once the test is started, there is no intervention. The disadvantage is that it is not conditional: it is a series of valid events played in order. Coded Web tests are a bit more complex, but provide a lot of flexibility. For example, if we need conditional execution of tests based on some values, we have to depend on coded web tests. These tests are created using either C# or Visual Basic code, and using the generated code we can control the flow of test events. The disadvantage is their higher complexity and maintenance cost.
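Conceptually, an extraction rule is just "find a named value in the response body and stash it in the test context for later requests". The following sketch is not the VSTS API; it is a plain-Java illustration of that idea, with the rule reduced to a regular expression and the test context reduced to a map:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ExtractionRuleSketch {
    // Pull the value of a named form field out of an HTML response
    // and store it in the web test "context" for use by later requests.
    static Map<String, String> extractFormField(String html, String fieldName) {
        Map<String, String> context = new HashMap<>();
        Pattern p = Pattern.compile(
            "name=\"" + Pattern.quote(fieldName) + "\"\\s+value=\"([^\"]*)\"");
        Matcher m = p.matcher(html);
        if (m.find()) {
            context.put(fieldName, m.group(1));
        }
        return context;
    }

    public static void main(String[] args) {
        String response =
            "<input type=\"hidden\" name=\"__VIEWSTATE\" value=\"AbC123==\" />";
        Map<String, String> ctx = extractFormField(response, "__VIEWSTATE");
        System.out.println("extracted: " + ctx.get("__VIEWSTATE"));
    }
}
```

This is exactly why viewstate-carrying pages need extraction rules: the hidden field value captured from one response must be replayed in the next POST for the recorded test to stay valid.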
Load test

Load testing is a method used alongside other types of testing, and the important thing about it is that it is all about performance. It can be performed along with either Web tests or Unit tests. The main purpose of load testing is to identify the performance of an application under different scenarios. Often, we can predict the performance of an application we develop if it runs on a single machine or desktop. But in the case of web applications, such as online ordering systems, we may know the estimated maximum number of users, but not the connection speeds or the locations from which they will access the web site. For such scenarios, the web application should support all the end users with good performance, irrespective of the system they use, their Internet connection, their location, and the tool they use to access the web site. So before we release the web site to the customers or end users, we should check the performance of the application so that it can support the whole user base. This is where load testing is very useful, testing the application along with a Web test or Unit test.

When a Web test is added to a Load test, it simulates multiple users opening simultaneous connections to the same web application and making multiple HTTP requests. Load testing in Visual Studio comes with many properties that can be set to test the web application with different browsers, different user profiles, light loads, and heavy loads. The results of different tests can be saved in a repository, so that result sets can be compared and performance improved.

In the case of client-server and multi-tier applications, many components will reside on the server and serve the client requests. To measure the performance of these components, we have to use a Load test with a set of Unit tests.
One good example would be to test a data access service component that calls a stored procedure in the backend database and returns the results to the application using the service. Load tests can be run either from the local machine or by submitting them to a rig, which is a group of computers used for simulating the tests remotely. A rig consists of a single controller and one or more agents.

Load tests can be used in different testing scenarios:

Stress testing: This checks the functionality of the application under heavy load. The resources provided to the application could vary based on the input file size or the size of the data set, for example, uploading a file that is more than 50MB in size.

Smoke testing: This checks whether the application performs well for a short duration with a light load.

Performance testing: This checks the responsiveness and throughput of the application under different loads.

Capacity Planning test: This checks the application's performance at various capacities.

Ordered test

As we know, different types of testing are required to build quality software, and we take care of running all these tests for the applications we develop. But there is also an order in which to execute the different tests. For example, we do the unit testing first, then the integration test, then the smoke test, and then we go for the functional test. We can order the execution of these tests using Visual Studio. Another example would be to test the configuration for the application before actually testing its functionality. If we don't order the tests, we will never know whether the end result is correct or not. Sometimes the tests will not go through successfully if they are not run in order. Ordering of tests is done using the Test View window in Visual Studio.
We can list all the available tests in the Test View, choose the tests in the required order using the different options provided by Visual Studio, and then run the tests. Visual Studio will take care of running the tests in the same order we have chosen in the list. Once we are able to run the tests successfully in order, we can also expect the same ordering in the results. Visual Studio provides the results of all the tests in a single row in the Test Results window. This single row actually contains the results of all the tests run in that order; we can double-click the row to get the details of each test run in the ordered test. An ordered test is the best way of controlling tests and running them in a specific order.

Generic test

We have seen different types and ways of testing applications using VSTS. There are situations where we may have other applications for testing that were not developed using Visual Studio. We might have only the executables or binaries for those applications, and no supporting testing tool. This is where the generic testing method comes in: it is a way of testing third-party applications using Visual Studio. Generic tests are used to wrap existing tests, and once the wrapping is done, a generic test behaves like any other test in VSTS. Using Visual Studio, we can collect the test results, and gather code coverage data too. We can manage and run generic tests in Visual Studio just like the other tests.
Consuming the Adapter from outside BizTalk Server

Packt
22 Oct 2009
3 min read
In addition to infrastructure-related updates such as the aforementioned platform modernization, Windows Server 2008 Hyper-V virtualization support, and additional options for failover clustering, BizTalk Server also includes new core functionality. You will find better EDI and AS2 capabilities for B2B situations and a new platform for mobile development of RFID solutions. One of the benefits of the new WCF SQL Server Adapter that I mentioned earlier was the capability to use this adapter outside of a BizTalk Server solution. Let's take a brief look at three options for using this adapter on its own, without BizTalk acting as the client or service.

Called directly via WCF service reference

If your service resides on a machine where the WCF SQL Server Adapter (and thus the sqlBinding) is installed, then you may add a reference directly to the adapter endpoint. I have a command-line application that serves as my service client. If we right-click this application, and have the WCF LOB Adapter SDK installed, then Add Adapter Service Reference appears as an option. Choosing this option opens our now-beloved wizard for browsing adapter metadata. As before, we add the necessary connection string details, browse to the BatchMaster table, and opt for the Select operation. Unlike the version of this wizard that opens for BizTalk Server projects, notice the Advanced options button at the bottom. This button opens a property window that lets us select a variety of options, such as asynchronous messaging support and suppression of the accompanying configuration file. After the wizard is closed, we end up with a new endpoint and binding in our existing configuration file, and a .NET class containing the data and service contracts necessary to consume the service. We can now call this service as if we were calling any typical WCF service. Because the auto-generated namespace for the data type definition is a bit long, I first added an alias to that namespace.
Next, I have a routine which builds up the query message, executes the service, and prints a subset of the response.

using DirectReference = schemas.microsoft.com.Sql._2008._05.Types.Tables.dbo;
…

private static void CallReferencedSqlAdapterService()
{
    Console.WriteLine("Calling referenced adapter service");

    TableOp_dbo_BatchMasterClient client = new TableOp_dbo_BatchMasterClient(
        "SqlAdapterBinding_TableOp_dbo_BatchMaster");

    try
    {
        string columnString = "*";
        string queryString = "WHERE BatchID = 1";

        DirectReference.BatchMaster[] batchResult =
            client.Select(columnString, queryString);

        Console.WriteLine("Batch results ...");
        Console.WriteLine("Batch ID: " + batchResult[0].BatchID.ToString());
        Console.WriteLine("Product: " + batchResult[0].ProductName);
        Console.WriteLine("Manufacturing Stage: " + batchResult[0].ManufStage);

        client.Close();
        Console.ReadLine();
    }
    catch (System.ServiceModel.CommunicationException)
    {
        client.Abort();
    }
    catch (System.TimeoutException)
    {
        client.Abort();
    }
    catch (System.Exception)
    {
        client.Abort();
        throw;
    }
}

Once this quick block of code is executed, I can confirm that my database is accessed and the expected result set returned.
Binding Web Services in ESB—Web Services Gateway

Packt
21 Oct 2009
6 min read
Web Services

Web services separate the service contract from the service implementation. This separation is one of the many characteristics required for an SOA-based architecture. Thus, even though it is not mandatory to use web services to implement an SOA-based architecture, they are clearly a great enabler for SOA. Web services are hardware, platform, and technology neutral. The producers and/or consumers can be swapped without notifying the other party, yet the information can flow seamlessly. An ESB can play a vital role in providing this separation.

Binding Web Services

A web service's contract is specified by its WSDL, which gives the endpoint details needed to access the service. When we bind the web service again at an ESB, the result is a different endpoint, which we can advertise to the consumer. When we do so, it is critical that we don't lose any information from the original web service contract.

Why Another Indirection?

There can be multiple reasons why we require another level of indirection between the consumer and the provider of a web service, achieved by binding at an ESB. Systems exist today to support business operations as defined by the business processes. If a system doesn't support a business process of an enterprise, that system is of little use. Business processes are never static; if they remained static, there would be no growth or innovation, and the business would be doomed to fail. Hence, systems or services should facilitate agile business processes. Good architecture and design practices will help to build "services to last", but that doesn't mean our business processes should be static. Instead, business processes will evolve by leveraging the existing services. Thus, we need a process workbench to assemble and orchestrate services, with which we can "mix and match" the services. An ESB is one of the architectural topologies where we can do this mix and match of services. To do this, we first bind the existing (and long-lasting) services to the ESB.
We then leverage ESB services, such as aggregation and translation, to mix and match them and advertise new processes for the business to use. Moreover, there are cross-service concerns, such as versioning, management, and monitoring, which we need to take care of to implement SOA at higher levels of maturity. The ESB is again one way to address these aspects of service orientation.
HTTP
HTTP is the World Wide Web (WWW) protocol for information exchange. HTTP is based on character-oriented streams and is firewall-friendly. Hence, we can also exchange XML streams (which are XML-encoded character streams) over HTTP. In a web service we exchange XML in the SOAP (Simple Object Access Protocol) format over HTTP, so the HTTP headers exchanged will differ slightly from those of a normal web page interaction. Sample web service request headers are shown as follows, first a GET for the WSDL and then a POST carrying the SOAP request:

GET /AxisEndToEnd/services/HelloWebService?WSDL HTTP/1.1
User-Agent: Java/1.6.0-rc
Host: localhost:8080
Accept: text/html, image/gif, image/jpeg, *; q=.2, */*; q=.2
Connection: keep-alive

POST /AxisEndToEnd/services/HelloWebService HTTP/1.0
Content-Type: text/xml; charset=utf-8
Accept: application/soap+xml, application/dime, multipart/related, text/*
User-Agent: Axis/1.4
Host: localhost:8080
Cache-Control: no-cache
Pragma: no-cache
SOAPAction: ""
Content-Length: 507

The first line of each request contains a method, a URI, and an HTTP version, each separated by one or more blank spaces. The succeeding lines carry more information about the web service exchange. ESB-based integration heavily leverages the HTTP protocol due to its open nature, maturity, and acceptability. We will now look at the support provided by ServiceMix for using HTTP.
ServiceMix's servicemix-http
Binding external web services at the ESB layer can be done in multiple ways, but the best way is to leverage JBI components such as the servicemix-http component within ServiceMix. We will look in detail at how to bind web services onto the JBI bus.
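To make the header sample above concrete, here is a small Java sketch that assembles such a SOAP 1.1 POST request as a string. The endpoint path mirrors the sample; the `sayHello` operation and its payload are hypothetical, and a real client would of course send the bytes over a socket rather than just build the string.

```java
// Sketch: assembling a SOAP request that would produce headers similar to
// the POST shown above. Operation name and payload are hypothetical.
public class SoapRequestSketch {

    // Build a minimal SOAP 1.1 envelope around a hypothetical operation.
    static String buildEnvelope(String name) {
        return "<?xml version=\"1.0\" encoding=\"utf-8\"?>"
            + "<soapenv:Envelope xmlns:soapenv="
            + "\"http://schemas.xmlsoap.org/soap/envelope/\">"
            + "<soapenv:Body>"
            + "<sayHello><name>" + name + "</name></sayHello>"
            + "</soapenv:Body>"
            + "</soapenv:Envelope>";
    }

    // Build the raw HTTP request: method, URI, version, then headers,
    // a blank line, and finally the SOAP body.
    static String buildRequest(String host, String path, String envelope) {
        return "POST " + path + " HTTP/1.0\r\n"
            + "Content-Type: text/xml; charset=utf-8\r\n"
            + "SOAPAction: \"\"\r\n"
            + "Host: " + host + "\r\n"
            + "Content-Length: " + envelope.length() + "\r\n"
            + "\r\n"
            + envelope;
    }

    public static void main(String[] args) {
        String env = buildEnvelope("world");
        System.out.println(buildRequest("localhost:8080",
                "/AxisEndToEnd/services/HelloWebService", env));
    }
}
```

Note that the SOAPAction header, although often empty, is mandatory in SOAP 1.1 over HTTP, which is one of the small ways a web service request differs from an ordinary browser request.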
servicemix-http in Detail
servicemix-http is used for HTTP or SOAP binding of services and components into the ServiceMix NMR. For this, ServiceMix uses an embedded HTTP server based on Jetty. The following are the two older ServiceMix components:

org.apache.servicemix.components.http.HttpInvoker
org.apache.servicemix.components.http.HttpConnector

As of today, these components are deprecated and their functionality is replaced by the servicemix-http standard JBI component. A few of the features of servicemix-http are as follows:

Supports SOAP 1.1 and 1.2
Supports MIME with attachments
Supports SSL
Supports WS-Addressing and WS-Security
Supports WSDL-based and XBean-based deployments
Supports all MEPs, as consumer or provider

Since servicemix-http can function both as a consumer and as a provider, it can effectively replace the earlier HttpInvoker and HttpConnector components.
Consumer and Provider Roles
When we speak of the consumer and provider roles for ServiceMix components, the difference is subtle at first sight but very important from a programmer's perspective. The following figure shows the consumer and provider roles in the ServiceMix ESB:
The figure shows two instances of servicemix-http deployed in the ServiceMix ESB, one in the provider role and the other in the consumer role. As is evident, these roles are defined with respect to the NMR of the ESB. In other words, a consumer role implies that the component is a consumer to the NMR, whereas a provider role implies that the NMR is the consumer of the component. Based on these roles, the NMR takes responsibility for any format or protocol conversions between the interacting components. Let us also introduce two more parties here to make the roles of consumer and provider clear: a client and a service. In a traditional programming paradigm, the client interacts directly with the server (or service) to avail itself of the functionality.
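As an illustration of XBean-based deployment, the fragment below sketches how two servicemix-http endpoints, one in each role, might be declared in an xbean.xml. The service names, ports, and URIs are hypothetical, and the exact attribute set may differ between ServiceMix versions, so treat this as a shape to adapt rather than a drop-in configuration.

```xml
<?xml version="1.0"?>
<beans xmlns:http="http://servicemix.apache.org/http/1.0"
       xmlns:test="http://example.org/test">

  <!-- Consumer role: exposes an HTTP/SOAP entry point into the NMR.
       External clients POST SOAP requests to the locationURI. -->
  <http:endpoint service="test:HelloService"
                 endpoint="consumerEndpoint"
                 role="consumer"
                 locationURI="http://localhost:8192/hello/"
                 defaultMep="http://www.w3.org/2004/08/wsdl/in-out"
                 soap="true"/>

  <!-- Provider role: used by the NMR to invoke an external
       web service on behalf of other JBI components. -->
  <http:endpoint service="test:HelloService"
                 endpoint="providerEndpoint"
                 role="provider"
                 locationURI="http://localhost:8080/AxisEndToEnd/services/HelloWebService"
                 soap="true"/>
</beans>
```

Both endpoints name the same JBI service here only for brevity; in practice the consumer and provider typically front different services on the bus.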
In the ESB model, both the client and the service interact with each other only through the ESB. Hence, the client and the service need peers with their respective roles assigned, and these peers in turn interact with each other. Thus, the ESB consumer and provider roles can be regarded as the peer roles for the client and the service respectively. Any client request is delegated to the consumer peer, which in turn interacts with the NMR, because the client is unaware of the ESB and of the NMR protocol or format. The servicemix-http consumer, however, knows how to interact with the NMR, so any request from the client is translated by the servicemix-http consumer and delivered to the NMR. On the service side, the NMR needs to invoke the service, but the service is neutral of any specific vendor's NMR and doesn't understand the NMR language as such. A peer in the provider role helps here: the provider receives the request from the NMR, translates it into the actual format or protocol of the service, and invokes the service. Any response follows the reverse sequence.
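The delegation just described can be caricatured in plain Java. This is only a conceptual sketch, not the real JBI API: the consumer peer wraps the client request into a bus-internal ("normalized") form, and the provider peer unwraps it before invoking the service; responses would traverse the same pair of peers in reverse.

```java
// Conceptual sketch of the peer roles (not the actual JBI API).
public class RoleSketch {

    // Consumer peer: translates a client request into the
    // bus-internal form the NMR understands.
    static String normalize(String clientRequest) {
        return "<nmr>" + clientRequest + "</nmr>";
    }

    // Provider peer: translates the NMR message back into the
    // service's own format before invoking the service.
    static String denormalize(String nmrMessage) {
        return nmrMessage.replace("<nmr>", "").replace("</nmr>", "");
    }

    public static void main(String[] args) {
        String onTheBus = normalize("sayHello");   // client -> consumer -> NMR
        String toService = denormalize(onTheBus);  // NMR -> provider -> service
        System.out.println(toService);             // prints sayHello
    }
}
```

The point of the sketch is that neither the client nor the service ever sees the bus-internal form; all translation is the responsibility of the two peers, which is exactly what lets the NMR mediate between otherwise incompatible parties.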