
How-To Tutorials - Programming

1081 Articles

Understanding Business Activity Monitoring in Oracle SOA Suite

Packt
28 Oct 2009
14 min read
How BAM differs from traditional business intelligence

The Oracle SOA Suite stores the state of all processes in a database in documented schemas, so why do we need yet another reporting tool to provide insight into our processes and services? In other words, how does BAM differ from traditional BI (Business Intelligence)? In traditional BI, reports are generated and delivered either on a scheduled basis or in response to a user request. Any changes to the information will not be reflected until the next scheduled run or until a user requests the report to be rerun. BAM is an event-driven reporting tool that generates alerts and reports in real time, based on a continuously changing data stream, some of whose data may not be in the database. As events occur in the services and processes the business has defined, they are captured by BAM, and reports and views are updated in real time. Where necessary, these updated reports are delivered to users. This delivery to users can take several forms. The best known is the dashboard on users' desktops that will automatically update without any need for the user to refresh the screen. There are also other means to deliver reports to the end user, including sending them via a text message or an email. Traditional reporting tools such as Oracle Reports and Oracle Discoverer, as well as Oracle's latest Business Intelligence Suite, can be used to meet some real-time reporting needs, but they do not provide the event-driven reporting that gives the business a continuously updating view of the current business situation.

Event Driven Architecture

Event Driven Architecture (EDA) is about building business solutions around responsiveness to events. Events may be simple triggers, such as a stock-out event, or they may be more complex triggers, such as the calculations to realize that a stock-out will occur in three days. An Event Driven Architecture will often take a number of simple events and then combine them through a complex event processing sequence to generate complex events that could not have been raised without aggregation of several simpler events.

Oracle BAM scenarios

Oracle Business Activity Monitoring is typically used to monitor two distinct types of real-time data. First, it may be used to monitor the overall state of processes in the business. For example, it may be used to track how many auctions are currently running, how many have bids on them, and how many have completed in the last 24 hours (or other time periods). Second, it may be used to track Key Performance Indicators, or KPIs, in real time. For example, it may be used to provide a real-time updating dashboard to a seller to show the current total value of all that seller's auctions and to track this against an expected target. In the first case, we are interested in how business processes are progressing and are using BAM to identify bottlenecks and failure points within those processes. Bottlenecks can be identified by too much time being spent on given steps in the process. BAM allows us to compute the time taken between two points in a process, such as the time between order placement and shipping, and provide real-time feedback on those times. Similarly, BAM can be used to track the percentage drop-out rate between steps in a sales process, allowing the business to take appropriate action. In the second case, our interest is in some aggregate number, such as our total liabilities should we win all the auctions we are bidding on.
This requires us to aggregate results from many events, possibly performing some kind of calculation on them, to provide us with a single KPI that gives the business an indication of how things are going. BAM allows us to continuously update this number in real time on a dashboard without the need for continued polling. It also allows us to trigger alerts, perhaps through email or SMS, to notify an individual when a threshold is breached. In both cases, the reports delivered can be customized based on the individual receiving the report.

BAM architecture

It may seem odd to have a section on architecture in the middle of an article about how to effectively use BAM, but key to successful utilization of BAM is an understanding of how the different tiers relate to each other.

Logical view

The following diagram represents a logical view of how BAM operates. Events are acquired from one or more sources through event acquisition and then normalized, correlated, and stored in event storage (generally a memory area in BAM that is backed up to disk). The report cache generates reports based on events in storage and then delivers those reports, together with real-time updates, through the report delivery layer. Event processing is also performed on events in storage, and when defined conditions are met, alerts are delivered through the alert delivery service.

Physical view

To better understand the physical view of the BAM architecture, we have divided this section into four parts. Let us discuss these in detail.

Capture

This logical view maps onto the physical BAM components shown in the following diagram. Data acquisition in the SOA Suite is handled by sensors in BPEL and ESB. BAM can also receive events from JMS message queues and access data in databases (useful for historical comparison). For complex data formats or other data sources, Oracle recommends Oracle Data Integrator (ODI, a separate product from the SOA Suite). Although potentially less efficient and more work than running ODI, it is also possible to use adapters to acquire data from multiple sources and feed it into BAM through ESB or BPEL. At the data capture level, we need to think about the data items that we can provide to feed the reports and alerts we want to generate. We must consider the sources of that data and the best way to load it into BAM.

Store

Once the data is captured, it is stored in a normalized form in the Active Data Cache (ADC). This storage facility has the ability to do simple correlation based on fields within the data, and multiple data items received from the acquisition layer may update just a single object in the data cache. For example, the state of a given BPEL process instance may be represented by a single object in the ADC, and all updates to that process state will update that single data item rather than creating multiple data items.

Process

Reports are run based on user demand. Once a report is run, it will update the user's screen in real time. Where multiple users are accessing the same report, only one instance of the report is maintained by the report server. As events are captured and stored in real time, the report engine continuously monitors them for any changes that need to be made to currently active reports. When changes are detected that impact active reports, the appropriate report is updated in memory and the updates are sent to the user's screen.
In addition to the event processing required to correctly insert and update items in the ADC, there is also a requirement to monitor items in the ADC for events that require some sort of action to be taken. This is the job of the event processor. It monitors data in the ADC to see if registered thresholds on values have been exceeded or if certain time-outs have expired. The event processor will often need to perform calculations across multiple data items to do this.

Deliver

Delivery of reports takes place in two ways. First, users request reports to be delivered to their desktop by selecting views within BAM. These reports are delivered as HTML pages within a browser and are updated whenever the underlying data used in the report changes. The second approach is that reports are sent out as a result of events being triggered by the Event Processing Engine. In the latter case, the report may be delivered by email, SMS, or voice messaging using the notifications service. A final option available for these event-generated reports is to invoke a web service to take some sort of automated action.

Closing the loop

While monitoring what is happening is all very laudable, it is only of benefit if we actually do something about what we are monitoring. BAM provides the real-time monitoring ability very well, but it also provides the facility to invoke other services to respond to undesirable events such as stock-outs. The ability to invoke external services is crucial to the concept of a closed-loop control environment where, as a result of monitoring, we are able to reach back into the processes and either alter their execution or start new ones. For example, when a stock-out or low-stock event is raised, the message centre could invoke a web service requesting a supplier to send more stock to replenish inventory. Placing this kind of feedback mechanism in BAM allows us to trigger events across multiple applications and locations in a way that may not be possible within a single application or process. For example, in response to a stock-out, instead of requesting our supplier to provide more stock, we may be monitoring stock levels in independent systems and, based on stock levels elsewhere, may redirect stock from one location to another.

BAM platform anomaly

In the 10g SOA Suite, BAM runs only as a Windows application. Unlike the rest of the SOA Suite, it does not run on a JEE application server, and it can only run on the Windows platform. In the next release, 11g, BAM will be provided as a JEE application that can run on a number of application servers and operating systems.

User interface

Development in Oracle BAM is done through a web-based user interface. This user interface gives access to four different applications that allow you to interact with different parts of BAM. These are:

Active Viewer, for giving access to reports; this relates to the deliver stage for user-requested reports.
Active Studio, for building reports; this relates to the process stage for creating reports.
Architect, for setting up both inbound and outbound events. Data elements are defined here as data sources, and alerts are also configured here. This covers the acquire and store stages, as well as the deliver stage for alerts.
Administrator, for managing users and roles as well as defining the types of message sources.

We will not examine the applications individually but will take a task-focused look at how to use them as part of providing some specific reports.
Monitoring process state

Now that we have examined how BAM is constructed, let us use this knowledge to construct some simple dashboards that track the state of a business process. We will instrument a simple version of an auction process. The process is shown in the following figure: an auction is started, and then bids are placed until the time runs out, at which point the auction is completed. This is modelled in BPEL. The process has three distinct states:

Started
Bid received
Completed

We are interested in the number of auctions in each state as well as the total value of auctions in progress. To build the dashboard, we need to follow these steps:

Define our data within the Active Data Cache
Create sensors in BPEL and map them to data in the ADC
Create suitable reports
Run the reports

Defining data objects

Data in BAM is stored in data objects. Individual data objects contain the information that is reported in BAM dashboards and may be updated by multiple events. Generally BAM will report against aggregations of objects, but reports can also drill down into individual data objects. Before defining our data objects, let's group them into an Auction folder so they are easy to find. To do this we use the BAM Architect application and select Data Objects, which gives us the following screen: we select Create subfolder to create the folder and give it the name Auction. We then select Create folder to actually create the folder, and we get a confirmation message telling us that the folder was created. Notice that once created, the folder also appears in the Folders window on the left-hand side of the screen. Now that we have our folder, we can create a data object. Again we select Data Objects from the drop-down menu. To define the data objects that are to be stored in our Active Data Cache, we open the Auction folder if it is not already open and select Create Data Object. If we don't select the Auction folder, we can pick it later when filling in the details of the data object. We need to give our object a unique name within the folder and can optionally provide it with tip text that helps explain what the object does when the mouse is moved over it in object listings. Having named our object, we can now create the data fields by selecting Add a field. When adding fields we need to provide a name and type, as well as indicating whether they must contain data; the default, Nullable, does not require a field to be populated. We may also optionally indicate whether a field should be public ("available for display") and what, if any, tool tip text it should have. Once all the data fields have been defined, we can click Create Data Object to actually create the object as we have defined it. We are then presented with a confirmation screen telling us that the object has been created.

Grouping data into hierarchies

When creating a data object, it is possible to specify Dimensions for the object. A dimension is based on one or more fields within the object. A given field can only participate in one dimension. This gives the ability to group the object by the fields in the given dimension. If multiple fields are selected for a single dimension, they can be layered into a hierarchy, for example to allow analysis by country, region, and city. In this case all three elements would be selected into a single dimension, perhaps called geography. Within geography, a hierarchy could be set up with country at the top, region next, and finally city at the bottom, allowing drill-down to occur in views.
Just as a data object can have multiple dimensions, a dimension can also have multiple hierarchies.

A digression on populating data object fields

In the previous discussion, we mentioned the Nullable attribute that can be attached to fields. This is very important, as we do not expect to populate all or even most of the fields in a data object at a single moment in time. Do not confuse data objects with the low-level events that are used to populate them. Data objects in BAM do not have a one-to-one correspondence with the low-level events that populate them. In our auction example there will be just one auction object for every auction. However, there will be at least two and usually more messages for every auction: one message for the auction starting, another for the auction completing, and additional messages for each bid received. These messages will all populate, or in some cases overwrite, different parts of the auction data object. The data object has the fields Auction ID, State, Highest bid, Reserve, Expires, Seller, and Highest bidder, and the three messages populate different parts of it as follows:

Auction Started: inserts Auction ID, State, Highest bid, Reserve, Expires, and Seller.
Bid Received: updates State, Highest bid, and Highest bidder.
Auction Finished: updates State.

Summary

In this article we have explored how Business Activity Monitoring differs from, and is complementary to, more traditional Business Intelligence solutions such as Oracle Reports and Business Objects. We have explored how BAM can allow the business to monitor the state of business targets and Key Performance Indicators, such as the current most popular products in a retail environment or the current time taken to serve customers in a service environment.

Python Data Persistence using MySQL Part II: Moving Data Processing to the Data

Packt
27 Oct 2009
8 min read
To move data processing to the data, you can use stored procedures, stored functions, and triggers. All these components are implemented inside the underlying database and can significantly improve the performance of your application by reducing the network overhead associated with multiple calls to the database. It is important to realize, though, that the decision to move any piece of processing logic into the database should be taken with care. In some situations, this may simply be inefficient. For example, if you decide to move some logic dealing with the data stored in a custom Python list into the database, while still keeping that list implemented in your Python code, the change can be counterproductive, since it only increases the number of calls to the underlying database, thus causing significant network overhead. To fix this situation, you could move the list from Python into the database as well, implementing it as a table. Starting with version 5.0, MySQL supports stored procedures, stored functions, and triggers, making it possible for you to enjoy programming on the underlying database side. In this article, you will look at triggers in action. Stored procedures and functions can be used similarly.

Planning Changes for the Sample Application

Assuming you have followed the instructions in Python Data Persistence using MySQL, you should already have the application structure to be reorganized here. To recap, what you should already have on the Python side is:

tags: a nested list of tags used to describe the posts obtained from the Packt Book Feed page.
obtainPost function: obtains the information about the most recent post on the Packt Book Feed page.
determineTags function: determines the tags appropriate to the latest post obtained from the Packt Book Feed page.
insertPost function: inserts the information about the obtained post into the underlying database tables: posts and posttags.
execPr function: brings together the functionality of the functions described above.

On the database side, you should have the following components:

posts table: contains records representing posts obtained from the Packt Book Feed page.
posttags table: contains records, each of which represents a tag associated with a certain post stored in the posts table.

Let's figure out how we can refactor the above structure, moving some data processing inside the database. The first thing you might want to do is to move the tags list from Python into the database, creating a new table tags for that. Then, you can move the logic implemented with the determineTags function inside the database, defining an AFTER INSERT trigger on the posts table. From within this trigger, you will also insert rows into the posttags table, thus eliminating the need to do it from within the insertPost function. Once you've done all that, you can refactor the Python code implemented in the appsample module. To summarize, here are the steps you need to perform in order to refactor the sample application discussed in the earlier article:

1. Create the tags table and populate it with the data currently stored in the tags list implemented in Python.
2. Define the AFTER INSERT trigger on the posts table.
3. Refactor the insertPost function in the appsample.py module.
4. Remove the tags list from the appsample.py module.
5. Remove the determineTags function from the appsample.py module.
6. Refactor the execPr function in the appsample.py module.
Refactoring the Underlying Database

To keep things simple, the tags table might contain a single column, tag, with a primary key constraint defined on it. So, you can create the tags table as follows:

    CREATE TABLE tags (
      tag VARCHAR(20) PRIMARY KEY
    ) ENGINE = InnoDB;

Then, you might want to modify the posttags table, adding a foreign key constraint to its tag column. Before you can do that, though, you will need to delete all the rows from this table. This can be done with the following query:

    DELETE FROM posttags;

Now you can move on and alter posttags as follows:

    ALTER TABLE posttags ADD FOREIGN KEY (tag) REFERENCES tags(tag);

The next step is to populate the tags table. You can automate this process with the help of the following Python script:

    >>> import MySQLdb
    >>> import appsample
    >>> db=MySQLdb.connect(host="localhost",user="usrsample",passwd="pswd",db="dbsample")
    >>> c=db.cursor()
    >>> c.executemany("""INSERT INTO tags VALUES(%s)""", appsample.tags)
    >>> db.commit()
    >>> db.close()

As a result, you should have the tags table populated with the data taken from the tags list discussed in Python Data Persistence using MySQL. To make sure of that, you can turn back to the mysql prompt and issue the following query against the tags table:

    SELECT * FROM tags;

The above should output the list of tags you have in the tags list. Of course, you can always extend this list, adding new tags with the INSERT statement. For example, you could issue the following statement to add the Visual Studio tag:

    INSERT INTO tags VALUES('Visual Studio');

Now you can move on and define the AFTER INSERT trigger on the posts table:

    delimiter //
    CREATE TRIGGER insertPost
    AFTER INSERT ON posts
    FOR EACH ROW
    BEGIN
      INSERT INTO posttags(title, tag)
        SELECT NEW.title AS title, tag FROM tags
        WHERE LOCATE(tag, NEW.title) > 0;
    END
    //
    delimiter ;

As you can see, the posttags table will be automatically populated with the appropriate tags just after a new row is inserted into the posts table. Notice the use of the INSERT … SELECT statement in the body of the trigger. Using this syntax lets you insert several rows into the posttags table at once, without having to use an explicit loop. In the WHERE clause of the SELECT, you use the standard MySQL string function LOCATE, which returns the position of the first occurrence of the substring, passed in as the first argument, in the string, passed in as the second argument. In this particular example, though, you are not really interested in obtaining the position of an occurrence of the substring in the string. All you need to find out here is whether the substring appears in the string or not. If it does, the tag should appear in the posttags table as a separate row associated with the row just inserted into the posts table.

Refactoring the Sample's Python Code

Now that you have moved some data and data processing from Python into the underlying database, it's time to reorganize the appsample custom Python module created as discussed in Python Data Persistence using MySQL. As mentioned earlier, you need to rewrite the insertPost and execPr functions and remove the determineTags function and the tags list.
This is what the appsample module should look like after the revision:

    import MySQLdb
    import urllib2
    import xml.dom.minidom

    def obtainPost():
        addr = "http://feeds.feedburner.com/packtpub/sDsa?format=xml"
        xmldoc = xml.dom.minidom.parseString(urllib2.urlopen(addr).read())
        item = xmldoc.getElementsByTagName("item")[0]
        title = item.getElementsByTagName("title")[0].firstChild.data
        guid = item.getElementsByTagName("guid")[0].firstChild.data
        pubDate = item.getElementsByTagName("pubDate")[0].firstChild.data
        post = {"title": title, "guid": guid, "pubDate": pubDate}
        return post

    def insertPost(title, guid, pubDate):
        db = MySQLdb.connect(host="localhost", user="usrsample", passwd="pswd", db="dbsample")
        c = db.cursor()
        c.execute("""INSERT INTO posts (title, guid, pubDate) VALUES(%s,%s,%s)""",
                  (title, guid, pubDate))
        db.commit()
        db.close()

    def execPr():
        p = obtainPost()
        insertPost(p["title"], p["guid"], p["pubDate"])

If you compare it with the appsample module discussed in Part 1, you should notice that the revised version is much shorter. It's important to note, however, that nothing has changed from the user's standpoint. So, if you now start the execPr function in your Python session:

    >>> import appsample
    >>> appsample.execPr()

this should insert a new record into the posts table, automatically inserting the corresponding tag records into the posttags table, if any. The difference lies in what goes on behind the scenes. Now the Python code is responsible only for obtaining the latest post from the Packt Book Feed page and then inserting a record into the posts table. Dealing with tags is now the responsibility of the logic implemented inside the database. In particular, the AFTER INSERT trigger defined on the posts table takes care of inserting the rows into the posttags table. To make sure that everything has worked smoothly, you can now check out the content of the posts and posttags tables. To look at the latest post stored in the posts table, you could issue the following query:

    SELECT title, str_to_date(pubDate,'%a, %e %b %Y') lastdate
    FROM posts
    ORDER BY lastdate DESC LIMIT 1;

Then, you might want to look at the related tags stored in the posttags table, by issuing the following query:

    SELECT p.title, t.tag, str_to_date(p.pubDate,'%a, %e %b %Y') lastdate
    FROM posts p, posttags t
    WHERE p.title = t.title
    ORDER BY lastdate DESC LIMIT 1;

Conclusion

In this article, you looked at how some business logic of a Python/MySQL application can be moved from Python into MySQL. For that, you continued with the sample application originally discussed in Python Data Persistence using MySQL.

Developing Web Applications using JavaServer Faces: Part 1

Packt
27 Oct 2009
6 min read
Although a lot of applications have been written using these APIs, most modern Java applications are written using some kind of web application framework. As of Java EE 5, the standard framework for building web applications is JavaServer Faces (JSF).

Introduction to JavaServer Faces

Before JSF was developed, Java web applications were typically developed using non-standard web application frameworks such as Apache Struts, Tapestry, Spring Web MVC, or many others. These frameworks are built on top of the Servlet and JSP standards, and automate a lot of functionality that needs to be manually coded when using these APIs directly. Having a wide variety of web application frameworks available (at the time of writing, Wikipedia lists 35 Java web application frameworks, and this list is far from exhaustive!) often resulted in "analysis paralysis"; that is, developers often spent an inordinate amount of time evaluating frameworks for their applications. The introduction of JSF to the Java EE 5 specification resulted in having a standard web application framework available in any Java EE 5 compliant application server. We don't mean to imply that other web application frameworks are obsolete or that they shouldn't be used at all; however, a lot of organizations consider JSF the "safe" choice since it is part of the standard and should be well supported for the foreseeable future. Additionally, NetBeans offers excellent JSF support, making JSF a very attractive choice. Strictly speaking, JSF is not a web application framework as such, but a component framework. In theory, JSF can be used to write applications that are not web-based; however, in practice JSF is almost always used to build web applications. In addition to being the standard Java EE 5 component framework, one benefit of JSF is that it was designed with graphical tools in mind, making it easy for tools and IDEs such as NetBeans to take advantage of the JSF component model with drag-and-drop support for components. NetBeans provides a Visual Web JSF Designer that allows us to visually create JSF applications.

Developing Our First JSF Application

From an application developer's point of view, a JSF application consists of a series of JSP pages containing custom JSF tags, one or more JSF managed beans, and a configuration file named faces-config.xml. The faces-config.xml file declares the managed beans in the application, as well as the navigation rules to follow when navigating from one JSF page to another.

Creating a New JSF Project

To create a new JSF project, we need to go to File | New Project, select the Java Web project category, and Web Application as the project type. After clicking Next, we need to enter a Project Name, and optionally change other information for our project, although NetBeans provides sensible defaults. On the next page in the wizard, we can select the Server, Java EE Version, and Context Path of our application. In our example, we will simply pick the default values. On the next page of the new project wizard, we can select what frameworks our web application will use. Unsurprisingly, for JSF applications we need to select the JavaServer Faces framework. The Visual Web JavaServer Faces framework allows us to quickly build web pages by dragging and dropping components from the NetBeans palette into our pages. Although it certainly allows us to develop applications a lot more quickly than manually coding them, it hides a lot of the "ins" and "outs" of JSF.
Having a background in standard JSF development will help us understand what the NetBeans Visual Web functionality does behind the scenes. When we click Finish, the wizard generates a skeleton JSF project for us, consisting of a single JSP file called welcomeJSF.jsp and a few configuration files: web.xml, faces-config.xml, and, if we are using the default bundled GlassFish server, the GlassFish-specific sun-web.xml file.

web.xml is the standard configuration file needed for all Java web applications.
faces-config.xml is a JSF-specific configuration file used to declare JSF managed beans and navigation rules.
sun-web.xml is a GlassFish-specific configuration file that allows us to override the application's default context root, add security role mappings, and perform several other configuration tasks.

The generated JSP looks like this:

    <%@page contentType="text/html"%>
    <%@page pageEncoding="UTF-8"%>
    <%@taglib prefix="f" uri="http://java.sun.com/jsf/core"%>
    <%@taglib prefix="h" uri="http://java.sun.com/jsf/html"%>
    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
     "http://www.w3.org/TR/html4/loose.dtd">
    <%-- This file is an entry point for JavaServer Faces application. --%>
    <html>
      <head>
        <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
        <title>JSP Page</title>
      </head>
      <body>
        <f:view>
          <h1>
            <h:outputText value="JavaServer Faces"/>
          </h1>
        </f:view>
      </body>
    </html>

As we can see, a JSF-enabled JSP file is a standard JSP file using a couple of JSF-specific tag libraries. The first tag library, declared in our JSP by the following line:

    <%@taglib prefix="f" uri="http://java.sun.com/jsf/core"%>

is the core JSF tag library. This library includes a number of tags that are independent of the rendering mechanism of the JSF application (recall that JSF can be used for applications other than web applications). By convention, the prefix f (for faces) is used for this tag library. The second tag library in the generated JSP, declared by the following line:

    <%@taglib prefix="h" uri="http://java.sun.com/jsf/html"%>

is the JSF HTML tag library. This tag library includes a number of tags that are used to implement HTML-specific functionality, such as creating HTML forms and input fields. By convention, the prefix h (for HTML) is used for this tag library. The first JSF tag we see in the generated JSP file is the <f:view> tag. When writing a Java web application using JSF, all JSF custom tags must be enclosed inside an <f:view> tag. In addition to JSF-specific tags, this tag can contain standard HTML tags, as well as tags from other tag libraries, such as the JSTL tags. The next JSF-specific tag we see in the above JSP is <h:outputText>. This tag simply displays the value of its value attribute in the rendered page. The application generated by the new project wizard is a simple, but complete, JSF web application. We can see it in action by right-clicking on our project in the project window and selecting Run. At this point the application server is started (if it wasn't already running), the application is deployed, and the default system browser opens, displaying our application's welcome page.
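The skeleton project above does not yet contain any managed bean or navigation rule, even though faces-config.xml exists to hold them. Purely as an illustration of those two pieces mentioned earlier, the following minimal sketch shows what a simple managed bean and its faces-config.xml entries might look like; the class name, property, page names, and outcome are illustrative assumptions, not files generated by the wizard.

    // Hypothetical managed bean sketch; names are illustrative, not wizard-generated
    package com.example.jsf;

    public class WelcomeBean {

        private String name; // bound from a page with #{WelcomeBean.name}

        public String getName() {
            return name;
        }

        public void setName(String name) {
            this.name = name;
        }

        // Action method whose return value is matched against navigation rules
        public String submit() {
            return "success";
        }
    }

Such a bean would be declared, together with a navigation rule, inside faces-config.xml, for example:

    <!-- Hypothetical managed bean declaration and navigation rule -->
    <managed-bean>
      <managed-bean-name>WelcomeBean</managed-bean-name>
      <managed-bean-class>com.example.jsf.WelcomeBean</managed-bean-class>
      <managed-bean-scope>request</managed-bean-scope>
    </managed-bean>
    <navigation-rule>
      <from-view-id>/welcomeJSF.jsp</from-view-id>
      <navigation-case>
        <from-outcome>success</from-outcome>
        <to-view-id>/confirmation.jsp</to-view-id>
      </navigation-case>
    </navigation-rule>

With a bean like this in place, an <h:inputText> tag could bind its value attribute to #{WelcomeBean.name}, and a command button's action could reference #{WelcomeBean.submit} so that the "success" outcome drives the navigation rule above.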

Developing Web Applications using JavaServer Faces: Part 2

Packt
27 Oct 2009
5 min read
JSF Validation

Earlier in this article, we discussed how the required attribute for JSF input fields allows us to easily make input fields mandatory. If a user attempts to submit a form with one or more required fields missing, an error message is automatically generated. The error message is generated by the <h:message> tag corresponding to the invalid field. The string First Name in the error message corresponds to the value of the label attribute for the field. Had we omitted the label attribute, the value of the field's id attribute would have been shown instead. As we can see, the required attribute makes it very easy to implement mandatory field functionality in our application. Recall that the age field is bound to a property of type Integer in our managed bean. If a user enters a value that is not a valid integer into this field, a validation error is automatically generated. Of course, a negative age wouldn't make much sense; however, our application validates that user input is a valid integer with essentially no effort on our part. The email address input field of our page is bound to a property of type String in our managed bean. As such, there is no built-in validation to make sure that the user enters a valid email address. In cases like this, we need to write our own custom JSF validators. Custom JSF validators must implement the javax.faces.validator.Validator interface. This interface contains a single method named validate(). This method takes three parameters: an instance of javax.faces.context.FacesContext, an instance of javax.faces.component.UIComponent containing the JSF component we are validating, and an instance of java.lang.Object containing the user-entered value for the component. The following example illustrates a typical custom validator.

    package com.ensode.jsf.validators;

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;
    import javax.faces.application.FacesMessage;
    import javax.faces.component.UIComponent;
    import javax.faces.component.html.HtmlInputText;
    import javax.faces.context.FacesContext;
    import javax.faces.validator.Validator;
    import javax.faces.validator.ValidatorException;

    public class EmailValidator implements Validator {

        public void validate(FacesContext facesContext,
                UIComponent uIComponent, Object value)
                throws ValidatorException {
            Pattern pattern = Pattern.compile("\\w+@\\w+\\.\\w+");
            Matcher matcher = pattern.matcher((CharSequence) value);
            HtmlInputText htmlInputText = (HtmlInputText) uIComponent;
            String label;
            if (htmlInputText.getLabel() == null
                    || htmlInputText.getLabel().trim().equals("")) {
                label = htmlInputText.getId();
            } else {
                label = htmlInputText.getLabel();
            }
            if (!matcher.matches()) {
                FacesMessage facesMessage = new FacesMessage(label
                        + ": not a valid email address");
                throw new ValidatorException(facesMessage);
            }
        }
    }

In our example, the validate() method does a regular expression match against the value of the JSF component we are validating. If the value matches the expression, validation succeeds; otherwise, validation fails and an instance of javax.faces.validator.ValidatorException is thrown. The primary purpose of our custom validator is to illustrate how to write custom JSF validations, and not to create a foolproof email address validator. There may be valid email addresses that don't validate using our validator. The constructor of ValidatorException takes an instance of javax.faces.application.FacesMessage as a parameter. This object is used to display the error message on the page when validation fails.
The message to display is passed as a String to the constructor of FacesMessage. In our example, if the label attribute of the component is neither null nor empty, we use it as part of the error message; otherwise we use the value of the component's id attribute. This behavior follows the pattern established by standard JSF validators. Before we can use our custom validator in our pages, we need to declare it in the application's faces-config.xml configuration file. To do so, we need to add a <validator> element just before the closing </faces-config> element.

    <validator>
      <validator-id>emailValidator</validator-id>
      <validator-class>
        com.ensode.jsf.validators.EmailValidator
      </validator-class>
    </validator>

The body of the <validator-id> sub-element must contain a unique identifier for our validator. The value of the <validator-class> element must contain the fully qualified name of our validator class. Once we add our validator to the application's faces-config.xml, we are ready to use it in our pages. In our particular case, we need to modify the email field to use our custom validator.

    <h:inputText id="email" label="Email Address"
                 required="true" value="#{RegistrationBean.email}">
      <f:validator validatorId="emailValidator"/>
    </h:inputText>

All we need to do is nest an <f:validator> tag inside the input field we wish to have validated using our custom validator. The value of the validatorId attribute of <f:validator> must match the value of the body of the <validator-id> element in faces-config.xml. At this point we are ready to test our custom validator. When we enter an invalid email address into the email address input field and submit the form, our custom validator logic is executed, and the String we passed as a parameter to FacesMessage in our validate() method is shown as the error text by the <h:message> tag for the field.
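The page snippets above bind the email field (and, earlier, the age field) to properties of a managed bean referred to as RegistrationBean, which is not listed in this excerpt. The following is a minimal sketch of what such a bean might look like, trimmed down to the two properties discussed here; treat the package name and class shape as assumptions rather than the book's actual listing.

    package com.ensode.jsf;

    // Minimal backing-bean sketch for the registration page (assumed shape, not the original listing)
    public class RegistrationBean {

        private String email;  // validated by the custom emailValidator
        private Integer age;   // non-integer input fails JSF's built-in conversion

        public String getEmail() {
            return email;
        }

        public void setEmail(String email) {
            this.email = email;
        }

        public Integer getAge() {
            return age;
        }

        public void setAge(Integer age) {
            this.age = age;
        }
    }

It would be declared in faces-config.xml as a managed bean named RegistrationBean so that the #{RegistrationBean.email} expression in the page resolves to it.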

Oracle Web RowSet - Part2

Packt
27 Oct 2009
4 min read
Reading a Row

Next, we will read a row from the OracleWebRowSet object. Click on the Modify Web RowSet link in the CreateRow.jsp. In the ModifyWebRowSet JSP, click on the Read Row link. The ReadRow.jsp JSP is displayed. In the ReadRow JSP, specify the Database Row to Read and click on Apply. The second row values are retrieved from the Web RowSet. In the ReadRow JSP, the readRow() method of the WebRowSetQuery.java application is invoked. The WebRowSetQuery object is retrieved from the session object:

    WebRowSetQuery query = (webrowset.WebRowSetQuery) session.getAttribute("query");

The String[] values returned by the readRow() method are added to the ReadRow JSP fields. In the readRow() method, the OracleWebRowSet object cursor is moved to the row to be read:

    webRowSet.absolute(rowRead);

Retrieve the row values with the getString() method, add them to a String[], and return the String[] object:

    String[] resultSet = new String[5];
    resultSet[0] = webRowSet.getString(1);
    resultSet[1] = webRowSet.getString(2);
    resultSet[2] = webRowSet.getString(3);
    resultSet[3] = webRowSet.getString(4);
    resultSet[4] = webRowSet.getString(5);
    return resultSet;

ReadRow.jsp is listed as follows:

    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
     "http://www.w3.org/TR/html4/loose.dtd">
    <%@ page contentType="text/html;charset=windows-1252"%>
    <%@ page session="true"%>
    <html>
    <head>
    <meta http-equiv="Content-Type" content="text/html;charset=windows-1252">
    <title>Read Row with Web RowSet</title>
    </head>
    <body>
    <form>
    <h3>Read Row with Web RowSet</h3>
    <table>
    <tr><td><a href="ModifyWebRowSet.jsp">Modify Web RowSet Page</a></td></tr>
    </table>
    </form>
    <%
    webrowset.WebRowSetQuery query=null;
    query=( webrowset.WebRowSetQuery)session.getAttribute("query");
    String rowRead=request.getParameter("rowRead");
    String journalUpdate=request.getParameter("journalUpdate");
    String publisherUpdate=request.getParameter("publisherUpdate");
    String editionUpdate=request.getParameter("editionUpdate");
    String titleUpdate=request.getParameter("titleUpdate");
    String authorUpdate=request.getParameter("authorUpdate");
    if((rowRead!=null)){
      int row_Read=Integer.parseInt(rowRead);
      String[] resultSet=query.readRow(row_Read);
      journalUpdate=resultSet[0];
      publisherUpdate=resultSet[1];
      editionUpdate=resultSet[2];
      titleUpdate=resultSet[3];
      authorUpdate=resultSet[4];
    }
    %>
    <form name="query" action="ReadRow.jsp" method="post">
    <table>
    <tr><td>Database Row to Read:</td></tr>
    <tr><td><input name="rowRead" type="text" size="25" maxlength="50"/></td></tr>
    <tr><td>Journal:</td></tr>
    <tr><td><input name="journalUpdate" value='<%=journalUpdate%>' type="text" size="50" maxlength="250"/></td></tr>
    <tr><td>Publisher:</td></tr>
    <tr><td><input name="publisherUpdate" value='<%=publisherUpdate%>' type="text" size="50" maxlength="250"/></td></tr>
    <tr><td>Edition:</td></tr>
    <tr><td><input name="editionUpdate" value='<%=editionUpdate%>' type="text" size="50" maxlength="250"/></td></tr>
    <tr><td>Title:</td></tr>
    <tr><td><input name="titleUpdate" value='<%=titleUpdate%>' type="text" size="50" maxlength="250"/></td></tr>
    <tr><td>Author:</td></tr>
    <tr><td><input name="authorUpdate" value='<%=authorUpdate%>' type="text" size="50" maxlength="250"/></td></tr>
    <tr><td><input class="Submit" type="submit" value="Apply"/></td></tr>
    </table>
    </form>
    </body>
    </html>
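Putting the fragments above together, the readRow() method of WebRowSetQuery.java would look roughly like the following sketch. The webRowSet field name and the exception handling are assumptions based on the snippets shown and on the fact that the JSP scriptlet calls query.readRow() without a try/catch; this is not the book's complete listing.

    // Sketch of WebRowSetQuery.readRow(), assembled from the fragments above.
    // Assumes webRowSet is the OracleWebRowSet instance created elsewhere in WebRowSetQuery.java.
    public String[] readRow(int rowRead) {
        String[] resultSet = new String[5];
        try {
            // Position the cursor on the requested row (1-based index)
            webRowSet.absolute(rowRead);

            // Copy the five column values into a String array for the JSP fields
            resultSet[0] = webRowSet.getString(1); // journal
            resultSet[1] = webRowSet.getString(2); // publisher
            resultSet[2] = webRowSet.getString(3); // edition
            resultSet[3] = webRowSet.getString(4); // title
            resultSet[4] = webRowSet.getString(5); // author
        } catch (java.sql.SQLException e) {
            // Assumed error handling; the original class may report errors differently
            e.printStackTrace();
        }
        return resultSet;
    }

The index passed to absolute() matches the Database Row to Read value entered in the JSP, and the array positions map to the Journal, Publisher, Edition, Title, and Author fields in the order the form displays them.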

Implementing a Basic HelloWorld WCF (Windows Communication Foundation) Service

Packt
27 Oct 2009
7 min read
We will build a HelloWorld WCF service by carrying out the following steps:

1. Create the solution and project
2. Create the WCF service contract interface
3. Implement the WCF service
4. Host the WCF service in the ASP.NET Development Server
5. Create a client application to consume this WCF service

Creating the HelloWorld solution and project

Before we can build the WCF service, we need to create a solution for our service projects. We also need a directory in which to save all the files. Throughout this article, we will save our project source codes in the D:SOAwithWCFandLINQProjects directory. We will have a subfolder for each solution we create, and under this solution folder, we will have one subfolder for each project. For this HelloWorld solution, the final directory structure is shown in the following image: You don't need to manually create these directories via Windows Explorer; Visual Studio will create them automatically when you create the solutions and projects. Now, follow these steps to create our first solution and the HelloWorld project:

1. Start Visual Studio 2008. If the Open Project dialog box pops up, click Cancel to close it.
2. Go to menu File | New | Project. The New Project dialog window will appear.
3. From the left-hand side of the window (Project types), expand Other Project Types and then select Visual Studio Solutions as the project type. From the right-hand side of the window (Templates), select Blank Solution as the template.
4. At the bottom of the window, type HelloWorld as the Name, and D:SOAwithWCFandLINQProjects as the Location. Note that you should not enter HelloWorld within the location, because Visual Studio will automatically create a folder for a new solution.
5. Click the OK button to close this window, and your screen should look like the following image, with an empty solution. Depending on your settings, the layout may be different, but you should still have an empty solution in your Solution Explorer. If you don't see Solution Explorer, go to menu View | Solution Explorer, or press Ctrl+Alt+L to bring it up.
6. In the Solution Explorer, right-click on the solution, and select Add | New Project… from the context menu. You can also go to menu File | Add | New Project… to get the same result. The following image shows the context menu for adding a new project.
7. The Add New Project window should now appear on your screen. In the left-hand side of this window (Project types), select Visual C# as the project type, and on the right-hand side of the window (Templates), select Class Library as the template.
8. At the bottom of the window, type HelloWorldService as the Name. Leave D:SOAwithWCFandLINQProjectsHelloWorld as the Location. Again, don't add HelloWorldService to the location, as Visual Studio will create a subfolder for this new project (Visual Studio will use the solution folder as the default base folder for all the new projects added to the solution).

You may have noticed that there is already a template for WCF Service Application in Visual Studio 2008. For this very first example, we will not use this template. Instead, we will create everything by ourselves so you know what the purpose of each template is. This is an excellent way for you to understand and master this new technology. Now, you can click the OK button to close this window. Once you click the OK button, Visual Studio will create several files for you. The first file is the project file. This is an XML file under the project directory, and it is called HelloWorldService.csproj.
Visual Studio also creates an empty class file, called Class1.cs. Later, we will change this default name to a more meaningful one, and change its namespace to our own one. Three directories are created automatically under the project folder: one to hold the binary files, another to hold the object files, and a third one for the properties files of the project. The window on your screen should now look like the following image: We now have a new solution and project created. Next, we will develop and build this service. But before we go any further, we need to do two things to this project:

1. Click the Show All Files button on the Solution Explorer toolbar. It is the second button from the left, just above the word Solution inside the Solution Explorer. If you allow your mouse to hover above this button, you will see the hint Show All Files, as shown in the above diagram. Clicking this button will show all files and directories on your hard disk under the project folder, even those items that are not included in the project. Make sure that you don't have the solution item selected; otherwise, you can't see the Show All Files button.
2. Change the default namespace of the project. From the Solution Explorer, right-click on the HelloWorldService project and select Properties from the context menu, or go to menu item Project | HelloWorldService Properties…. You will see the project properties dialog window. On the Application tab, change the Default namespace to MyWCFServices.

Lastly, in order to develop a WCF service, we need to add a reference to the ServiceModel namespace. On the Solution Explorer window, right-click on the HelloWorldService project, and select Add Reference… from the context menu. You can also go to the menu item Project | Add Reference… to do this. The Add Reference dialog window should appear on your screen. Select System.ServiceModel from the .NET tab, and click OK. Now, on the Solution Explorer, if you expand the references of the HelloWorldService project, you will see that System.ServiceModel has been added. Also note that System.Xml.Linq is added by default. We will use this later when we query a database.

Creating the HelloWorldService service contract interface

In the previous section, we created the solution and the project for the HelloWorld WCF service. From this section on, we will start building the HelloWorld WCF service. First, we need to create the service contract interface. In the Solution Explorer, right-click on the HelloWorldService project, and select Add | New Item… from the context menu. The following Add New Item - HelloWorldService dialog window should appear on your screen. On the left-hand side of the window (Categories), select Visual C# Items as the category, and on the right-hand side of the window (Templates), select Interface as the template. At the bottom of the window, change the Name from Interface1.cs to IHelloWorldService.cs. Click the Add button. Now, an empty service interface file has been added to the project. Follow the steps below to customize it.

1. Add a using statement:

    using System.ServiceModel;

2. Add a ServiceContract attribute to the interface. This will designate the interface as a WCF service contract interface.

    [ServiceContract]

3. Add a GetMessage method to the interface. This method will take a string as the input, and return another string as the result. It also has an attribute, OperationContract.

    [OperationContract]
    String GetMessage(String name);

4. Change the interface to public.
The final content of the file IHelloWorldService.cs should look like the following:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using System.ServiceModel;

    namespace MyWCFServices
    {
        [ServiceContract]
        public interface IHelloWorldService
        {
            [OperationContract]
            String GetMessage(String name);
        }
    }

Data Migration Scenarios in SAP Business ONE Application - Part 2

Packt
27 Oct 2009
7 min read
Advanced data migration tools: xFusion Studio

For our own projects, we have adopted a tool called xFusion. Using this tool, you gain flexibility and are able to reuse migration settings for specific project environments. The tool provides connectivity to directly extract data from applications (including QuickBooks and Peachtree). In addition, it also supports building rules for data profiling, validation, and conversions. For example, our project team participated in the development of the template for the Peachtree interface. We configured the mappings from Peachtree, and connected the data with the right fields in SAP. This was then saved as a migration template. Therefore, it would be easy and straightforward to migrate data from Peachtree to SAP in any future projects.

xFusion packs save migration knowledge

Based on the concept of establishing templates for migrations, xFusion provides preconfigured templates for the SAP Business ONE application. In xFusion, templates are called xFusion packs. Please note that these preconfigured packs may include master data packs, and also xFusion packs for transaction data. The following xFusion packs are provided for an SAP Business ONE migration:

Administration
Banking
Business partner
Finance
HR
Inventory and production
Marketing documents and receipts
MRP
UDFs
Services

You can see that the packs are grouped by business object. For example, you have a group of xFusion packs for inventory and production. You can open the pack and find a group of xFusion files that contain the configuration information. If you open the inventory and production pack, a list of folders will be revealed. Each folder has a set of Excel templates and xFusion files (seen in the following screenshot). An xFusion pack essentially incorporates the configuration and data manipulation procedures required to bring data from a source into SAP. The source settings can be saved in xFusion packs so that you can reuse the knowledge with regards to data manipulation and formatting.

Data "massaging" using SQL

The key to the migration procedure is the capability to do data massaging in order to adjust formats and columns, in a step-by-step manner, based on requirements. Data manipulation is not done programmatically, but rather via a step-by-step process, where each step uses SQL statements to verify and format data. The entire process is represented visually, and thereby documents the steps required. This makes it easy to adjust settings and fine-tune them. The following applications are supported and can, therefore, be used as a source for an SAP migration (they have existing xFusion packs):

SAP Business ONE
Sage ACT!
SAP
SAP BW
Peachtree
QuickBooks
Microsoft Dynamics CRM

The following is a list of supported databases:

Oracle
ODBC
MySQL
OLE DB
SQL Server
PostgreSQL

Working with xFusion

The workflow in xFusion starts when you open an existing xFusion pack, or create a new one. In this example, an xFusion pack for business partner migration was opened. You can see the graphical representation of the migration process in the main window (in the following screenshot). Each icon in the graphical representation represents a data manipulation and formatting step. If you click on an icon, the complete path from the data source to the icon is highlighted. Therefore, you can select the previous steps to adjust the data. The core concept is that you do not directly change the input data, but define rules to convert data from the source format to the target format.
If you open an xFusion pack for the SAP Business ONE application, the target is obviously SAP Business ONE. Therefore, you need to enter the privileges and database name so that the pack knows how to access the SAP system. In addition, the source parameters need to be provided. xFusion packs come with example Excel files, and you need to select the Excel files as the relevant source. However, it is important to note that you don't have to use the Excel files. You can use any database, or other source, as long as you adjust the data format using the step-by-step process to represent the same format as provided in Excel. In xFusion, you can use the sample files that come in Excel format. The connection parameters are presented once you double-click on any of the connections listed in the Connections section. It is recommended to click on Test Connection to verify the proper parameters. If all of the connections are right, you can run a migration from the source to the target by right-clicking on an icon and selecting Run Export, as shown here. The progress of the export is visually documented, so you can verify its success. There is also a log file in the directory where the currently utilized xFusion pack resides, as shown in the following screenshot.

Tips and recommendations for your own project

Now you know all of the main migration tools and methods. If you want to select the right tool and method for your specific situation, you will see that even though there may be many templates and preconfigured packs out there, your own project potentially comes with some individual aspects. When organizing the data migration project, use the project task skeleton I provided. It is important to subdivide the required migration steps into a group of easy-to-understand steps, where data can be verified at each level. If it gets complicated, it is probably not the right way to move forward, and you need to re-think the methods and tools you are using.

Common issues

The most common issue I have found in similar projects is that the data to be migrated is not entirely clean and consistent. Therefore, be sure to use a data verification procedure at each step. Don't just import data, only to find out later that the database is overloaded with data that is not right.

Recommendation

Separate the master data and the transaction data. If you don't want to lose valuable transaction data, you can establish a reporting database which will save all of the historic transactions. For example, sales history can easily be migrated to an SQL database. You can then provide access to this information from the required SAP forms using queries or Crystal Reports.

Case study

During the course of evaluating the data import features available in the SAP Business ONE application, we have already learned how to import business partner information and item data. This can easily be done using the standard SAP data import features based on Excel or text files. Using this method allows the lead, customer, and vendor data to be imported. Let's say that the Lemonade Stand enterprise has salespeople who travel to trade fairs and collect contact information. We can import the address information using the proven BP import method. But after this data is imported, what would the next step be? It would be a good idea to create and manage opportunities based on the address material. Basically, you already know how to use Excel to bring over address information. Let's enhance this concept to bring over opportunity information.
We will use xFusion to import opportunity data into the SAP Business ONE application. The basis will be the xFusion pack for opportunities.
Importing sales opportunities for the Lemonade Stand
The xFusion pack is open, and you can see that it is a nice and clean example without major complexity. That's how it should be, as you see here:
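The screenshot itself cannot be reproduced here, but the idea behind the pack is easy to sketch. Before opportunity rows are pushed into SAP Business ONE, a massaging step would typically verify that every opportunity refers to a business partner that already exists from the earlier address import. The following query is only a hypothetical example of such a step: the staging table STG_OPPORTUNITY and its columns are assumptions for illustration, while OCRD is the business partner master table in an SAP Business ONE company database.

-- Hypothetical verification step: list staged opportunities whose
-- business partner code does not yet exist in the company database.
SELECT s.CardCode, s.OpportunityName, s.PotentialAmount
FROM   STG_OPPORTUNITY s
LEFT JOIN OCRD bp ON bp.CardCode = s.CardCode
WHERE  bp.CardCode IS NULL;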

New SOA Capabilities in BizTalk Server 2009: UDDI Services

Packt
27 Oct 2009
6 min read
All truths are easy to understand once they are discovered; the point is to discover them. - Galileo Galilei
What is UDDI?
Universal Description, Discovery, and Integration (UDDI) is a type of registry whose primary purpose is to represent information about web services. It describes the service providers, the services those providers offer, and in some cases, the specific technical specifications for interacting with those services. While UDDI was originally envisioned as a public, platform-independent registry that companies could exploit for listing and consuming services, it seems that many have chosen instead to use UDDI as an internal resource for categorizing and describing their available enterprise services. Besides simply listing available services for others to search and peruse, UDDI is arguably most beneficial for those who wish to perform runtime binding to service endpoints. Instead of hard-coding a service path in a client application, one may query UDDI for a particular service's endpoint and apply it to their active service call. While UDDI is typically used for web services, nothing prevents someone from storing information about any particular transport and allowing service consumers to discover and do runtime resolution to these endpoints. As an example, this is useful if you have an environment with primary, backup, and disaster access points and want your application to be able to gracefully look up and fail over to the next available service environment. In addition, UDDI can be of assistance if an application is deployed globally but you wish for regional consumers to look up and resolve against the closest geographical endpoint. UDDI has a few core hierarchy concepts that you must grasp to fully comprehend how the registry is organized. The most important ones are included here:
BusinessEntity (shown as Provider in Microsoft UDDI Services): the service providers. May be an organization, business unit, or functional area.
BusinessService (shown as Service): a general reference to a business service offered by a provider. May be a logical grouping of actual services.
BindingTemplate (shown as Binding): the technical details of an individual service, including its endpoint.
tModel, or Technical Model (shown as tModel): represents metadata for categorization or description, such as transport or protocol.
As far as relationships between these entities go, a Business Entity may contain many Business Services, which in turn can have multiple Binding Templates. A binding may reference multiple tModels, and tModels may be reused across many Binding Templates.
What's new in UDDI version three?
The latest UDDI specification calls out multiple-registry environments, support for digital signatures applied to UDDI entries, more complex categorization, wildcard searching, and a subscription API. We'll spend a bit of time on that last one in a few moments. Let's take a brief lap around the Microsoft UDDI Services offering. For practical purposes, consider the UDDI Services to be made up of two parts: an Administration Console and a web site. The web site is actually broken up into both a public-facing and an administrative interface, but we'll talk about them as one unit. The UDDI Configuration Console is the place to set service-wide settings ranging from the extent of logging to permissions and site security. The site node (named UDDI) has settings for permission account groups, security settings (see below), and subscription notification thresholds, among others.
The web node, which resides immediately beneath the parent, controls web site settings such as logging level and target database. Finally, the notification node manages settings related to the new subscription notification feature and identically matches the categories of the web node. The UDDI Services web site, found at http://localhost/uddi/, is the destination for physically listing, managing, and configuring services. The Search page enables querying by a wide variety of criteria including category, services, service providers, bindings, and tModels. The Publish page is where you go to add new services to the registry or edit the settings of existing ones. Finally, the Subscription page is where the new UDDI version three capability of registry notification is configured. We will demonstrate this feature later in this article.
How to add services to the UDDI registry?
Now we're ready to add new services to our UDDI registry. First, let's go to the Publish page and define our Service Provider and a pair of categorical tModels. To add a new Provider, we right-click the Provider node in the tree and choose Add Provider. Once a provider is created and named, we have the choice of adding all types of context characteristics such as contact name(s), categories, relationships, and more. I'd like to add two tModel categories to my environment: one to identify which type of environment the service references (development, test, staging, production) and another to flag which type of transport it uses (Basic HTTP, WS HTTP, and so on). To add a tModel, simply right-click the tModels node and choose Add tModel. This first one is named biztalksoa:runtimeresolution:environment. After adding one more tModel for biztalksoa:runtimeresolution:transporttype, we're ready to add a service to the registry. Right-click the BizTalkSOA provider and choose Add Service. Set the name of this service to BatchMasterService. Next, we want to add a binding (or access point) for this service, which describes where the service endpoint is physically located. Switch to the Bindings tab of the service definition and choose New Binding. We need a new access point, so I pointed to our proxy service created earlier and identified it as an endPoint. Finally, let's associate the two new tModel categories with our service. Switch to the Categories tab, and choose Add Custom Category. We're asked to search for a tModel, which represents our category, so a wildcard entry such as %biztalksoa% is a valid search criterion. After selecting the environment category, we're asked for the key name and value. The key "name" is purely a human-friendly representation of the data, whereas the tModel identifier and the key value comprise the actual name-value pair. I've entered production as the value on the environment category, and WS-Http as the key value on the transporttype category. At this point, we have a service sufficiently configured in the UDDI directory so that others can discover and dynamically resolve against it.
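To make the runtime-resolution idea concrete, here is a small, hedged sketch of how a consumer could query the registry for BatchMasterService at startup. Rather than relying on any particular UDDI SDK, it posts a UDDI version two find_service inquiry by hand using only standard .NET classes; the inquiry URL is an assumption and should be replaced with the inquire endpoint of your UDDI Services installation, and you should confirm that your registry accepts version two inquiry messages.

// Minimal sketch of a UDDI inquiry issued over plain HTTP.
using System;
using System.IO;
using System.Net;
using System.Text;

class UddiInquirySketch
{
    static void Main()
    {
        string inquireUrl = "http://localhost/uddipublic/inquire.asmx"; // assumed path
        string request =
            "<?xml version=\"1.0\" encoding=\"utf-8\"?>" +
            "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">" +
            "<soap:Body>" +
            "<find_service generic=\"2.0\" xmlns=\"urn:uddi-org:api_v2\">" +
            "<name>BatchMasterService</name>" +
            "</find_service>" +
            "</soap:Body></soap:Envelope>";

        HttpWebRequest http = (HttpWebRequest)WebRequest.Create(inquireUrl);
        http.Method = "POST";
        http.ContentType = "text/xml; charset=utf-8";
        http.Headers.Add("SOAPAction", "\"\"");

        byte[] payload = Encoding.UTF8.GetBytes(request);
        using (Stream body = http.GetRequestStream())
        {
            body.Write(payload, 0, payload.Length);
        }

        using (WebResponse response = http.GetResponse())
        using (StreamReader reader = new StreamReader(response.GetResponseStream()))
        {
            // The serviceList returned here carries the serviceKey; a follow-up
            // get_serviceDetail call returns the bindingTemplate whose accessPoint
            // is the URL you would hand to your service client at runtime.
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}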

New SOA Capabilities in BizTalk Server 2009: WCF SQL Server Adapter

Packt
26 Oct 2009
3 min read
Do not go where the path may lead; go instead where there is no path and leave a trail. - Ralph Waldo Emerson
Many of the patterns and capabilities shown in this article are compatible with the last few versions of the BizTalk Server product. So what's new in BizTalk Server 2009? BizTalk Server 2009 is the sixth formal release of the BizTalk Server product. This upcoming release has a heavy focus on platform modernization through new support for Windows Server 2008, Visual Studio .NET 2008, SQL Server 2008, and the .NET Framework 3.5. This will surely help developers who have already moved to these platforms in their day-to-day activities but have been forced to maintain separate environments solely for BizTalk development efforts. Let's get started.
What is the WCF SQL Adapter?
The BizTalk Adapter Pack 2.0 now contains five system and data adapters including SAP, Siebel, Oracle databases, Oracle applications, and SQL Server. What are these adapters, and how are they different from the adapters available in previous versions of BizTalk? Up until recently, BizTalk adapters were built using a commonly defined BizTalk Adapter Framework. This framework prescribed interfaces and APIs for adapter developers in order to elicit a common look and feel for the users of the adapters. Moving forward, adapter developers are encouraged by Microsoft to use the new WCF LOB Adapter SDK. As you can guess from the name, this new adapter framework, which can be considered an evolution of the BizTalk Adapter Framework, is based on WCF technologies. All of the adapters in the BizTalk Adapter Pack 2.0 are built upon the WCF LOB Adapter SDK. What this means is that all of the adapters are built as reusable, metadata-rich components that are surfaced to users as WCF bindings. So much like you have a wsHttp or netTcp binding, now you have a sqlBinding or sapBinding. As you would expect from a WCF binding, there is a rich set of configuration attributes for these adapters, and they are no longer tightly coupled to BizTalk itself. Microsoft has made connection a commodity, and no longer do organizations have to spend tens of thousands of dollars to connect to line-of-business systems like SAP through expensive, BizTalk-only adapters. This latest version of the BizTalk Adapter Pack now includes a SQL Server adapter, which replaces the legacy BizTalk-only SQL Server adapter. What do we get from this SQL Server adapter that makes it so much better than the old one? The following comparison lists each feature with the Classic SQL Adapter first and the WCF SQL Adapter second:
Execute create-read-update-delete statements on tables and views; execute stored procedures and generic T-SQL statements: Classic - Partial (send operations only support stored procedures and updategrams); WCF - Yes
Database polling via FOR XML: Classic - Yes; WCF - Yes
Database polling via traditional tabular results: Classic - No; WCF - Yes
Proactive database push via SQL Query Notification: Classic - No; WCF - Yes
Expansive adapter configuration which impacts connection management and transaction behavior: Classic - No; WCF - Yes
Support for composite transactions which allow aggregation of operations across tables or procedures into a single atomic transaction: Classic - No; WCF - Yes
Rich metadata browsing and retrieval for finding and selecting database operations: Classic - No; WCF - Yes
Support for the latest data types (e.g. XML) and the SQL Server 2008 platform: Classic - No; WCF - Yes
Reusable outside of BizTalk applications by WCF or basic HTTP clients: Classic - No; WCF - Yes
Adapter extension and configuration through out-of-the-box WCF components or custom WCF behaviors: Classic - No; WCF - Yes
Dynamic WSDL generation which always reflects the current state of the system, instead of a fixed contract which always requires explicit updates: Classic - No; WCF - Yes
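Because the adapter is surfaced as an ordinary WCF binding, a non-BizTalk client can reach it through standard WCF configuration. The fragment below is only a sketch of what such a client endpoint might look like: the contract name ITableOperations and the server and database names in the mssql:// address are assumptions made for illustration, the sqlBinding extension is typically registered by the Adapter Pack setup, and the exact URI format and binding settings should be checked against the Adapter Pack documentation for your installation.

<system.serviceModel>
  <client>
    <!-- Hypothetical client endpoint for the WCF SQL adapter binding. -->
    <endpoint name="SqlAdapterEndpoint"
              address="mssql://DBSERVER//AdventureWorks?"
              binding="sqlBinding"
              contract="ITableOperations" />
  </client>
</system.serviceModel>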

JBoss Tools Palette

Packt
26 Oct 2009
4 min read
By default, JBoss Tools Palette is available in the Web Development perspective that can be displayed from the Window menu by selecting the Open Perspective | Other option. In the following screenshot, you can see the default look of this palette: Let's dissect this palette to see how it makes our life easier! JBoss Tools Palette Toolbar Note that on the top right corner of the palette, we have a toolbar made of three buttons (as shown in the following screenshot). They are (from left to right): Palette Editor Show/Hide Import Each of these buttons accomplishes different tasks for offering a high level of flexibility and customizability. Next, we will focus our attention on each one of these buttons. Palette Editor Clicking on the Palette Editor icon will display the Palette Editor window (as shown in the following screenshot), which contains groups and subgroups of tags that are currently supported. Also, from this window you can create new groups, subgroups, icons, and of course, tags—as you will see in a few moments. As you can see, this window contains two panels: one for listing groups of tag libraries (left side) and another that displays details about the selected tag and allows us to modify the default values (extreme right). Modifying a tag is a very simple operation that can be done like this: Select from the left panel the tag that you want to modify (for example, the <div> tag from the HTML | Block subgroup, as shown in the previous screenshot). In the right panel, click on the row from the value column that corresponds to the property that you want to modify (the name column). Make the desirable modification(s) and click the OK button for confirming it (them). Creating a set of icons The Icons node from the left panel allows you to create sets of icons and import new icons for your tags. To start, you have to right-click on this node and select the Create | Create Set option from the contextual menu (as shown in the following screenshot). This action will open the Add Icon Set window where you have to specify a name for this new set. Once you're done with the naming, click on the Finish button (as shown in the following screenshot). For example, we have created a set named eHTMLi: Importing an icon You can import a new icon in any set of icons by right-clicking on the corresponding set and selecting the Create | Import Icon option from the contextual menu (as shown in the following screenshot): This action will open the Add Icon window, where you have to specify a name and a path for your icon, and then click on the Finish button (as shown in the following screenshot). Note that the image of the icon should be in GIF format. Creating a group of tag libraries As you can see, the JBoss Tools Palette has a consistent default set of groups of tag libraries, like HTML, JSF, JSTL, Struts, XHTML, etc. If these groups are insufficient, then you can create new ones by right-clicking on the Palette node and selecting the Create | Create Group option from the contextual menu (as shown in the following screenshot). This action will open the Create Group window, where you have to specify a name for the new group, and then click on Finish. For example, we have created a group named mygroup: Note that you can delete (only groups created by the user) or edit groups (any group) by selecting the Delete or Edit options from the contextual menu that appears when you right-click on the chosen group. 
Creating a tag library Now that we have created a group, it's time to create a library (or a subgroup). To do this, you have to right-click on the new group and select the Create Group option from the contextual menu (as shown in the following screenshot). This action will open the Add Palette Group window, where you have to specify a name and an icon for this library, and then click on the Finish button (as shown in the following screenshot). As an example, we have created a library named eHTML with an icon that we had imported in the Importing an icon section discussed earlier in this article: Note that you can delete a tag library (only tag libraries created by the user) by selecting the Delete option from the contextual menu that appears when you right-click on the chosen library.

Working with Simple Associations using CakePHP

Packt
24 Oct 2009
5 min read
Database relationship is hard to maintain even for a mid-sized PHP/MySQL application, particularly, when multiple levels of relationships are involved because complicated SQL queries are needed. CakePHP offers a simple yet powerful feature called 'object relational mapping' or ORM to handle database relationships with ease.In CakePHP, relations between the database tables are defined through association—a way to represent the database table relationship inside CakePHP. Once the associations are defined in models according to the table relationships, we are ready to use its wonderful functionalities. Using CakePHP's ORM, we can save, retrieve, and delete related data into and from different database tables with simplicity, in a better way—no need to write complex SQL queries with multiple JOINs anymore! In this article by Ahsanul Bari and Anupom Syam, we will have a deep look at various types of associations and their uses. In particular, the purpose of this article is to learn: How to figure out association types from database table relations How to define different types of associations in CakePHP models How to utilize the association for fetching related model data How to relate associated data while saving There are basically 3 types of relationship that can take place between database tables: one-to-one one-to-many many-to-many The first two of them are simple as they don't require any additional table to relate the tables in relationship. In this article, we will first see how to define associations in models for one-to-one and one-to-many relations. Then we will look at how to retrieve and delete related data from, and save data into, database tables using model associations for these simple associations. Defining One-To-Many Relationship in Models To see how to define a one-to-many relationship in models, we will think of a situation where we need to store information about some authors and their books and the relation between authors and books is one-to-many. This means an author can have multiple books but a book belongs to only one author (which is rather absurd, as in real life scenario a book can also have multiple authors). We are now going to define associations in models for this one-to-many relation, so that our models recognize their relations and can deal with them accordingly. Time for Action: Defining One-To-Many Relation Create a new database and put a fresh copy of CakePHP inside the web root. Name the database whatever you like but rename the cake folder to relationship. Configure the database in the new Cake installation. 
Execute the following SQL statements in the database to create a table named authors:

CREATE TABLE `authors` (
  `id` int(11) NOT NULL AUTO_INCREMENT PRIMARY KEY,
  `name` varchar(127) NOT NULL,
  `email` varchar(127) NOT NULL,
  `website` varchar(127) NOT NULL
);

Create a books table in our database by executing the following SQL commands:

CREATE TABLE `books` (
  `id` int(11) NOT NULL AUTO_INCREMENT PRIMARY KEY,
  `isbn` varchar(13) NOT NULL,
  `title` varchar(64) NOT NULL,
  `description` text NOT NULL,
  `author_id` int(11) NOT NULL
);

Create the Author model using the following code (/app/models/author.php):

<?php
class Author extends AppModel {
    var $name = 'Author';
    var $hasMany = 'Book';
}
?>

Use the following code to create the Book model (/app/models/book.php):

<?php
class Book extends AppModel {
    var $name = 'Book';
    var $belongsTo = 'Author';
}
?>

Create a controller for the Author model with the following code (/app/controllers/authors_controller.php):

<?php
class AuthorsController extends AppController {
    var $name = 'Authors';
    var $scaffold;
}
?>

Use the following code to create a controller for the Book model (/app/controllers/books_controller.php):

<?php
class BooksController extends AppController {
    var $name = 'Books';
    var $scaffold;
}
?>

Now, go to the following URLs and add some test data: http://localhost/relationship/authors/ and http://localhost/relationship/books/
What Just Happened?
We have created two tables: authors and books, for storing author and book information. A foreign key named author_id is added to the books table to establish the one-to-many relation between authors and books. Through this foreign key, an author is related to multiple books, and a book is related to one single author. By Cake convention, the name of a foreign key should be the underscored, singular name of the target model, suffixed with _id. Once the database tables are created and relations are established between them, we can define associations in models. In both of the model classes, Author and Book, we defined associations to represent the one-to-many relationship between the corresponding two tables. CakePHP provides two types of association, hasMany and belongsTo, to define one-to-many relations in models. These associations are very appropriately named: as an author 'has many' books, the Author model should have a hasMany association to represent its relation with the Book model; as a book 'belongs to' one author, the Book model should have a belongsTo association to denote its relation with the Author model. In the Author model, an association attribute $hasMany is defined with the value Book to inform the model that every author can be related to many books. We also added a $belongsTo attribute in the Book model and set its value to Author to let the Book model know that every book is related to only one author. After defining the associations, two controllers were created for both of these models with scaffolding to see how the associations are working.
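To see the payoff of these associations, it helps to look at what a find call returns once hasMany and belongsTo are in place. The following controller action is a small sketch in CakePHP 1.2 style, not part of the article's scaffolded code, showing how related Book records come back nested under each Author without writing any JOIN by hand:

<?php
class AuthorsController extends AppController {
    var $name = 'Authors';

    function listing() {
        // Each element of $authors contains an 'Author' key with the author
        // row and a 'Book' key holding all of that author's related books.
        $authors = $this->Author->find('all');
        $this->set('authors', $authors);
    }
}
?>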

RSS Web Widget

Packt
24 Oct 2009
8 min read
What is an RSS Feed? First of all, let us understand what a web feed is. Basically, it is a data format that provides frequently updated content to users. Content distributors syndicate the web feed, allowing users to subscribe, by using feed aggregator. RSS feeds contain data in an XML format. RSS is the term used for describing Really Simple Syndication, RDF Site Summary, or Rich Site Summary, depending upon the different versions. RDF (Resource Description Framework), a family of W3C specification, is a data model format for modelling various information such as title, author, modified date, content etc through variety of syntax format. RDF is basically designed to be read by computers for exchanging information. Since, RSS is an XML format for data representation, different authorities defined different formats of RSS across different versions like 0.90, 0.91, 0.92, 0.93, 0.94, 1.0 and 2.0. The following table shows when and by whom were the different RSS versions proposed. RSS Version Year Developer's Name RSS 0.90 1999 Netscape introduced RSS 0.90. RSS 0.91 1999 Netscape proposed the simpler format of RSS 0.91. 1999 UserLand Software proposed the RSS specification. RSS 1.0 2000 O'Reilly released RSS 1.0. RSS 2.0 2000 UserLand Software proposed the further RSS specification in this version and it is the most popular RSS format being used these days. Meanwhile, Harvard Law school is responsible for the further development of the RSS specification. There had been a competition like scenario for developing the different versions of RSS between UserLand, Netscape and O'Reilly before the official RSS 2.0 specification was released. For a detailed history of these different versions of RSS you can check http://www.rss-specifications.com/history-rss.htm The current version RSS is 2.0 and it is the common format for publishing RSS feeds these days. Like RSS, there is another format that uses the XML language for publishing web feeds. It is known as ATOM feed, and is most commonly used in Wiki and blogging software. Please refer to http://en.wikipedia.org/wiki/ATOM for detail. The following is the RSS icon that denotes links with RSS feeds. If you're using Mozilla's Firefox web browser then you're likely to see the above image in the address bar of the browser for subscribing to an RSS feed link available in any given page. Web browsers like Firefox and Safari discover available RSS feeds in web pages by looking at the Internet media type application/rss+xml. The following tag specifies that this web page is linked with the RSS feed URL: http://www.example.com/rss.xml<link href="http://www.example.com/rss.xml" rel="alternate" type="application/rss+xml" title="Sitewide RSS Feed" /> Example of RSS 2.0 format First of all, let’s look at a simple example of the RSS format. <?xml version="1.0" encoding="UTF-8" ?><rss version="2.0"><channel> <title>Title of the feed</title> <link>http://www.examples.com</link> <description>Description of feed</description> <item> <title>News1 heading</title> <link>http://www.example.com/news-1</link> <description>detail of news1 </description> </item> <item> <title>News2 heading</title> <link>http://www.example.com/news-2</link> <description>detail of news2 </description> </item></channel></rss> The first line is the XML declaration that indicates its version is 1.0. The character encoding is UTF-8. UTF-8 characters support many European and Asian characters so it is widely used as character encoding in web. 
The next line is the rss declaration, which declares that it is a RSS document of version 2.0 The next line contains the <channel> element which is used for describing the detail of the RSS feed. The <channel> element must have three required elements <title>, <link> and <description>. The title tag contains the title of that particular feed. Similarly, the link element contains the hyperlink of the channel and the description tag describes or carries the main information of the channel. This tag usually contains the information in detail. Furthermore, each <channel> element may have one or more <item> elements which contain the story of the feed. Each <item> element must have the three elements <title>, <link> and <description> whose use is similar to those of channel elements, but they describe the details of each individual items. Finally, the last two lines are the closing tags for the <channel> and <rss> elements. Creating RSS Web Widget The RSS widget we're going to build is a simple one which displays the headlines from the RSS feed, along with the title of the RSS feed. This is another widget which uses some JavaScript, PHP CSS and HTML. The content of the widget is displayed within an Iframe so when you set up the widget, you've to adjust the height and width. To parse the RSS feed in XML format, I've used the popular PHP RSS Parser – Magpie RSS. The homepage of Magpie RSS is located at http://magpierss.sourceforge.net/. Introduction to Magpie RSS Before writing the code, let's understand what the benefits of using the Magpie framework are, and how it works. It is easy to use. While other RSS parsers are normally limited for parsing certain RSS versions, this parser parses most RSS formats i.e. RSS 0.90 to 2.0, as well as ATOM feed. Magpie RSS supports Integrated Object Cache which means that the second request to parse the same RSS feed is fast— it will be fetched from the cache. Now, let's quickly understand how Magpie RSS is used to parse the RSS feed. I'm going to pick the example from their homepage for demonstration. require_once 'rss_fetch.inc';$url = 'http://www.getacoder.com/rss.xml';$rss = fetch_rss($url);echo "Site: ", $rss->channel['title'], "<br>";foreach ($rss->items as $item ) { $title = $item[title]; $url = $item[link]; echo "<a href=$url>$title</a></li><br>";} If you're more interested in trying other PHP RSS parsers then you might like to check out SimplePie RSS Parser (http://simplepie.org/) and LastRSS (http://lastrss.oslab.net/). You can see in the first line how the rss_fetch.inc file is included in the working file. After that, the URL of the RSS feed from getacoder.com is assigned to the $url variable. The fetch_rss() function of Magpie is used for fetching data and converting this data into RSS Objects. In the next line, the title of RSS feed is displayed using the code $rss->channel['title']. The other lines are used for displaying each of the RSS feed's items. Each feed item is stored within an $rss->items array, and the foreach() loop is used to loop through each element of the array. Writing Code for our RSS Widget As I've already discussed, this widget is going to use Iframe for displaying the content of the widget, so let's look at the JavaScript code for embedding Iframe within the HTML code. 
var widget_string = '<iframe src="http://www.yourserver.com/rsswidget/rss_parse_handler.php?rss_url=';widget_string += encodeURIComponent(rss_widget_url);widget_string += '&maxlinks='+rss_widget_max_links;widget_string += '" height="'+rss_widget_height+'" width="'+rss_widget_width+'"';widget_string += ' style="border:1px solid #FF0000;"';widget_string +=' scrolling="no" frameborder="0"></iframe>';document.write(widget_string); In the above code, the widget string variable contains the string for displaying the widget. The source of Iframe is assigned to rss_parse_handler.php. The URL of the RSS feed, and the headlines from the feed are passed to rss_parse_handler.php via the GET method, using rss_url and maxlinks parameters respectively. The values to these parameters are assigned from the Javascript variables rss_widget_url and rss_widget_max_links. The width and height of the Iframe are also assigned from JavaScript variables, namely rss_widget_width and rss_widget_height. The red border on the widget is displayed by assigning 1px solid #FF0000 to the border attribute using the inline styling of CSS. Since, Inline CSS is used, the frameborder property is set to 0 (i.e. the border of the frame is zero). Displaying borders from the CSS has some benefit over employing the frameborder property. While using CSS code, 1px dashed #FF0000 (border-width border-style border-color) means you can display a dashed border (you can't using frameborder), and you can use the border-right, border-left, border-top, border-bottom attributes of CSS to display borders at specified positions of the object. The scrolling property is set to no here, which means that the scroll bar will not be displayed in the widget if the widget content overflows. If you want to show a scroll bar, then you can set this property to yes. The values of JavaScript variables like rss_widget_url, rss_widget_max_links etc come from the page where we'll be using this widget. You'll see how the values of these variables will be assigned from the section at the end where we'll look at how to use this RSS widget.  
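The iframe above points at rss_parse_handler.php, which this excerpt does not reproduce. As a rough sketch of what that script might contain (not the author's original file), it simply reads the two GET parameters, fetches the feed with Magpie's fetch_rss(), and prints the requested number of linked headlines:

<?php
// Hypothetical sketch of rss_parse_handler.php.
require_once 'rss_fetch.inc';

$rss_url  = isset($_GET['rss_url']) ? $_GET['rss_url'] : '';
$maxlinks = isset($_GET['maxlinks']) ? (int) $_GET['maxlinks'] : 5;

$rss = fetch_rss($rss_url);
if ($rss) {
    // Feed title, followed by up to $maxlinks headlines.
    echo '<strong>' . htmlspecialchars($rss->channel['title']) . '</strong><br>';
    $count = 0;
    foreach ($rss->items as $item) {
        if ($count++ >= $maxlinks) {
            break;
        }
        $title = htmlspecialchars($item['title']);
        $url   = htmlspecialchars($item['link']);
        echo "<a href=\"$url\" target=\"_blank\">$title</a><br>";
    }
}
?>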

Service Oriented Java Business Integration - What's & Why's

Packt
23 Oct 2009
4 min read
Many of you, as (Java) programmers, generate business-purpose code, like "confirming an order" or "finding available products". At times, you may also want to connect to external systems and services, since your application in isolation will not provide all of the required functionality. When the number of such connections increases, you would be generating more and more "integration code", mixed in with your business code. For single or simple systems and services this is fine, but what if your "Enterprise" has many (say 100, or even more) such systems and services to be integrated together? Here, integration becomes a prime concern, which is separate from fulfilling your business concerns. In the SOA context, we will have services fulfilling your business use cases. Existing Java tools help us to define services. But are they enough to support Service Oriented Integration (SOI)? Perhaps not, and this is where JSR-208 (a Java Specification Request) introduces the Java Business Integration (JBI) specification. And in the world of integration, we have multiple architectures to follow, including Point-to-Point, Hub-and-Spoke, Message-Bus, and Service-Bus. Each of them has its own advantages and disadvantages, and the Enterprise Service Bus (ESB) is an architectural pattern best suited for doing SOI. This book provides a consistent style and visual representations to describe the message flows in sample scenarios, which helps the reader to understand the code samples fully. This book also presents practical advice on designing code that connects services together, based again on practical experience gathered over the last decade in Java business integration. I believe in "Practice What You Preach", and hence the book equips you with enough tools to "Practice What You Read".
What does the book have to offer, and what does it teach?
This book introduces ESB - the book guarantees you understand ESB and can also code for an ESB.
This book introduces JBI - the book doesn't reproduce the specification, but gives you just enough highlights.
Teaches you ServiceMix - ServiceMix is an Apache open source Java ESB; the book teaches you from step 1 of installation.
Teaches you to implement practical scenarios - proxies, web services gateway, web services over JMS, service versioning, etc.
Implementation of EIP - gives you code showing how to implement the Enterprise Integration Patterns of Gregor Hohpe and Bobby Woolf.
For more, have a glance through the Table of Contents [PDF].
Who would benefit from it?
Any Java/J2EE enthusiast who wants to know something more than the daily POJO, Spring, and Hibernate.
Developers and architects who deal with integration.
Developers and architects who don't consciously deal with integration - it's high time for you to separate out spaghetti integration aspects from your business-purpose code; for that, you need to first understand and sense integration!
Even people with a non-Java background - my .NET peers, don't envy the lightweight approaches described in this book using Java tools. The integration is done mostly in XML configuration with minimal Java code, and you too can benefit from the material.
Anything special about the book?
First book published on JBI.
First book published on Apache ServiceMix.
First book that shows you how to integrate following the ESB pattern, using lightweight tools.
A book with code, which makes you feel you are running and seeing the code in action, even without actually running it (though nothing prevents you from trying the samples).
You can go through a sample chapter here: JBI-Bind-Web-Services-in-ESB-Gateway.pdf [1 MB]. No heavy workshops, IDEs, studios, plugins, or 4 GB of RAM required - use a text editor and Apache Ant, and you can run the samples. The book builds on existing knowledge of web services, and is authored and reviewed by practicing architects who are developers too in their everyday role.
What is this book not about?
Not a collection of white papers alone - the book provides implementation samples.
Not a repetition of the ServiceMix online documentation - the book provides practical scenarios as samples.
Not about the JBI Service Provider Interface (SPI) - hence this book is not for tool vendors, but for developers.
Finally, if you think you need a starter before the real thing, you can go through the article titled Aggregate Services in ServiceMix JBI ESB.

Interacting with Databases through the Java Persistence API

Packt
23 Oct 2009
17 min read
We will look into: Creating our first JPA entity Interacting with JPA entities with entity manager Generating forms in JSF pages from JPA entities Generating JPA entities from an existing database schema JPA named queries and JPQL Entity relationships Generating complete JSF applications from JPA entities Creating Our First JPA Entity JPA entities are Java classes whose fields are persisted to a database by the JPA API. JPA entities are Plain Old Java Objects (POJOs), as such, they don't need to extend any specific parent class or implement any specific interface. A Java class is designated as a JPA entity by decorating it with the @Entity annotation. In order to create and test our first JPA entity, we will be creating a new web application using the JavaServer Faces framework. In this example we will name our application jpaweb. As with all of our examples, we will be using the bundled GlassFish application server. To create a new JPA Entity, we need to right-click on the project and select New | Entity Class. After doing so, NetBeans presents the New Entity Class wizard. At this point, we should specify the values for the Class Name and Package fields (Customer and com.ensode.jpaweb in our example), then click on the Create Persistence Unit... button. The Persistence Unit Name field is used to identify the persistence unit that will be generated by the wizard, it will be defined in a JPA configuration file named persistence.xml that NetBeans will automatically generate from the Create Persistence Unit wizard. The Create Persistence Unit wizard will suggest a name for our persistence unit, in most cases the default can be safely accepted. JPA is a specification for which several implementations exist. NetBeans supports several JPA implementations including Toplink, Hibernate, KODO, and OpenJPA. Since the bundled GlassFish application server includes Toplink as its default JPA implementation, it makes sense to take this default value for the Persistence Provider field when deploying our application to GlassFish. Before we can interact with a database from any Java EE 5 application, a database connection pool and data source need to be created in the application server. A database connection pool contains connection information that allow us to connect to our database, such as the server name, port, and credentials. The advantage of using a connection pool instead of directly opening a JDBC connection to a database is that database connections in a connection pool are never closed, they are simply allocated to applications as they need them. This results in performance improvements, since the operations of opening and closing database connections are expensive in terms of performance. Data sources allow us to obtain a connection from a connection pool by obtaining an instance of javax.sql.DataSource via JNDI, then invoking its getConnection() method to obtain a database connection from a connection pool. When dealing with JPA, we don't need to directly obtain a reference to a data source, it is all done automatically by the JPA API, but we still need to indicate the data source to use in the application's Persistence Unit. NetBeans comes with a few data sources and connection pools pre-configured. We could use one of these pre-configured resources for our application, however, NetBeans also allows creating these resources "on the fly", which is what we will be doing in our example. To create a new data source we need to select the New Data Source... item from the Data Source combo box. 
A data source needs to interact with a database connection pool. NetBeans comes pre-configured with a few connection pools out of the box, but just like with data sources, it allows us to create a new connection pool "on demand". In order to do this, we need to select the New Database Connection... item from the Database Connection combo box. NetBeans includes JDBC drivers for a few Relational Database Management Systems (RDBMS) such as JavaDB, MySQL, and PostgreSQL "out of the box". JavaDB is bundled with both GlassFish and NetBeans, therefore we picked JavaDB for our example. This way we avoid having to install an external RDBMS. For RDBMS systems that are not supported out of the box, we need to obtain a JDBC driver and let NetBeans know of it's location by selecting New Driver from the Name combo box. We then need to navigate to the location of a JAR file containing the JDBC driver. Consult your RDBMS documentation for details. JavaDB is installed in our workstation, therefore the server name to use is localhost. By default, JavaDB listens to port 1527, therefore that is the port we specify in the URL. We wish to connect to a database called jpaintro, therefore we specify it as the database name. Since the jpaintro database does not exist yet, we pass the attribute create=true to JavaDB, this attribute is used to create the database if it doesn't exist yet. Every JavaDB database contains a schema named APP, since each user by default uses a schema named after his/her own login name. The easiest way to get going is to create a user named "APP" and select a password for this user. Clicking on the Show JDBC URL checkbox reveals the JDBC URL for the connection we are setting up. The New Database Connection wizard warns us of potential security risks when choosing to let NetBeans remember the password for the database connection. Database passwords are scrambled (but not encrypted) and stored in an XML file under the .netbeans/[netbeans version]/config/Databases/Connections directory. If we follow common security practices such as locking our workstation when we walk away from it, the risks of having NetBeans remember database passwords will be minimal. Once we have created our new data source and connection pool, we can continue configuring our persistence unit. It is a good idea to leave the Use Java Transaction APIs checkbox checked. This will instruct our JPA implementation to use the Java Transaction API (JTA) to allow the application server to manage transactions. If we uncheck this box, we will need to manually write code to manage transactions. Most JPA implementations allow us to define a table generation strategy. We can instruct our JPA implementation to create tables for our entities when we deploy our application, to drop the tables then regenerate them when our application is deployed, or not create any tables at all. NetBeans allows us to specify the table generation strategy for our application by clicking the appropriate value in the Table Generation Strategy radio button group. When working with a new application, it is a good idea to select the Drop and Create table generation strategy. This will allow us to add, remove, and rename fields in our JPA entity at will without having to make the same changes in the database schema. When selecting this table generation strategy, tables in the database schema will be dropped and recreated, therefore any data previously persisted will be lost. 
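Behind the wizard, all of these choices end up in the persistence.xml file that NetBeans generates. The exact contents depend on the names chosen in the wizard, but with the values used in this example it would look roughly like the following sketch; the data source JNDI name jdbc/jpaintro is an assumption, so use whatever name you gave the GlassFish data source.

<?xml version="1.0" encoding="UTF-8"?>
<persistence version="1.0" xmlns="http://java.sun.com/xml/ns/persistence">
  <persistence-unit name="jpawebPU" transaction-type="JTA">
    <!-- TopLink Essentials is the default JPA provider bundled with GlassFish. -->
    <provider>oracle.toplink.essentials.PersistenceProvider</provider>
    <jta-data-source>jdbc/jpaintro</jta-data-source>
    <properties>
      <!-- Matches the "Drop and Create" table generation strategy chosen in the wizard. -->
      <property name="toplink.ddl-generation" value="drop-and-create-tables"/>
    </properties>
  </persistence-unit>
</persistence>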
Once we have created our new data source, database connection and persistence unit, we are ready to create our new JPA entity. We can do so by simply clicking on the Finish button. At this point NetBeans generates the source for our JPA entity. JPA allows the primary field of a JPA entity to map to any column type (VARCHAR, NUMBER). It is best practice to have a numeric surrogate primary key, that is, a primary key that serves only as an identifier and has no business meaning in the application. Selecting the default Primary Key type of long will allow for a wide range of values to be available for the primary keys of our entities. package com.ensode.jpaweb;import java.io.Serializable;import javax.persistence.Entity;import javax.persistence.GeneratedValue;import javax.persistence.GenerationType;import javax.persistence.Id;@Entitypublic class Customer implements Serializable { private static final long serialVersionUID = 1L; private Long id; public void setId(Long id) { this.id = id; } @Id @GeneratedValue(strategy = GenerationType.AUTO) public Long getId() { return id; } //Other generated methods (hashCode(), equals() and //toString() omitted for brevity.} As we can see, a JPA entity is a standard Java object. There is no need to extend any special class or implement any special interface. What differentiates a JPA entity from other Java objects are a few JPA-specific annotations. The @Entity annotation is used to indicate that our class is a JPA entity. Any object we want to persist to a database via JPA must be annotated with this annotation. The @Id annotation is used to indicate what field in our JPA entity is its primary key. The primary key is a unique identifier for our entity. No two entities may have the same value for their primary key field. This annotation can be placed just above the getter method for the primary key class. This is the strategy that the NetBeans wizard follows. It is also correct to specify the annotation right above the field declaration. The @Entity and the @Id annotations are the bare minimum two annotations that a class needs in order to be considered a JPA entity. JPA allows primary keys to be automatically generated. In order to take advantage of this functionality, the @GeneratedValue annotation can be used. As we can see, the NetBeans generated JPA entity uses this annotation. This annotation is used to indicate the strategy to use to generate primary keys. All possible primary key generation strategies are listed in the following table:   Primary Key Generation Strategy   Description   GenerationType.AUTO   Indicates that the persistence provider will automatically select a primary key generation strategy. Used by default if no primary key generation strategy is specified.   GenerationType.IDENTITY   Indicates that an identity column in the database table the JPA entity maps to must be used to generate the primary key value.   GenerationType.SEQUENCE   Indicates that a database sequence should be used to generate the entity's primary key value.   GenerationType.TABLE   Indicates that a database table should be used to generate the entity's primary key value.       In most cases, the GenerationType.AUTO strategy works properly, therefore it is almost always used. For this reason the New Entity Class wizard uses this strategy. When using the sequence or table generation strategies, we might have to indicate the sequence or table used to generate the primary keys. 
These can be specified by using the @SequenceGenerator and @TableGenerator annotations, respectively. Consult the Java EE 5 JavaDoc at http://java.sun.com/javaee/5/docs/api/ for details. For further knowledge on primary key generation strategies you can refer EJB 3 Developer Guide by Michael Sikora, which is another book by Packt Publishing (http://www.packtpub.com/developer-guide-for-ejb3/book). Adding Persistent Fields to Our Entity At this point, our JPA entity contains a single field, its primary key. Admittedly not very useful, we need to add a few fields to be persisted to the database. package com.ensode.jpaweb;import java.io.Serializable;import javax.persistence.Entity;import javax.persistence.GeneratedValue;import javax.persistence.GenerationType;import javax.persistence.Id;@Entitypublic class Customer implements Serializable { private static final long serialVersionUID = 1L; private Long id; private String firstName; private String lastName; public void setId(Long id) { this.id = id; } @Id @GeneratedValue(strategy = GenerationType.AUTO) public Long getId() { return id; } public String getFirstName() { return firstName; } public void setFirstName(String firstName) { this.firstName = firstName; } public String getLastName() { return lastName; } public void setLastName(String lastName) { this.lastName = lastName; } //Additional methods omitted for brevity} In this modified version of our JPA entity, we added two fields to be persisted to the database; firstName will be used to store the user's first name, lastName will be used to store the user's last name. JPA entities need to follow standard JavaBean coding conventions. This means that they must have a public constructor that takes no arguments (one is automatically generated by the Java compiler if we don't specify any other constuctors), and all fields must be private, and accessed through getter and setter methods. Automatically Generating Getters and Setters In NetBeans, getter and setter methods can be generated automatically. Simply declare new fields as usual then use the "insert code" keyboard shortcut (default is Alt+Insert), then select Getter and Setter from the resulting pop-up window, then click on the check box next to the class name to select all fields, then click on the Generate button. Before we can use JPA persist our entity's fields into our database, we need to write some additional code. Creating a Data Access Object (DAO) It is a good idea to follow the DAO design pattern whenever we write code that interacts with a database. The DAO design pattern keeps all database access functionality in DAO classes. This has the benefit of creating a clear separation of concerns, leaving other layers in our application, such as the user interface logic and the business logic, free of any persistence logic. There is no special procedure in NetBeans to create a DAO. We simply follow the standard procedure to create a new class by selecting File | New, then selecting Java as the category and the Java Class as the file type, then entering a name and a package for the class. In our example, we will name our class CustomerDAO and place it in the com.ensode.jpaweb package. At this point, NetBeans create a very simple class containing only the package and class declarations. To take complete advantage of Java EE features such as dependency injection, we need to make our DAO a JSF managed bean. 
This can be accomplished by simply opening faces-config.xml, clicking its XML tab, then right-clicking on it and selecting JavaServer Faces | Add Managed Bean. We get the Add Manged Bean dialog as seen here: We need to enter a name, fully qualified name, and scope for our managed bean (which, in our case, is our DAO), then click on the Add button. This action results in our DAO being declared as a managed bean in our application's faces-config.xml configuration file. <managed-bean> <managed-bean-name>CustomerDAO</managed-bean-name> <managed-bean-class> com.ensode.jpaweb.CustomerDAO </managed-bean-class> <managed-bean-scope>session</managed-bean-scope> </managed-bean> We could at this point start writing our JPA code manually, but with NetBeans there is no need to do so, we can simply right-click on our code and select Persistence | Use Entity Manager, and most of the work is automatically done for us. Here is how our code looks like after doing this trivial procedure: package com.ensode.jpaweb;import javax.annotation.Resource;import javax.naming.Context;import javax.persistence.EntityManager;import javax.persistence.PersistenceContext;@PersistenceContext(name = "persistence/LogicalName", unitName = "jpawebPU")public class CustomerDAO { @Resource private javax.transaction.UserTransaction utx; protected void persist(Object object) { try { Context ctx = (Context) new javax.naming.InitialContext(). lookup("java:comp/env"); utx.begin(); EntityManager em = (EntityManager) ctx.lookup("persistence/LogicalName"); em.persist(object); utx.commit(); } catch (Exception e) { java.util.logging.Logger.getLogger( getClass().getName()).log( java.util.logging.Level.SEVERE, "exception caught", e); throw new RuntimeException(e); } }} All highlighted code is automatically generated by NetBeans. The main thing NetBeans does here is add a method that will automatically insert a new row in the database, effectively persisting our entity's properties. As we can see, NetBeans automatically generates all necessary import statements. Additionally, our new class is automatically decorated with the @PersistenceContext annotation. This annotation allows us to declare that our class depends on an EntityManager (we'll discuss EntityManager in more detail shortly). The value of its name attribute is a logical name we can use when doing a JNDI lookup for our EntityManager. NetBeans by default uses persistence/LogicalName as the value for this property. The Java Naming and Directory Interface (JNDI) is an API we can use to obtain resources, such as database connections and JMS queues, from a directory service. The value of the unitName attribute of the @PersistenceContext annotation refers to the name we gave our application's Persistence Unit. NetBeans also creates a new instance variable of type javax.transaction.UserTransaction. This variable is needed since all JPA code must be executed in a transaction. UserTransaction is part of the Java Transaction API (JTA). This API allows us to write code that is transactional in nature. Notice that the UserTransaction instance variable is decorated with the @Resource annotation. This annotation is used for dependency injection. in this case an instance of a class of type javax.transaction.UserTransaction will be instantiated automatically at run-time, without having to do a JNDI lookup or explicitly instantiating the class. Dependency injection is a new feature of Java EE 5 not present in previous versions of J2EE, but that was available and made popular in the Spring framework. 
With standard J2EE code, it was necessary to write boilerplate JNDI lookup code very frequently in order to obtain resources. To alleviate this situation, Java EE 5 made dependency injection part of the standard. The next thing we see is that NetBeans added a persist method that will persist a JPA entity, automatically inserting a new row containing our entity's fields into the database. As we can see, this method takes an instance of java.lang.Object as its single parameter. The reason for this is that the method can be used to persist any JPA entity (although in our example, we will use it to persist only instances of our Customer entity). The first thing the generated method does is obtain an instance of javax.naming.InitialContext by doing a JNDI lookup on java:comp/env. This JNDI name is the root context for all Java EE 5 components. The next thing the method does is initiate a transaction by invoking uxt.begin(). Notice that since the value of the utx instance variable was injected via dependency injection (by simply decorating its declaration with the @Resource annotation), there is no need to initialize this variable. Next, the method does a JNDI lookup to obtain an instance of javax.persistence.EntityManager. This class contains a number of methods to interact with the database. Notice that the JNDI name used to obtain an EntityManager matches the value of the name attribute of the @PersistenceContext annotation. Once an instance of EntityManager is obtained from the JNDI lookup, we persist our entity's properties by simply invoking the persist() method on it, passing the entity as a parameter to this method. At this point, the data in our JPA entity is inserted into the database. In order for our database insert to take effect, we must commit our transaction, which is done by invoking utx.commit(). It is always a good idea to look for exceptions when dealing with JPA code. The generated method does this, and if an exception is caught, it is logged and a RuntimeException is thrown. Throwing a RuntimeException has the effect of rolling back our transaction automatically, while letting the invoking code know that something went wrong in our method. The UserTransaction class has a rollback() method that we can use to roll back our transaction without having to throw a RunTimeException. At this point we have all the code we need to persist our entity's properties in the database. Now we need to write some additional code for the user interface part of our application. NetBeans can generate a rudimentary JSF page that will help us with this task.
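Before moving on to the generated page, it is worth sketching how the pieces fit together from the JSF side. The backing bean below is not the NetBeans-generated code, just a minimal illustration: it assumes the CustomerDAO managed bean is injected through a managed-property entry in faces-config.xml, and it lives in the same package as the DAO so that the protected persist() method is accessible.

package com.ensode.jpaweb;

public class CustomerController {

    // Injected through a managed-property entry in faces-config.xml
    // (an assumption for this sketch, not generated by the wizard).
    private CustomerDAO customerDAO;
    private Customer customer = new Customer();

    public void setCustomerDAO(CustomerDAO customerDAO) {
        this.customerDAO = customerDAO;
    }

    public Customer getCustomer() {
        return customer;
    }

    public String save() {
        // Delegates to the DAO shown above, which starts a JTA transaction
        // and calls EntityManager.persist() on the entity.
        customerDAO.persist(customer);
        return "confirmation";
    }
}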

Web scraping with Python (Part 2)

Packt
23 Oct 2009
7 min read
This article by Javier Collado expands the set of web scraping techniques shown in his previous article by looking closely into a more complex problem that cannot be solved with the tools that were explained there. For those who missed out on that article, here's the link. Web Scraping with Python This article will show how to extract the desired information using the same three steps when the web page is not written directly using HTML, but is auto-generated using JavaScript to update the DOM tree. As you may remember from that article, web scraping is the ability to extract information automatically from a set of web pages that were designed only to display information nicely to humans; but that might not be suitable when a machine needs to retrieve that information. The three basic steps that were recommended to be followed when performing a scraping task were the following: Explore the website to find out where the desired information is located in the HTML DOM tree Download as many web pages as needed Parse downloaded web pages and extract the information from the places found in the exploration step What should be taken into account when the content is not directly coded in the HTML DOM tree? The main difference, as you probably have already noted, is that using the downloading methods that were suggested in the previous article (urllib2 or mechanize) just don't work. This is because they generate an HTTP request to get the web page and deliver the received HTML directly to the scraping script. However, the pieces of information that are auto-generated by the JavaScript code are not yet in the HTML file because the code is not executed in any virtual machine as it happens when the page is displayed in a web browser. Hence, instead of relying on a library that generates HTTP requests, we need a library that behaves as a real web browser, or even better, a library that interacts with a real web browser. So that we are sure that we obtain the same data as we see when manually opening a page in a web browser. Please remember that the aim of web scraping is actually parsing the data that a human user sees, so interacting with a real web browser would be a really nice feature. Is there any tool out there to perform that? Fortunately, the answer is yes. In particular, there are a couple of tools used for web testing automation that can be used to solve the JavaScript execution problem: Selenium and Windmill . For the code samples in the sections below, Windmill is used. Any choice would be fine as both of them are well documented and stable tools ready to be used for production. Let's now follow the same three steps that were suggested in the previous article to solve the scraping of the contents of a web page that is partly generated using JavaScript code. Explore Imagine that you are a fan of NASA Image of the day gallery. You want to get a list of the names of all the images in the gallery together with the link to the whole resolution picture just in case you decide to download it later to use as a desktop wallpaper. The first thing to do is to locate the data that has to be extracted on the desired web page. 
Explore

Imagine that you are a fan of the NASA Image of the Day gallery. You want to get a list of the names of all the images in the gallery, together with the link to the full resolution picture, just in case you decide to download it later to use as a desktop wallpaper. The first thing to do is to locate the data that has to be extracted on the desired web page.

In the case of the Image of the Day gallery (see screenshot below), there are three elements that are important to note:

- The title of the image that is currently being displayed
- The link to the full resolution image file
- The Next link that makes it possible to navigate through all the images

To find out the location of each piece of interesting information, as was already suggested in the previous article, it is better to use a tool such as Firebug, whose inspect functionality can be really useful. The following picture, for example, shows the location of the image title inside an h3 tag. The other two fields can be located as easily as the title, so no further explanation will be given here; please refer to the previous article for more information.

Download

As explained in the introduction, to download the content of the web page we will use Windmill, as it allows the JavaScript code to execute in the web browser before getting the page content. Because Windmill is mostly a testing library, instead of writing a script that calls the Windmill API, I will write a test case for Windmill to navigate through all the image web pages. The code for the test should be as follows:

 1 def test_scrape_iotd_gallery():
 2     """
 3     Scrape NASA Image of the Day Gallery
 4     """
 5     # Extra data massage for BeautifulSoup
 6     my_massage = get_massage()
 7
 8     # Open main gallery page
 9     client = WindmillTestClient(__name__)
10     client.open(url='http://www.nasa.gov/multimedia/imagegallery/iotd.html')
11
12     # Page isn't completely loaded until image gallery data
13     # has been updated by javascript code
14     client.waits.forElement(xpath=u"//div[@id='gallery_image_area']/img",
15                             timeout=30000)
16
17     # Scrape all images information
18     images_info = {}
19     while True:
20         image_info = get_image_info(client, my_massage)
21
22         # Break if image has already been scraped
23         # (that means that all images have been parsed
24         # since they are ordered in a circular ring)
25         if image_info['link'] in images_info:
26             break
27
28         images_info[image_info['link']] = image_info
29
30         # Click to get the information for the next image
31         client.click(xpath=u"//div[@class='btn_image_next']")
32
33     # Print results to stdout ordered by image name
34     for image_info in sorted(images_info.values(),
35                              key=lambda image_info: image_info['name']):
36         print ("Name: %(name)s\n"
37                "Link: %(link)s\n" % image_info)

As can be seen, the usage of Windmill is similar to that of other libraries such as mechanize. First of all, a client object has to be created to interact with the browser (line 9), and later the main web page that is going to be used to navigate through all the information has to be opened (line 10). Nevertheless, Windmill also includes some facilities that take the JavaScript code into account, as shown at line 14. Here the waits.forElement method is used to wait for a DOM element that is filled in by the JavaScript code, so that when that element (in this case the big image in the gallery) is displayed, the rest of the script can proceed. It is important to note that the web page processing doesn't start when the page is downloaded (this happens after line 10), but when there is some evidence that the JavaScript code has finished manipulating the DOM tree. Navigating through all the pages that contain the needed information is just a matter of clicking the next arrow (line 30). As the images are ordered in a circular buffer, the loop stops when the same image link has been parsed twice (line 25).
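The test also relies on two helpers that are not shown in this excerpt, get_massage() and get_image_info(), as well as on Windmill's WindmillTestClient import. Purely as an illustration of what the parsing side could look like, here is a sketch written against BeautifulSoup 3; it takes the page HTML as a plain string so that it stays independent of the Windmill API, and the h3/anchor selectors are assumptions based on the exploration step rather than the article's actual code.

import re
from BeautifulSoup import BeautifulSoup  # BeautifulSoup 3.x

def get_massage():
    """Extra regex fix-ups applied before parsing; extends BeautifulSoup's
    default massage with one illustrative rule for malformed comments."""
    return BeautifulSoup.MARKUP_MASSAGE + [
        (re.compile('<!-([^-])'), lambda match: '<!--' + match.group(1)),
    ]

def parse_image_info(html, my_massage):
    """Hypothetical parser: extract the image title from the h3 tag found
    during exploration and the first link inside the gallery area."""
    soup = BeautifulSoup(html, markupMassage=my_massage)
    title_tag = soup.find('h3')
    gallery = soup.find('div', {'id': 'gallery_image_area'})
    link_tag = gallery.find('a') if gallery is not None else None
    return {
        'name': title_tag.string.strip() if title_tag and title_tag.string else None,
        'link': link_tag['href'] if link_tag is not None else None,
    }

The real get_image_info in the article takes the Windmill client instead of a string, so it can read the page as rendered by the browser after the JavaScript has run.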
To execute the script, instead of launching it as we normally would for a Python script, we should call it through the windmill command so that the environment is properly initialized:

$ windmill firefox test=nasa_iotd.py

As can be seen in the following screenshot, Windmill takes care of opening a browser window (Firefox in this case) and a controller window in which it is possible to see the commands that the script is executing (several clicks on next in the example). The controller window is really interesting because it not only displays the progress of the test cases, but also lets you enter and record actions interactively, which is a nice feature when trying things out. In particular, the recording facility can in some situations replace Firebug in the exploration step, because the captured actions can be stored in a script without spending much time on XPath expressions. For more information about how to use Windmill and its complete API, please refer to the Windmill documentation.
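Finally, once the test has printed the image names and links, fetching the full resolution files mentioned at the beginning needs nothing more than the standard library. The helper below is not part of the original article; it is a small follow-up sketch that assumes the images_info dictionary built by the test (or one rebuilt from the printed output) is available.

import os
import urllib

def download_wallpapers(images_info, target_dir='nasa_wallpapers'):
    """Fetch every full resolution image collected by the scraping test."""
    if not os.path.isdir(target_dir):
        os.makedirs(target_dir)
    for image_info in images_info.values():
        # Name each file after the last component of its URL
        filename = os.path.join(target_dir,
                                os.path.basename(image_info['link']))
        urllib.urlretrieve(image_info['link'], filename)
        print('Saved %s' % filename)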