
How-To Tutorials - Programming


Seam Conversation Management using JBoss Seam Components: Part 1

Packt
24 Dec 2009
8 min read
The JBoss Seam framework provides elegant solutions to a number of problems. One of these problems is conversation management. Traditional web applications have a limited number of scopes (container-managed memory regions) in which they can store data needed by the application at runtime. In a typical Java web application, these scopes are the application scope, the session scope, and the request scope. JSP-based Java web applications also have a page scope.

Application scope is typically used to store stateless components or long-term, read-only application data. Session scope provides convenient, medium-term storage for per-user application state, such as user credentials, application preferences, and the contents of a shopping cart. Request scope is short-term storage for per-request information, such as search keywords, data table sort direction, and so on.

Seam introduces another scope for JSF applications: the conversation scope. The conversation scope can be as short-term as the request scope, or as long-term as the session scope. Seam conversations come in two types: temporary conversations and long-running conversations. A temporary Seam conversation typically lasts as long as a single HTTP request. A long-running Seam conversation typically spans several screens and can be tied to more elaborate use cases and workflows within the application, for example, booking a hotel, renting a car, or placing an order for computer hardware.

There are some important implications for Seam's conversation management when using the Ajax capabilities of RichFaces and Ajax4jsf. As an Ajax-enabled JSF form may involve many Ajax requests before the form is "submitted" by the user at the end of a use case, some subtle side effects can impact our application if we are not careful. Let's look at an example of how to use Seam conversations effectively with Ajax.
Temporary conversations

When a Seam-enabled, conversation-scoped JSF backing bean is accessed for the first time, through a value expression or method expression from the JSF page for instance, the Seam framework creates a temporary conversation (if a conversation does not already exist) and stores the component instance in that scope. If a long-running conversation already exists, and the component invocation requires a long-running conversation (for example, by associating the view with a long-running conversation in pages.xml, by annotating the bean class or method with Seam's @Conversational annotation, by annotating a method with Seam's @Begin annotation, or by using the conversationPropagation request parameter), then Seam stores the component instance in the existing long-running conversation.

ShippingCalculatorBean.java

The following source code demonstrates how to declare a conversation-scoped backing bean using Seam annotations. In this example, we declare ShippingCalculatorBean as a Seam-managed, conversation-scoped component named shippingCalculatorBeanSeam.

```java
@Scope(ScopeType.CONVERSATION)
public class ShippingCalculatorBean implements Serializable {

    private static final long serialVersionUID = 1L;

    private Country country;
    private Product product;

    public Country getCountry() {
        return country;
    }

    public Product getProduct() {
        return product;
    }

    public Double getTotal() {
        Double total = 0d;
        if (country != null && product != null) {
            total = product.getPrice();
            // Flat shipping surcharge: 5 for the USA, 10 elsewhere.
            if (country.getName().equals("USA")) {
                total += 5d;
            } else {
                total += 10d;
            }
        }
        return total;
    }

    public void setCountry(Country country) {
        this.country = country;
    }

    public void setProduct(Product product) {
        this.product = product;
    }
}
```

faces-config.xml

We also declare the same ShippingCalculatorBean class as a request-scoped backing bean named shippingCalculatorBean in faces-config.xml.
Keep in mind that the JSF framework manages this instance of the class, so none of the Seam annotations take effect for instances of this managed bean.

```xml
<managed-bean>
  <description>Shipping calculator bean.</description>
  <managed-bean-name>shippingCalculatorBean</managed-bean-name>
  <managed-bean-class>chapter5.bean.ShippingCalculatorBean</managed-bean-class>
  <managed-bean-scope>request</managed-bean-scope>
</managed-bean>
```

pages.xml

The pages.xml file is an important Seam configuration file. When a Seam-enabled web application is deployed, the Seam framework looks for and processes a file named pages.xml in the WEB-INF directory. This file contains important information about the pages in the JSF application, and enables us to indicate whether a long-running conversation should be started automatically when a view is first accessed. In this example, we declare two pages in pages.xml, one that does not start a long-running conversation, and one that does.

```xml
<?xml version="1.0" encoding="utf-8"?>
<pages xmlns="http://jboss.com/products/seam/pages"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://jboss.com/products/seam/pages
                           http://jboss.com/products/seam/pages-2.1.xsd">
  <page view-id="/conversation01.jsf" />
  <page view-id="/conversation02.jsf">
    <begin-conversation join="true"/>
  </page>
  …
</pages>
```

conversation01.jsf

Let's look at the source code for our first Seam conversation test page. In this page, we render two forms side by side in an HTML panel grid. The first form is bound to the JSF-managed, request-scoped ShippingCalculatorBean, and the second form is bound to the Seam-managed, conversation-scoped ShippingCalculatorBean. The form allows the user to select a product and a shipping destination, and then calculates the shipping cost when the command button is clicked. When the user tabs through the fields in a form, an Ajax request is sent, submitting the form data and re-rendering the button. The button is in a disabled state until the user has selected a value in both fields.
The Ajax request creates a new HTTP request on the server, so for the first form JSF creates a new request-scoped instance of our ShippingCalculatorBean for every Ajax request. As the view is not configured to use a long-running conversation, Seam creates a new temporary conversation and stores a new instance of our ShippingCalculatorBean class in that scope for each Ajax request. The observable behavior when running this page in the browser is therefore that the calculation simply does not work: the value is always zero, because the model state is being lost due to the incorrect scoping of our backing beans.

```xml
<h:panelGrid columns="2" cellpadding="10">
  <h:form>
    <rich:panel>
      <f:facet name="header">
        <h:outputText value="Shipping Calculator (No Conversation)" />
      </f:facet>
      <h:panelGrid columns="1" width="100%">
        <h:outputLabel value="Select Product: " for="product" />
        <h:selectOneMenu id="product" value="#{shippingCalculatorBean.product}">
          <s:selectItems var="product" value="#{productBean.products}"
                         label="#{product.name}" noSelectionLabel="Select" />
          <a4j:support event="onchange" reRender="button" />
          <s:convertEntity />
        </h:selectOneMenu>
        <h:outputLabel value="Select Shipping Destination: " for="country" />
        <h:selectOneMenu id="country" value="#{shippingCalculatorBean.country}">
          <s:selectItems var="country" value="#{customerBean.countries}"
                         label="#{country.name}" noSelectionLabel="Select" />
          <a4j:support event="onchange" reRender="button"/>
          <s:convertEntity />
        </h:selectOneMenu>
        <h:panelGrid columns="1" columnClasses="centered" width="100%">
          <a4j:commandButton id="button" value="Calculate"
                             disabled="#{shippingCalculatorBean.country eq null or
                                         shippingCalculatorBean.product eq null}"
                             reRender="total" />
          <h:panelGroup>
            <h:outputText value="Total Shipping Cost: " />
            <h:outputText id="total" value="#{shippingCalculatorBean.total}">
              <f:convertNumber type="currency" currencySymbol="$"
                               maxFractionDigits="0" />
            </h:outputText>
          </h:panelGroup>
        </h:panelGrid>
      </h:panelGrid>
    </rich:panel>
  </h:form>
  <h:form>
    <rich:panel>
      <f:facet name="header">
        <h:outputText value="Shipping Calculator (with Temporary Conversation)" />
      </f:facet>
      <h:panelGrid columns="1">
        <h:outputLabel value="Select Product: " for="product" />
        <h:selectOneMenu id="product" value="#{shippingCalculatorBeanSeam.product}">
          <s:selectItems var="product" value="#{productBean.products}"
                         label="#{product.name}" noSelectionLabel="Select" />
          <a4j:support event="onchange" reRender="button" />
          <s:convertEntity />
        </h:selectOneMenu>
        <h:outputLabel value="Select Shipping Destination: " for="country" />
        <h:selectOneMenu id="country" value="#{shippingCalculatorBeanSeam.country}">
          <s:selectItems var="country" value="#{customerBean.countries}"
                         label="#{country.name}" noSelectionLabel="Select" />
          <a4j:support event="onchange" reRender="button" />
          <s:convertEntity />
        </h:selectOneMenu>
        <h:panelGrid columns="1" columnClasses="centered" width="100%">
          <a4j:commandButton id="button" value="Calculate"
                             disabled="#{shippingCalculatorBeanSeam.country eq null or
                                         shippingCalculatorBeanSeam.product eq null}"
                             reRender="total" />
          <h:panelGroup>
            <h:outputText value="Total Shipping Cost: " />
            <h:outputText id="total" value="#{shippingCalculatorBeanSeam.total}">
              <f:convertNumber type="currency" currencySymbol="$"
                               maxFractionDigits="0" />
            </h:outputText>
          </h:panelGroup>
        </h:panelGrid>
      </h:panelGrid>
    </rich:panel>
  </h:form>
</h:panelGrid>
```

The following screenshot demonstrates the problem of using request-scoped or temporary conversation-scoped backing beans in an Ajax-enabled JSF application. As an Ajax request is simply an asynchronous HTTP request marshalled by client-side code executed by the browser's JavaScript interpreter, the request-scoped backing beans are recreated with every Ajax request. The model state is lost, and the behavior of the components in the view is incorrect.
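The scoping problem above can be reduced to plain Java: if every Ajax request is handed a fresh bean instance, the value set during the previous request is gone by the time the next one arrives. The following is a minimal sketch in which the container's behavior is simulated with a Supplier per request (the names and setup are illustrative, not Seam API):

```java
import java.util.function.Supplier;

public class ScopeDemo {
    static class Bean {
        String product; // set by the first Ajax request, read by the second
    }

    // Simulate two successive Ajax requests against whatever instance
    // the "container" hands us for each request.
    static String runTwoRequests(Supplier<Bean> scope) {
        scope.get().product = "Widget"; // request 1: the user picks a product
        return scope.get().product;     // request 2: the form is submitted
    }

    public static void main(String[] args) {
        // Request scope: a new instance per request, so the state is lost.
        System.out.println(runTwoRequests(Bean::new));          // null

        // Conversation scope: the same instance across requests, so it survives.
        Bean conversation = new Bean();
        System.out.println(runTwoRequests(() -> conversation)); // Widget
    }
}
```

The first call prints null, which is exactly the "total is always zero" symptom described above; the second prints Widget, which is what the conversation-scoped form relies on.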


Seam Conversation Management using JBoss Seam Components: Part 2

Packt
24 Dec 2009
4 min read
The introductory page of the order process

The first view in our page flow is an introductory page that simply navigates to the first step in our ordering process. Notice that we use the Seam <s:link> tag to render a hyperlink that includes the conversation ID as a query string parameter. This is called conversation propagation.

Seam conversation propagation using hyperlinks

Seam automatically propagates the conversation during JSF form submissions using the HTTP POST method. For any GET request (for instance, clicking on a hyperlink), we are responsible for including the current conversation ID as a request parameter to ensure that the request is handled properly. Seam provides a hyperlink control, rendered by the <s:link> tag, that automatically includes the current conversation ID on the query string. We can also include the conversation ID as a query string parameter by nesting Seam's <s:conversationId> tag inside the standard JSF <h:outputLink> tag. Conversation ID propagation is automatic when a JSF form is submitted using POST.

The markup for the introductory screen in our order process is as follows:

```xml
<h1>Product Order Form</h1>
<a4j:form>
  <rich:panel>
    <f:facet name="header">
      <h:outputText value="Welcome to our Store" />
    </f:facet>
    <p>Welcome to our store. Our step-by-step forms will guide you
       through the ordering process.</p>
    <s:link view="/order/step1.jsf" value="Place an order" />
  </rich:panel>
</a4j:form>
```

The following screenshot shows the introductory screen of our ordering process. Notice in the status bar of the browser window that the URL generated by the Seam hyperlink control contains a query string parameter named cid with a value of 1. As long as we pass this parameter from page to page, all requests are handled as part of the same conversation. The conversation ID is submitted automatically during JSF postback requests. When a new conversation is started, Seam increments the conversation ID automatically.
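What <s:link> does for a GET request amounts to simple query-string bookkeeping. The following is a hypothetical helper (not Seam API) that shows the effect of appending the current conversation ID to a URL:

```java
public class ConversationLink {
    // Append the Seam conversation ID ("cid") to a GET URL, the way
    // <s:link> does automatically. Illustrative helper, not Seam API.
    static String propagate(String url, int conversationId) {
        String separator = url.contains("?") ? "&" : "?";
        return url + separator + "cid=" + conversationId;
    }

    public static void main(String[] args) {
        System.out.println(propagate("/order/step1.jsf", 1));
        // /order/step1.jsf?cid=1
        System.out.println(propagate("/order/step1.jsf?lang=en", 1));
        // /order/step1.jsf?lang=en&cid=1
    }
}
```

The first output matches the URL visible in the browser's status bar in the screenshot described above.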
The customer registration screen (Step 1)

The first screen in our page flow requires the user to provide customer information before placing an order. This view is basically identical to the example used in the Seam validation section of this article, so much of the JSF markup has been removed for simplicity. Notice that the action has been hardcoded in the <a4j:commandButton> tag and corresponds to a navigation rule declaration in faces-config.xml. No additional work is required for the Seam conversation ID to be propagated to the server when the form is submitted; this happens automatically.

```xml
<h1>Step 1. Customer Registration</h1>
<a4j:form id="customerForm" styleClass="customer-form">
  ...
  <a4j:commandButton value="Next Step" action="next" reRender="customerForm" />
  ...
</a4j:form>
```

The following screenshot shows the customer registration step in the online ordering page flow of our application.

The shipping information screen (Step 2)

The next screen requires the user to select a product and a shipping destination before clicking on the Next Step button. Once again, Seam conversation propagation happens automatically when the form is submitted.

The order details confirmation screen (Step 3)

The next screen requires the user to confirm the order details before submitting the order for processing. Once again, the JSF markup has been omitted for brevity. Notice that the command button invokes the submitOrder backing bean method to submit the order. As noted earlier, this method is annotated with Seam's @End annotation, indicating that the long-running conversation ends after the method is invoked. When the method returns, Seam demotes the long-running conversation to a temporary conversation and destroys it after the view is rendered. Any references to conversation-scoped beans are released when the conversation is destroyed, freeing server resources in a more fine-grained way than invalidating the whole session.
```xml
<h:form>
  ...
  <a4j:commandButton action="#{orderBean.submitOrder}" value="Submit Order" />
  ...
</h:form>
```

The following screenshot shows the order details confirmation screen.
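The lifecycle just described, in which @Begin promotes a conversation to long-running and @End demotes it back to a temporary conversation that is destroyed after the view renders, can be pictured as a small state machine. The sketch below is a conceptual model only, not Seam's internals:

```java
import java.util.HashMap;
import java.util.Map;

public class ConversationLifecycle {
    enum State { TEMPORARY, LONG_RUNNING }

    private final Map<Integer, State> conversations = new HashMap<>();
    private int nextId = 1;

    // @Begin: start a new long-running conversation and return its ID.
    int begin() {
        int id = nextId++;
        conversations.put(id, State.LONG_RUNNING);
        return id;
    }

    // @End: demote the conversation back to temporary.
    void end(int id) {
        conversations.put(id, State.TEMPORARY);
    }

    // Temporary conversations are destroyed once the view has rendered.
    void afterRender(int id) {
        if (conversations.get(id) == State.TEMPORARY) {
            conversations.remove(id);
        }
    }

    boolean isActive(int id) {
        return conversations.containsKey(id);
    }

    public static void main(String[] args) {
        ConversationLifecycle ctx = new ConversationLifecycle();
        int cid = ctx.begin();              // user starts the order page flow
        ctx.afterRender(cid);
        System.out.println(ctx.isActive(cid)); // true: survives across requests
        ctx.end(cid);                       // submitOrder() returns after @End
        ctx.afterRender(cid);
        System.out.println(ctx.isActive(cid)); // false: resources released
    }
}
```

The point of the model is the last step: ending the conversation releases only that conversation's state, which is the fine-grained cleanup the article contrasts with invalidating the whole session.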


Introducing Business Activity Monitoring

Packt
30 Nov 2009
7 min read
Typically, an organization's processes span multiple systems, channels, applications, departments, and external partners. In this case, how do we monitor such processes? What is the current state of the organization's processes? What is the benchmark that separates poorly-performing processes from exceptional ones? Most of the time, organizations are unable to answer such questions, or have only a vague idea, for various reasons: either they are monitoring the process with a very limited scope, or the mechanisms needed to make such details available are not in place. We rarely find organizations whose process owners have an end-to-end view of a process. The big picture of a process is not available to decision makers on a real-time basis.

We have also seen that a BPM cycle involves more than just automating a business process. Although modeling and analysis of the process play an important part before a process is executed, the benefit is further highlighted by using Business Rules technology for added enterprise agility. One important factor that closes the loop for BPM is monitoring the process on a continuous basis: pinpointing bottlenecks while the process is executing within the business, and acting as feedback for potential process improvement exercises. The need to monitor an organization's business processes, especially as part of a larger BPM initiative, is gaining considerable acceptance and demand. Such monitoring is the primary job of BAM.

What is BAM?

BAM allows a business to monitor its business processes, and the related business events being generated, in real time, and provides an assessment of business process health based on pre-defined KPIs. This gives relevant process owners greater operational visibility of the business for assessment and decision-making via real-time information dashboards.
BAM also allows users to take actions based on the information available on the dashboards. Typically, systems providing BAM capabilities use business events to capture information from varied sources such as ERP, workflow, BPM, and legacy systems, as well as external partners and suppliers. These data sources provide the necessary business measures, which BAM evaluates against the set KPIs, presenting the results in a user-friendly dashboard.

BPM, SOA, and BAM

BPM, SOA, and BAM can be used as independent, isolated technologies. However, their benefits are compounded for a business when they are used together in a complementary fashion. As we can see in the following reference architecture, BAM works along with the service and process components to capture event-related information, from both a business and an IT perspective, for analysis and reporting purposes. SOA enables an organization to have a robust and flexible IT infrastructure that helps it achieve its BAM goals by making events and data from different services available to BAM for decision-making and real-time analysis. In an SOA-based solution, the business events that feed BAM are provided by the services layer; the linkage is either via the BPM route or through an event-based integration layer provided by an ESB. We can refer to this relationship between BAM and SOA as Service Oriented Activity Monitoring (SOAM), as today's organizational setup provides this event information to the various BAM service interfaces exposed by the business applications in an enterprise.

In the case of BPM, the business process describes the key activities required to fulfill the specified business action and its associated KPIs. These actions are executed as transactions using an orchestration engine and the underlying service layer, and each transaction occurrence creates multiple process events, one for each step within the transaction.
BAM's primary focus is on capturing, analyzing, and reporting on the transactions and events created by processes running over the SOA platform. BAM usually collects information about a process based on the following attributes:

Quantity or volume of transactions or events: One of the primary areas covered by BAM is the volume of events generated by a process. This is not just an IT metric, but a business-related metric that helps business stakeholders analyze information points such as the number of orders shipped in a day, the number of trades made during trading hours, the number of helpdesk tickets closed by a call centre, and so on. Usually, we define these KPIs in a process definition and use BAM to raise alerts to the relevant owner, such as a portfolio manager, if the process exceeds those values, for example, "Send an alert as soon as the stock portfolio value decreases by more than 3%".

Time-bound events: In this case, BAM concentrates on time-related metrics such as the helpdesk ticket process cycle time for high-priority issues, general process cycle times, supply-related waiting time, and so on. Again, based on certain thresholds, alerts can be sent out, or reports can be viewed by management in real time using customized dashboards.

Faults: These are situations where the process is not running well. The cause could be a hardware fault, a process-related issue such as a deadlock, or some other problem.
BAM helps in these scenarios by identifying problem areas and providing important metrics on the frequency of such errors and their potential impact on process performance, as well as on other dimensions such as cost, schedules, and so on.

User-defined events and conditions: Apart from the general dimensions of volume, time, and errors in a process, a business user might want to define KPIs around specific business issues that need analysis. For example, for compliance reasons, a bank might be required to keep track of all high-value transactions to prevent money laundering. During implementation, the business analysts can define this KPI in the process model, which will then be implemented (mostly by a rules engine), and the events generated will be used by BAM to provide statistical reports and dashboards based on the frequency of these transactions, the specific regions and user types involved in them, and so on.

The real value of BAM, however, does not come only from the analysis of individual events and exceptions generated during process execution. BAM provides a mechanism to correlate aggregated process events to support cause-and-effect analysis, pattern matching, and so on, which provides immense value to today's businesses. Although not in the scope of this article, another approach gaining a lot of attention in this area is Complex Event Processing (CEP), which can be a perfect vehicle for implementing BAM in an enterprise to solve complex business issues. CEP is based on analyzing a set of specific events from a range of possible events and identifying patterns that could be meaningful for an organization. Among the many applications of CEP, one example is 'Algorithmic Trading', where CEP can be used to analyze a huge amount of market data, assess favorable patterns for trading, and initiate trades in a market based on them.
Many banks use this technology to perform low-value trades while simultaneously assessing risk positions. CEP then records this information as a 'fingerprint' and maintains a history to consult when deciding whether to execute similar trades in the future. As it gathers more experience and intelligence, a BAM tool supporting CEP can refine its predictive capabilities and conduct more efficient calculations.
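The threshold KPIs described earlier, such as the portfolio alert ("send an alert as soon as the stock portfolio value decreases by more than 3%"), reduce to a simple comparison between a business measure and its configured limit. A minimal sketch of that evaluation (the names and values are illustrative, not taken from any particular BAM product):

```java
public class KpiAlert {
    // Fire an alert when the portfolio has dropped by more than the
    // threshold percentage relative to its reference value.
    static boolean shouldAlert(double reference, double current, double thresholdPct) {
        double dropPct = (reference - current) / reference * 100.0;
        return dropPct > thresholdPct;
    }

    public static void main(String[] args) {
        System.out.println(shouldAlert(1000.0, 965.0, 3.0)); // true  (3.5% drop)
        System.out.println(shouldAlert(1000.0, 975.0, 3.0)); // false (2.5% drop)
    }
}
```

A real BAM system wraps this kind of check in event collection, correlation, and dashboarding, but the KPI evaluation at its core is exactly this comparison.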


Facelets Components in JSF 1.2

Packt
30 Nov 2009
12 min read
One of the more advanced features of the Facelets framework is the ability to define complex templates containing dynamic nested content.

What is a template?

The Merriam-Webster dictionary defines the word "template" as "a gauge, pattern, or mold (as a thin plate or board) used as a guide to the form of a piece being made" and as "something that establishes or serves as a pattern." In the context of user interface design for the Web, a template can be thought of as an abstraction of a set of pages in the web application. A template does not define content; rather, it defines placeholders for content, and provides the layout, orientation, flow, structure, and logical organization of the elements on the page. We can also think of templates as documents with "blanks" that will be filled in with real data and user interface controls at request time. One of the benefits of templating is the separation of content from presentation, making the maintenance of the views in our web application much easier.

The <ui:insert> tag has a name attribute that is used to specify a dynamic content region that will be inserted by the template client. When Facelets renders a UI composition template, it attempts to substitute any <ui:insert> tags in the Facelets template document with corresponding <ui:define> tags from the Facelets template client document. Conceptually, the Facelets composition template transformation process can be visualized as follows: the browser requests a Facelets template client document in our JSF application. This document contains two <ui:define> tags that specify named content elements and references a Facelets template document using the <ui:composition> tag's template attribute. The Facelets template document contains two <ui:insert> tags that have the same names as the <ui:define> tags in the client document, and three <ui:include> tags for the header, footer, and navigation menu.
This is a good example of the excellent support that Facelets provides for the Composite View design pattern. Facelets transforms the template client document by merging any content it defines using <ui:define> tags with the content insertion points specified in the Facelets template document using the <ui:insert> tag. The result of merging the Facelets template client document with the Facelets template document is rendered in the browser as a composite view.

While this concept may seem a bit complicated at first, it is actually a powerful feature of the Facelets view definition framework that can greatly simplify user interface templating in a web application. In fact, a Facelets composition template document can itself be a template client by referencing another composition template. In this way, a complex hierarchy of templates can be used to construct a flexible, multi-layered presentation tier for a JSF application.

Without the Facelets templating system, we would have to copy and paste view elements such as headers, footers, and menus from one page to the next to achieve a consistent look and feel across our web application. Facelets templating enables us to define our look and feel in one document and reuse it across multiple pages. Therefore, if we decide to change the look and feel, we only have to update one document and the change is immediately propagated to all the views of the JSF application. Let's look at some examples of how to use the Facelets templating feature.

A simple Facelets template

The following is an example of a simple Facelets template. It simply renders a message within an HTML <h2> element. Facelets will replace the "unnamed" <ui:insert> tag (without the name attribute) in the template document with the content of the <ui:composition> tag from the template client document.
template01.jsf

```xml
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:ui="http://java.sun.com/jsf/facelets">
<head>
  <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
  <title>Facelets template example</title>
  <link rel="stylesheet" type="text/css" href="/css/style.css" />
</head>
<body>
  <h2><ui:insert /></h2>
</body>
</html>
```

A simple Facelets template client

Let's look at a simple example of Facelets templating. The following page is a Facelets template client document. (Remember: you can identify a Facelets template client by looking for the existence of the template attribute on the <ui:composition> tag.) The <ui:composition> tag simply contains the text Hello World.

templateClient01.jsf

```xml
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:ui="http://java.sun.com/jsf/facelets">
<head>
  <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
  <title>ui:composition example</title>
</head>
<body>
  <ui:composition template="/WEB-INF/templates/template01.jsf">
    Hello World
  </ui:composition>
  <ui:debug />
</body>
</html>
```

The following screenshot displays the result of the Facelets UI composition template transformation when the browser requests templateClient01.jsf.
Another simple Facelets template client

The following Facelets template client example demonstrates how a template can be reused across multiple pages in the JSF application:

templateClient01a.jsf

```xml
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:ui="http://java.sun.com/jsf/facelets">
<head>
  <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
  <title>ui:composition example</title>
</head>
<body>
  <ui:composition template="/WEB-INF/templates/template01.jsf">
    How are you today?
  </ui:composition>
  <ui:debug />
</body>
</html>
```

The following screenshot displays the result of the Facelets UI composition template transformation when the browser requests templateClient01a.jsf.

A more complex Facelets template

The Facelets template in the previous example is quite simple and does not demonstrate some of the more advanced capabilities of Facelets templating. In particular, the template in the previous example only has a single <ui:insert> tag, with no name attribute specified. The behavior of the unnamed <ui:insert> tag is to include any content in the referencing template client page. In more complex templates, multiple <ui:insert> tags can be used to enable template client documents to define several custom content elements that will be inserted throughout the template. The following Facelets template document declares three named <ui:insert> elements. Notice carefully where these tags are located.
template02.jsf

```xml
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:ui="http://java.sun.com/jsf/facelets">
<head>
  <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
  <title><ui:insert name="title" /></title>
  <link rel="stylesheet" type="text/css" href="/css/style.css" />
</head>
<body>
  <ui:include src="/WEB-INF/includes/header.jsf" />
  <h2><ui:insert name="header" /></h2>
  <ui:insert name="content" />
  <ui:include src="/WEB-INF/includes/footer.jsf" />
</body>
</html>
```

In the following example, the template client document defines three content elements named title, header, and content using the <ui:define> tag. Their position in the client document is not important, because the template document determines where this content will be positioned.

templateClient02.jsf

```xml
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:ui="http://java.sun.com/jsf/facelets">
<head>
  <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
  <title>ui:composition example</title>
</head>
<body>
  <ui:composition template="/WEB-INF/templates/template02.jsf">
    <ui:define name="title">Facelet template example</ui:define>
    <ui:define name="header">Hello World</ui:define>
    <ui:define name="content">Page content goes here.</ui:define>
  </ui:composition>
  <ui:debug />
</body>
</html>
```

The following screenshot displays the result of a more complex Facelets UI composition template transformation when the browser requests the page named templateClient02.jsf. The next example demonstrates reusing a more advanced Facelets UI composition template. At this stage, we should have a good understanding of the basic concepts of Facelets templating and reuse.
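The named-placeholder substitution at the heart of this transformation (each named <ui:insert> replaced by the matching <ui:define> content) can be pictured as a map-driven text merge. The following is a conceptual Java sketch only, not the Facelets implementation; the `{insert:name}` placeholder syntax is an invented stand-in for `<ui:insert name="...">`:

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TemplateMerge {
    // Invented placeholder syntax standing in for <ui:insert name="...">.
    private static final Pattern INSERT = Pattern.compile("\\{insert:(\\w+)\\}");

    // Replace each {insert:name} placeholder in the template with the
    // content the "client" defined for that name (empty if undefined).
    static String merge(String template, Map<String, String> defines) {
        Matcher m = INSERT.matcher(template);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            m.appendReplacement(out,
                Matcher.quoteReplacement(defines.getOrDefault(m.group(1), "")));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        String template = "<title>{insert:title}</title><h2>{insert:header}</h2>";
        Map<String, String> defines =
            Map.of("title", "Facelets example", "header", "Hello World");
        System.out.println(merge(template, defines));
        // <title>Facelets example</title><h2>Hello World</h2>
    }
}
```

As in Facelets, the template alone fixes where content appears, and the client alone fixes what that content is; a name the client never defines simply contributes nothing.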
templateClient02a.jsf

```xml
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:ui="http://java.sun.com/jsf/facelets">
<head>
  <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
  <title>ui:composition example</title>
</head>
<body>
  <ui:composition template="/WEB-INF/templates/template02.jsf">
    <ui:define name="title">Facelet template example</ui:define>
    <ui:define name="header">Thanks for visiting!</ui:define>
    <ui:define name="content">We hope you enjoyed our site.</ui:define>
  </ui:composition>
  <ui:debug />
</body>
</html>
```

The next screenshot displays the result of the Facelets UI composition transformation when the browser requests templateClient02a.jsf. We can follow this pattern to make any number of JSF pages reuse the template in this manner to achieve a consistent look and feel across our web application.

Decorating the user interface

The Facelets framework supports the definition of smaller, reusable view elements that can be combined at runtime using the Facelets UI tag library. Some of these tags, such as the <ui:composition> and <ui:component> tags, trim their surrounding content. This behavior is desirable when including content from one complete XHTML document within another complete XHTML document. There are cases, however, when we do not want Facelets to trim the content outside the Facelets tag, such as when we are decorating content on one page with additional JSF or HTML markup defined in another page. For example, suppose there is a section of content in our XHTML document that we want to wrap or "decorate" with an HTML <div> element defined in another Facelets page. In this scenario, we want all the content on the page to be displayed, and we are simply surrounding part of the content with additional markup defined in another Facelets template. Facelets provides the <ui:decorate> tag for this purpose.
Decorating content on a Facelets page

The following example demonstrates how to decorate content on a Facelets page with markup from another Facelets page using the <ui:decorate> tag. Like the <ui:composition> tag typically used in Facelets templating, the <ui:decorate> tag has a template attribute that references a Facelets template document containing the markup to be included in the current document. The main difference between the two tags is that Facelets trims the content outside the <ui:composition> tag, but does not trim the content outside the <ui:decorate> tag.

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>ui:decorate example</title>
<link rel="stylesheet" type="text/css" href="css/style.css" />
</head>
<body>
Text before will stay.
<ui:decorate template="/WEB-INF/templates/box.jsf">
<span class="header">Information Box</span>
<p>This is the first line of information.</p>
<p>This is the second line of information.</p>
<p>This is the third line of information.</p>
</ui:decorate>
Text after will stay.
<ui:debug />
</body>
</html>

Creating a Facelets decoration

Let's examine the Facelets decoration template referenced by the previous example. The following source code demonstrates how to create a Facelets template that provides the decoration to surround the content on another page. As we are using a <ui:composition> tag, only the content inside this tag will be used. In this example, we declare an HTML <div> element with the "box" CSS style class that contains a single Facelets <ui:insert> tag. When Facelets renders the above Facelets page, it encounters the <ui:decorate> tag that references the box.jsf page. The <ui:decorate> tag will be merged together with the associated decoration template and then rendered in the view.
In this scenario, Facelets will insert the child content of the <ui:decorate> tag into the Facelets decoration template where the <ui:insert> tag is declared.

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>Box</title>
</head>
<body>
<ui:composition>
<div class="box">
<ui:insert />
</div>
</ui:composition>
</body>
</html>

The result is that our content is surrounded or "decorated" by the <div> element. Any text before or after the <ui:decorate> tag is still rendered on the page, as shown in the next screenshot: The included decoration is rendered as is, and is not nested inside a UI component, as demonstrated in the following Facelets debug page:
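Conceptually, the merge that <ui:decorate> performs can be sketched as filling the template's insertion slot with the client content. Again, this is an analogy in plain Java, not Facelets' actual implementation; the class, method, and slot marker names are invented for illustration.

```java
// Illustrative sketch only: Facelets performs this merge on a component tree,
// not on raw strings, but the substitution idea is the same. DecorationSketch,
// decorate(), and the {insert} marker are hypothetical, not Facelets API.
public class DecorationSketch {

    // Stand-in for box.jsf: markup with a single insertion slot.
    static final String TEMPLATE = "<div class=\"box\">{insert}</div>";

    // Merge the client content into the template, as <ui:decorate> does
    // where the template declares <ui:insert />.
    static String decorate(String template, String content) {
        return template.replace("{insert}", content);
    }

    public static void main(String[] args) {
        String page = "Text before will stay. "
                + decorate(TEMPLATE, "<p>This is the first line of information.</p>")
                + " Text after will stay.";
        // Unlike <ui:composition>, the text around the decoration survives.
        System.out.println(page);
    }
}
```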
Business Process Orchestration for SOA

Packt
30 Nov 2009
7 min read
Process Orchestration can simply be defined as the coordination of events and activities in a process at technical levels, to help achieve objectives laid down by the business. From an SOA perspective, orchestration involves direction and management of multiple component services to create a composite application or an end-to-end process. While orchestration tends to imply a single central engine performing the coordination act, the overlapping concept of choreography applies to sharing this coordination activity across multiple autonomous systems.

BPM Architecture and Role of Business Process Orchestration

While we are covering orchestration for SOA, it is worthwhile to also discuss a reference architecture for BPM, to understand how all components of technology fit together for modeling, executing, monitoring, and optimizing a business process. Following an architecture-led approach, as always, is a good way to initially guide BPM projects. It is not necessary to implement all aspects of this architecture from day one, but as we mature with our BPM implementation, its coverage can be increased to gain maximum value. From the perspective of this article, this reference architecture provides an understanding of how process execution and orchestration is a core activity in bridging the abstract business models and the underlying SOA infrastructure.

If you look at the following architecture for BPM, you will realize that it is divided into layers and groups. The vertical right side covers the aspects of modeling the processes, business rules, and services. The horizontal stack starts with the presentation layer, which allows multiple channels through which a company's customers, employees, and partners can interact. It could be a web portal, a hand-held device, and so on. These channels are supported by the process orchestration layer, which assists in orchestrating different aspects of a business process to provide information to the respective users in a channel.
In this layer, we will have a process engine that will take inputs from the presentation layer and interface with underlying technologies and services to complete an end-to-end process. This layer will be responsible for ensuring that information is gathered from all sources at the right time, to enable a smooth process flow. The requirements for process orchestration will be fed by the activities performed by the business modeling team and the development teams, working on the process models using standards such as BPMN and BPEL.

The orchestration layer will then interface with what we call 'Enterprise Services', which could be business services, technical services, or utility services, available either as basic services or as a composition of multiple services required to support the process orchestration. To enable access to these enterprise-level services, we will have an integration layer, or an Enterprise Service Bus, which will provide a standards-based interface to multiple systems within or outside the organization, and also to human service providers.

We also have a layer of data management services, covering the different high-level data sources that the BPM landscape will use. Examples are a service registry to manage multiple services, or a metadata store that manages information about all of the available data sources in the landscape to which this process has access.

On the vertical left side, we have the monitoring services, which will capture all the events generated by the process to help in analyzing the process performance against key performance indicators laid down by the business. As we move ahead in this article, we will use this reference architecture to understand how various technology components fit together. Let us now go ahead with an example to see how we can orchestrate a process using Oracle BPEL Process Manager.
Executing BPEL Processes in BPEL Process Manager

One of the fundamental benefits of using a BPM system for modeling a business process (in this case, the Oracle suite of products) is to allow models created using BPMN at the business level to be executed, and to automate manual processes. It also allows a business to evaluate gaps in current processes and identify remedial actions that can be implemented quickly using the execution engine.

When working on the example for the 'Portfolio Account Opening' process, we created the business process model using BPMN, analyzed the process, converted the BPMN model into a process blueprint to be shared by the development teams, filled the technical gaps, enriched the process, and finally deployed it to the BPEL Process Manager. Let us take the next step in understanding how our deployed process will work, and the functionality it offers to the users working on this process. Our aim is to make you aware of how process-driven SOA works for an end-to-end process.

This explanation assumes that you have some working knowledge of BPEL constructs such as activities, partner links, and so on. XSD and WSDL are used within the JDeveloper environment to create and deploy BPEL processes. For a detailed understanding of BPEL and its complex constructs, you may want to refer to these resources. For our case, we will use a simplistic representation of information, tasks, and moving from one task to another. Let us go through a series of steps to trigger an instance of the account opening process:

Initiation of the Process Instance

First, let us initiate the services related to SOA Suite. You can open them by selecting Start SOA Suite from the Program menu. After the SOA Suite services have started, we will open the SOA Launch Console, which provides a dashboard for all tools under the SOA Suite that can be accessed from this location.
To open the console, you can either enter the URL, which is typically http://localhost:8888 unless you specified otherwise during your installation, or access the console from the Program menu by selecting SOA Launch Console. The following screenshot shows what the SOA Suite console looks like. As you can see, in addition to all the product literature and technical guides, it provides links to the main components of the SOA Suite, including BPEL Control, which is highlighted in the image.

Open the Oracle BPEL Process Manager administration interface by clicking the BPEL Control link to access the details of the account opening process we deployed earlier. The first screen we see is the Process Dashboard, which provides us with information on the currently-deployed processes in the database. As we can see, we have our 'Portfolio Account Opening Process'. There are currently some instances of the process already running, and some instances have completed recently.

To test the flow of the process and its behavior, trigger a new process instance for the deployed process through this console. To do this, click on the 'Portfolio_Account_Opening_Process' link on the dashboard to access the details of our deployed process, and initiate a new instance. In a production environment, this step could be automated through a customized graphical interface. We will use the BPEL Process Manager to initiate this test process.

As you can see, the BPEL process Portfolio_Account_Opening_Process has been deployed from the development environment inside the BPEL Process Manager. To initiate the process instance, we have used a simple string as the input. In this case, we will just start the process by providing Open Account as the payload string, and posting the XML message to initiate the process instance. To check whether the process instance has started, we can view the visual flow for the instance by clicking the visual flow link.
The following visual flow shows that we have triggered the instance of the process, and it has reached a stage where the bank has received the application.
Skin Customization in JBoss RichFaces 3.3

Packt
30 Nov 2009
5 min read
Skinnability

Every RichFaces component supports skinnability, which means that just by changing the skin, we change the look of all of the components. That's very good for giving our application a consistent look, without repeating the same CSS values for each component every time. RichFaces still uses CSS, but it also enhances it in order to make it simpler to manage and maintain.

Customize skin parameters

A skin file contains the basic settings (such as fonts, colors, and so on) that we'll use for all the components; just by changing those settings, we can customize the basic look and feel of the RichFaces framework. As you might know, RichFaces comes with some built-in skins (and other external plug 'n' skin ones); you can start with those skins in order to create your own custom skin.

The built-in skins are: plain, emeraldTown, blueSky, wine, japanCherry, ruby, classic, and deepMarine.

The plug 'n' skin ones are: laguna, darkX, and glassX. The plug 'n' skin skins are packaged in external jar files (that you can download from the same location as the RichFaces framework) that must be added into the project in order to be able to use them.

Remember that the skin used by the application can be set as a context-param in the web.xml file:

<context-param>
<param-name>org.richfaces.SKIN</param-name>
<param-value>emeraldTown</param-value>
</context-param>

This is an example with the emeraldTown skin set: If we change the skin to japanCherry, we have the following screenshot: That's without changing a single line of CSS or XHTML!

Edit a basic skin

Now let's start creating our own basic skin. In order to do that, we are going to reuse one of the built-in skin files and change it. You can find the skin files in the richfaces-impl-3.x.x.jar file inside the META-INF/skins directory.
Let's open the jar file and then open, for example, the emeraldTown.skin.properties file, which looks like this (yes, the skin file is a .properties file!):

#Colors
headerBackgroundColor=#005000
headerGradientColor=#70BA70
headerTextColor=#FFFFFF
headerWeightFont=bold
generalBackgroundColor=#f1f1f1
generalTextColor=#000000
generalSizeFont=18px
generalFamilyFont=Arial, Verdana, sans-serif
controlTextColor=#000000
controlBackgroundColor=#ffffff
additionalBackgroundColor=#E2F6E2
shadowBackgroundColor=#000000
shadowOpacity=1
panelBorderColor=#C0C0C0
subBorderColor=#ffffff
tabBackgroundColor=#ADCDAD
tabDisabledTextColor=#67AA67
trimColor=#BBECBB
tipBackgroundColor=#FAE6B0
tipBorderColor=#E5973E
selectControlColor=#FF9409
generalLinkColor=#43BD43
hoverLinkColor=#FF9409
visitedLinkColor=#43BD43

# Fonts
headerSizeFont=18px
headerFamilyFont=Arial, Verdana, sans-serif
tabSizeFont=11
tabFamilyFont=Arial, Verdana, sans-serif
buttonSizeFont=18
buttonFamilyFont=Arial, Verdana, sans-serif
tableBackgroundColor=#FFFFFF
tableFooterBackgroundColor=#cccccc
tableSubfooterBackgroundColor=#f1f1f1
tableBorderColor=#C0C0C0
tableBorderWidth=2px

#Calendar colors
calendarWeekBackgroundColor=#f5f5f5
calendarHolidaysBackgroundColor=#FFEBDA
calendarHolidaysTextColor=#FF7800
calendarCurrentBackgroundColor=#FF7800
calendarCurrentTextColor=#FFEBDA
calendarSpecBackgroundColor=#E2F6E2
calendarSpecTextColor=#000000
warningColor=#FFE6E6
warningBackgroundColor=#FF0000
editorBackgroundColor=#F1F1F1
editBackgroundColor=#FEFFDA

#Gradients
gradientType=plain

In order to test it, let's open our application project, create a file called mySkin.skin.properties inside the directory /resources/WEB-INF/, and add the above text.
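Since a skin file is an ordinary Java properties file, its key/value format can be read with the standard java.util.Properties class. The sketch below is not RichFaces' actual internal skin loader; it is a minimal, self-contained illustration of the file format, with the class and method names invented for the example.

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

// Minimal sketch: a RichFaces skin file is a standard Java properties file,
// so it can be parsed with java.util.Properties. SkinLoaderSketch and
// loadSkin() are illustrative names, not part of the RichFaces API.
public class SkinLoaderSketch {

    static Properties loadSkin(String skinFileContent) {
        Properties skin = new Properties();
        try {
            skin.load(new StringReader(skinFileContent));
        } catch (IOException e) {
            // Cannot happen for an in-memory reader; rethrow defensively.
            throw new IllegalStateException(e);
        }
        return skin;
    }

    public static void main(String[] args) {
        // Two parameters from the listing above, in the same key=value format.
        String mySkin = "headerBackgroundColor=#005000\n"
                + "generalFamilyFont=Arial, Verdana, sans-serif\n";
        Properties skin = loadSkin(mySkin);
        // Each parameter becomes a key that the skin framework maps onto CSS.
        System.out.println(skin.getProperty("headerBackgroundColor")); // #005000
    }
}
```

This is why editing a skin is so lightweight: every entry is just a key/value pair, and changing a value changes every component that derives its CSS from that parameter.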
Then, let's open the build.xml file and edit it, adding the following code into the war target:

<copy tofile="${war.dir}/WEB-INF/classes/mySkin.skin.properties"
      file="${basedir}/resources/WEB-INF/mySkin.skin.properties"
      overwrite="true"/>

Also, as our application supports multiple skins, let's open the components.xml file and add support for it:

<property name="defaultSkin">mySkin</property>
<property name="availableSkins">
  <value>mySkin</value>
  <value>laguna</value>
  <value>darkX</value>
  <value>glassX</value>
  <value>blueSky</value>
  <value>classic</value>
  <value>ruby</value>
  <value>wine</value>
  <value>deepMarine</value>
  <value>emeraldTown</value>
  <value>japanCherry</value>
</property>

If you just want to select the new skin as the fixed skin, you would just edit the web.xml file and select the new skin by inserting its name into the context parameter (as explained before). Just to make a (bad looking, but understandable) example, let's change some parameters in the skin file:

#Colors
headerBackgroundColor=#005000
headerGradientColor=#70BA70
headerTextColor=#FFFFFF
headerWeightFont=bold
generalBackgroundColor=#f1f1f1
generalTextColor=#000000
generalSizeFont=18px
generalFamilyFont=Arial, Verdana, sans-serif
controlTextColor=#000000
controlBackgroundColor=#ffffff
additionalBackgroundColor=#E2F6E2
shadowBackgroundColor=#000000
shadowOpacity=1
panelBorderColor=#C0C0C0
subBorderColor=#ffffff
tabBackgroundColor=#ADCDAD
tabDisabledTextColor=#67AA67
trimColor=#BBECBB
tipBackgroundColor=#FAE6B0
tipBorderColor=#E5973E
selectControlColor=#FF9409
generalLinkColor=#43BD43
hoverLinkColor=#FF9409
visitedLinkColor=#43BD43

# Fonts
headerSizeFont=18px
headerFamilyFont=Arial, Verdana, sans-serif
tabSizeFont=11
tabFamilyFont=Arial, Verdana, sans-serif
buttonSizeFont=18
buttonFamilyFont=Arial, Verdana, sans-serif
tableBackgroundColor=#FFFFFF
tableFooterBackgroundColor=#cccccc
tableSubfooterBackgroundColor=#f1f1f1
tableBorderColor=#C0C0C0
tableBorderWidth=2px

#Calendar colors
calendarWeekBackgroundColor=#f5f5f5
calendarHolidaysBackgroundColor=#FFEBDA
calendarHolidaysTextColor=#FF7800
calendarCurrentBackgroundColor=#FF7800
calendarCurrentTextColor=#FFEBDA
calendarSpecBackgroundColor=#E2F6E2
calendarSpecTextColor=#000000
warningColor=#FFE6E6
warningBackgroundColor=#FF0000
editorBackgroundColor=#F1F1F1
editBackgroundColor=#FEFFDA

#Gradients
gradientType=plain

Here is a screenshot of what happened with the new skin:

How do I know which parameters to change? The official RichFaces Developer Guide contains, for every component, a table with the correspondences between the skin parameters and the CSS properties they are connected to.
Hibernate Types

Packt
27 Nov 2009
3 min read
Hibernate allows transparent persistence, which means the application is absolutely isolated from the underlying database storage format. Three players on the Hibernate scene implement this feature: the Hibernate dialect, Hibernate types, and HQL. The Hibernate dialect allows us to use a range of different databases, supporting different, proprietary variants of SQL and column types. In addition, HQL allows us to query persisted objects, regardless of their relational persisted form in the database.

Hibernate types are a representation of database SQL types. They provide an abstraction of the underlying database types, and prevent the application from getting involved with the actual database column types. They allow us to develop the application without worrying about the target database and the column types that the database supports. Instead, we get involved with mapping Java types to Hibernate types. The database dialect, as part of Hibernate, is responsible for transforming Java types to SQL types, based on the target database. This gives us the flexibility to change the database to one that may support different column types or SQL, without changing the application code.

Built-in types

Hibernate includes a rich and powerful range of built-in types. These types satisfy most needs of a typical application, providing a bridge between basic Java types and common SQL types. Java types mapped with these types range from basic, simple types, such as long and int, to large and complex types, such as Blob and Clob.
The following table categorizes Hibernate built-in types with corresponding Java and SQL types:

Java Type                   | Hibernate Type Name | SQL Type

Primitives
Boolean or boolean          | boolean             | BIT
                            | true_false          | CHAR(1) ('T' or 'F')
                            | yes_no              | CHAR(1) ('Y' or 'N')
Byte or byte                | byte                | TINYINT
char or Character           | character           | CHAR
double or Double            | double              | DOUBLE
float or Float              | float               | FLOAT
int or Integer              | integer             | INTEGER
long or Long                | long                | BIGINT
short or Short              | short               | SMALLINT

String
java.lang.String            | string              | VARCHAR
                            | character           | CHAR(1)
                            | text                | CLOB

Arbitrary Precision Numeric
java.math.BigDecimal        | big_decimal         | NUMERIC

Byte Array
byte[] or Byte[]            | binary              | VARBINARY

Time and Date
java.util.Date              | date                | DATE
                            | time                | TIME
                            | timestamp           | TIMESTAMP
java.util.Calendar          | calendar            | TIMESTAMP
                            | calendar_date       | DATE
java.sql.Date               | date                | DATE
java.sql.Time               | time                | TIME
java.sql.Timestamp          | timestamp           | TIMESTAMP

Localization
java.util.Locale            | locale              | VARCHAR
java.util.TimeZone          | timezone            |
java.util.Currency          | currency            |

Class Names
java.lang.Class             | class               | VARCHAR

Any Serializable Object
java.io.Serializable        | serializable        | VARBINARY

JDBC Large Objects
java.sql.Blob               | blob                | BLOB
java.sql.Clob               | clob                | CLOB
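A few rows of the table above can be sketched as a simple lookup from a Hibernate type name to its default SQL column type. This is only an illustration of the correspondence; in Hibernate itself, the database dialect holds the real, database-specific mappings, and the class and method names here are invented for the example.

```java
import java.util.Map;

// Sketch of a few rows from the table above as a lookup from Hibernate type
// name to a default SQL column type. Illustrative only: Hibernate's dialects
// perform the real, database-specific mapping. HibernateTypeSketch and
// sqlTypeFor() are hypothetical names, not Hibernate API.
public class HibernateTypeSketch {

    static final Map<String, String> HIBERNATE_TO_SQL = Map.of(
            "long", "BIGINT",
            "string", "VARCHAR",
            "big_decimal", "NUMERIC",
            "timestamp", "TIMESTAMP",
            "blob", "BLOB");

    static String sqlTypeFor(String hibernateTypeName) {
        return HIBERNATE_TO_SQL.get(hibernateTypeName);
    }

    public static void main(String[] args) {
        // The application maps Java types to Hibernate types; the dialect
        // then resolves the SQL column type for the target database.
        System.out.println(sqlTypeFor("string")); // VARCHAR
    }
}
```

Swapping the dialect is what lets the same application target a database with different column types, since only this last mapping step changes.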
Business Rules Management, BPM, and SOA

Packt
27 Nov 2009
13 min read
Introduction to Business Rules Management

Let us start by understanding some key concepts around business rules.

What are Business Rules?

Business rules can be defined as the key decisions and policies of the business. Rules are virtually everywhere in an organization; an example is the rule in a bank to deny a loan for a customer if his or her annual income is less than $15,000. We can generally categorize business rules under the following categories:

Business Policies: These are rules associated with general business policies of a company, for example, loan approval policies, escalation policies, and so on.

Constraints: These are the rules which the business has to keep in mind, and work within the scope of, while going about its operations. Rules associated with regulatory requirements fall under this category.

Computation: These are the rules associated with decisions involving any calculations, for example, discounting rules, premium adjustments, and so on.

Reasoning capabilities: These are the rules that apply logic and infer courses of action based on multiple criteria. For example, rules associated with the up-sell or cross-sell of products and services to customers based on their profile.

Allocation Rules: These are rules that determine the course of action for the process, based on information from the previous tasks. They also include rules that manage the receiving, assignment, routing, and tracking of work.

Business Rules Anatomy

To understand the anatomy of a business rule, we can divide a business rule primarily into the following four blocks:

Definitions of Terms: This helps in providing a vocabulary for expressing the rules. Defining a term acts as the category for the rules. For example, customer, car, claims, and so on define the entities for the business.

Facts: These are used to relate terms in definitions with each other. For example, a customer may apply for a claim.
Constraints: These are the constraints, limitations, or controls on how an organization wants to use and update the data. For example, for opening an account, a customer's passport details or social security details are required.

Inference: This basically applies logical assertions such as 'if X, then Y' to a fact, and infers new facts. For example, if we have a single account validation rule (if an applicant is a defaulter, then the applicant is high-risk), and we know that Harry (the applicant) has defaulted earlier on his payments for other bank services, we can infer that Harry is a high-risk customer.

Automating Business Rules

As we discuss the externalization and automation of business rules, it's important to understand the distinction between implicit and explicit rules. An implicit rule can be viewed as a rule that is part of a larger context within the system, like the many rules implemented inside traditional applications to realize decision logic, for example, assessing the risk level for a loan. Its implementation is usually part of the application it is being developed for, and is never considered beyond the scope of the application, perhaps to be re-used. Typically, in the IT world, these implicit rules are embedded within complex application code and spread across multiple systems, making it extremely difficult to introduce changes quickly, and without creating a domino effect across systems.

Some of these issues can be resolved by implementing a Business Rules Management System (BRMS) in collaboration with the BPM system in place. This allows the decision logic, which is being used by the process during its execution, to be driven by a central repository where all the rules are stored and managed. This repository provides a way to abstract the decision logic from the applications, and helps in managing this logic centrally, allowing for better management and flexibility for change and re-use.
Hence, these rules are explicit in nature. For the loan approval example, business rules such as these would traditionally be embedded in application code, and might appear in an application as follows:

public boolean checkAnnualIncome(Customer customer) {
    boolean declineLoan = false;
    int income = customer.getIncome();
    if (income < 10000) {
        declineLoan = true;
    }
    return declineLoan;
}

The above example shows that this rule is obviously difficult for the business users to understand. In today's world, with the need for an organization to be agile (considering our previous example), the business has to wait for weeks before a small change can be implemented by IT. What is required is the ability of the business users to define and control their own rules, and to be able to get the changes out in the market faster. Business Rules Management and related technology tries to solve this problem.

Automating Business Rules for Business Issues

Automation of business rules via a BRMS is ideal for use where the following issues are being faced by an organization:

Dynamism and Volatility: Companies need to repeatedly change business policies, procedures, and products to meet the market needs. In this case, the rules change very dynamically, and having a BRMS can help in implementing these changes faster, reducing the time to market and the cost of implementation.

Time to Market: In this case, the organization might want a particular set of changes to be released quickly due to market pressure, or to gain a competitive advantage. Even though the rules are not changed very often, a delay in their implementation could lead to a serious business loss. In this case, the organization needs the ability to get these changes in quickly, without roadblocks, which can be addressed by a BRMS.

Regulatory Compliance: Failure to comply with regulatory requirements such as Anti-Money Laundering (AML) laws can result in millions of dollars in fines, and legal issues for the organizations.
To solve these issues, institutions can combine business rules with SOA to create an effective strategy for enforcing compliance. Business rules technology helps in implementing these rules quickly, and helps keep them up-to-date across an enterprise.

Business Participation: There could be rules which might be better off being controlled and owned by the business users. In this case, a BRMS can expose certain rules to be managed and edited by selected business users, providing an easy-to-use interface. Rules related to product configuration, customer eligibility, discounts, and so on, are some examples where business users can manage the rules, and change them as required by changing scenarios.

Complexity: Some scenarios, such as complex product and service pricing, require extremely complex dependencies between several rules to implement the scenario logic. These kinds of rules are better suited for implementation inside a BRMS than in a procedural language, as is being done traditionally. Telecom fraud management, for example, is an area where rules management is being used along with BAM to identify potential fraud. There are similar applications in the credit card and banking industries.

Consistency: Rules managed centrally provide a more consistent way of managing policies requiring re-use and consistency across the enterprise. This is especially true in cases where inconsistency was an issue due to multiple applications, databases, and different lines of business.

Business Rules Management, BPM, and SOA

Business Rules Management, BPM, and SOA share a synergistic relationship, especially when used together to provide agility to an organization. The term 'Agility' can be defined as "the ability of an enterprise to sense and predict change in their environment and respond quickly, efficiently, and effectively to that change".
Agility requires the organization to be flexible enough in introducing change and in modifying its current operations, to achieve higher levels of performance or output. A process-driven approach to SOA allows business users to introduce changes to the process for faster execution, and at less cost. This value is amplified by using a business rules platform alongside process orchestration. If we look at the BPM reference architecture again, rules functionality features in various layers of the architecture: in the initial rules discovery phase, during process mapping, and in its orchestration in the SOA environment.

Business rules-related technologies have been in the market for a number of years now. However, with the acceptance of BPM and SOA as enablers for increasing an organization's agility, today's enterprises are increasingly looking at using rules management to externalize their rules. Business rules management helps automate decisions and apply policies within processes. Automation of these decisions requires determining the meaning of a given situation, and applying a business policy in response to this. Business rules platforms provide tools to define this 'reasoning' logic for use by developers as well as business analysts and business stakeholders.

Organizations are looking at Business Rules Management to deploy rules related to policy decisions, work allocation, compliance and control, business exception management, and even data validation. For example, a major financial services company uses business rules to apply privacy and anti-fraud policies to all of its transactions. Moreover, business rules are increasingly considered an asset for an organization, to be managed centrally and re-used across departments and systems, instead of being hard-coded into an application. So, it is important to ensure that business rules have a place in your SOA.
Carefully defining and exposing your rules as services will enable all of the applications and services within your architecture to have simple access to a common rules repository. From an SOA perspective, before beginning a business rules implementation, you should:

Incorporate a business rules platform into your SOA: This would be a service-enabled repository of your business rules, where instead of data you would maintain and execute rulesets using a business rules engine.

Create standards and best practices for developing business rules: To maximize benefits from your rules implementation, you should focus on developing common standards and best practices for the discovery, design, development, and interfacing of your rules.

Some of the best practices for writing and designing business rules are:

Declarative: Business rules should be declared, and not stated as procedures as in coding. How a rule will be enforced should not be part of a rule definition. For example, "If the customer is a premium customer, offer him a further 5% discount."

Precise: It's easy for business rule definitions to be misinterpreted due to the use of natural language syntax by the business. One business rule should be open to only one interpretation, and should be rephrased if it is found to be ambiguous.

Consistency and non-redundancy: Business rules should be consistent and should not conflict with other rules. Similarly, you should look out for business rules that are redundant.

Business Focused and Owned: Business rules should be declared using the business vocabulary, so that they can be understood by the relevant business stakeholders. Avoid using technical jargon in business rules. Also, business rules are best left under the ownership of the business community, as that is the source for the rules.
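As a concrete illustration of the 'declarative' and 'business-owned' practices above, the hard-coded income check shown in the earlier code listing could be externalized as rule data. This is only a hypothetical sketch, not a BRMS API: the class and field names are invented, and in a real system the rule definition would come from the central rules repository rather than being constructed in code.

```java
// Hypothetical sketch of externalizing the income check as rule data rather
// than code: the threshold lives in a rule definition that business users
// could edit, and the application only evaluates it. ExternalRuleSketch and
// IncomeRule are illustrative names, not part of any BRMS product.
public class ExternalRuleSketch {

    // A rule record that a rules repository might hand back.
    static class IncomeRule {
        final int minimumAnnualIncome;

        IncomeRule(int minimumAnnualIncome) {
            this.minimumAnnualIncome = minimumAnnualIncome;
        }

        // The declarative condition: decline if income is below the threshold.
        boolean declineLoan(int income) {
            return income < minimumAnnualIncome;
        }
    }

    public static void main(String[] args) {
        // In a real BRMS, the threshold would be loaded from the repository,
        // so the business could change it without redeploying the application.
        IncomeRule rule = new IncomeRule(10000);
        System.out.println(rule.declineLoan(8000));  // true: below the threshold
        System.out.println(rule.declineLoan(12000)); // false
    }
}
```

The design point is that only the data (the threshold) changes between rule versions, which is what makes the rule safe to hand over to business ownership.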
Key Considerations for Selecting a BRMS

The following are some key considerations when selecting a BRMS to work with BPM and SOA:

Standards-based Integration capability: The ability to integrate with the SOA landscape using a service layer.

Business User Interface: The ability to provide the capability for business users to access and modify business rules through a user-friendly interface.

Rule Language: The ability to provide support for natural languages for easily expressing a complex set of rules.

Performance: The ability to provide support for high-volume transactions for mission-critical applications, which is normally measured in terms of the number of rules processed per second.

Rules Monitoring and Reporting: The ability to support rules debugging, rules reporting, and real-time monitoring of rules.

Rules Repository: The ability to provide a centralized repository for storing all rule-specific artifacts. The repository should also support change management by storing different versions of rules, and providing audit capabilities.

Key components of a BRMS: A Brief Look into Oracle Business Rules

Typically, a BRMS will comprise four main components:

Business UI: This is a user interface component for writing and editing business rules. Typically, it will be a web-based interface for business users to log in and access existing business rules, create new ones, and so on.

Rules Development Environment: Developers will be working in this environment to convert business rules defined by business users into code that can be implemented in the business rules engine. This will also be the environment where the service layer for the rules will be defined and implemented for integration with other applications and SOA components.

Rules Repository: This will be a centralized repository where all rules-related information will be stored.
Rules execution engine: This is the heart of the rules management system and is responsible for executing the business rules in the runtime environment. In SOA terms, this component receives requests for rules processing from the business process orchestration environment, based on which it runs the appropriate rules and returns decision information to the orchestration layer.

Oracle also provides a suite of components under its Oracle Business Rules product to support rules management and execution, which are as follows:

Oracle Rule Author: Rule Author provides a web-based graphical authoring environment that enables the easy creation of business rules via a web browser. The application developer uses Rule Author to define a data model and an initial set of rules. The business analyst uses Rule Author either to work with the initial set of rules, or to modify and customize the initial set of rules according to business needs. Using Rule Author, a business analyst can create and customize rules with little or no assistance from a programmer.

Rules Engine: This is the heart of the rules system, executing and managing rules in a proper and efficient manner. It provides inference-based rule execution, based on the very popular Rete algorithm. The Rete algorithm is an efficient pattern-matching algorithm for rules and facts that stores partially-matched results in a single network of nodes in current working memory, allowing the rules engine to avoid unnecessary rechecking when facts are deleted, added, or modified. Oracle's rules engine provides a data-driven, forward-chaining system. This means that the facts determine which rules can be triggered. When a particular rule is triggered, based on pattern matching within a set of facts, the rule may add further new facts. The new facts are again run against the rules as an iterative process until an end state is reached.
This allows rules to be interlinked and triggered in a cycle, also referred to as an inference cycle. The rules engine also provides a web service interface to its SOA environment using 'Decision Services', which are available in a JDeveloper environment during the coding of business processes in BPEL. This can also be used to make a web service call to rules running in the rules engine. It also exposes a Rules API, based on JSR 94, a runtime specification that allows rules engines to integrate business rules applications with other applications in an organization.

Rule Repository: A rule repository is the database that stores business rules. The Oracle rules repository allows rules to be grouped as rulesets and makes them part of a rules dictionary in a central repository. These dictionaries can be versioned for better governance. Oracle's rules repository supports a WebDAV (Web Distributed Authoring and Versioning) repository and a file repository.

Rules SDK: This allows users to develop and integrate the Rules Repository into a custom authoring environment. This component also allows the development of a customized UI for business users to access and update the rules repository, if required.
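The inference cycle described above can be sketched with a minimal forward-chaining loop. This is only an illustration of the cycle, not of the Rete algorithm's node network, and the rule contents are invented: facts determine which rules fire, fired rules add new facts, and the process repeats until no rule can add anything (the end state):

```python
# Minimal forward-chaining sketch: each rule maps a required set of facts
# to a conclusion fact. Rules are re-evaluated until a fixed point.
RULES = [
    ({"premium_customer"}, "extra_discount"),
    ({"extra_discount", "large_order"}, "free_shipping"),
]

def infer(initial_facts):
    facts = set(initial_facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)   # a new fact may trigger other rules
                changed = True
    return facts

print(sorted(infer({"premium_customer", "large_order"})))
# ['extra_discount', 'free_shipping', 'large_order', 'premium_customer']
```

Note how "free_shipping" is only reachable through the intermediate fact "extra_discount" derived in an earlier pass: that chaining is exactly the inference cycle.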
Packt
19 Nov 2009
10 min read

Plotting data using Matplotlib: Part 1

The examples are:

- Plotting data from a database
- Plotting data from a web page
- Plotting the data extracted by parsing an Apache log file
- Plotting the data read from a comma-separated values (CSV) file
- Plotting extrapolated data using curve fitting
- Third-party tools using Matplotlib (NetworkX and mpmath)

Let's begin.

Plotting data from a database

Databases often tend to collect much more information than we can simply extract and watch in a tabular format (let's call it the "Excel sheet" report style). Databases not only use efficient techniques to store and retrieve data, but they are also very good at aggregating it. One suggestion we can give is to let the database do the work. For example, if we need to sum up a column, let's make the database sum the data, and not sum it up in the code. In this way, the whole process is much more efficient because:

- There is a smaller memory footprint for the Python code, since only the aggregate value is returned, not the whole result set needed to generate it
- The database has to read all the rows in any case. However, if it's smart enough, then it can sum values up as they are read
- The database can efficiently perform such an operation on more than one column at a time

The data source we're going to query is from an open source project: the Debian distribution. Debian has an interesting project called UDD, the Ultimate Debian Database, which is a relational database where a lot of information (either historical or current) about the distribution is collected and can be analyzed.
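The "let the database do the work" idea can be tried out without a PostgreSQL/UDD setup. The sketch below uses the standard-library sqlite3 module with an in-memory database (the table and rows are made up for illustration); the aggregation happens inside the database engine, so only the summary rows travel back to Python:

```python
# Aggregating in SQL instead of in Python, using an in-memory SQLite DB.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE upload_history (hour TEXT)")
cur.executemany("INSERT INTO upload_history VALUES (?)",
                [("00",), ("00",), ("13",), ("13",), ("13",)])

# count per hour computed by the database, not by a Python loop
cur.execute("SELECT hour, COUNT(*) FROM upload_history "
            "GROUP BY hour ORDER BY hour")
rows = cur.fetchall()
print(rows)   # [('00', 2), ('13', 3)]

cur.close()
conn.close()
```

The same GROUP BY pattern is what the UDD query in this example relies on, just against a much larger table.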
On the project website http://udd.debian.org/, we can find a full dump of the database (quite big, honestly) that can be downloaded and imported into a local PostgreSQL instance (refer to http://wiki.debian.org/UltimateDebianDatabase/CreateLocalReplica for import instructions). Now that we have a local replica of UDD, we can start querying it:

# module to access PostgreSQL databases
import psycopg2
# matplotlib pyplot module
import matplotlib.pyplot as plt

Since UDD is stored in a PostgreSQL database, we need psycopg2 to access it. psycopg2 is a third-party module available at http://initd.org/projects/psycopg

# connect to UDD database
conn = psycopg2.connect(database="udd")
# prepare a cursor
cur = conn.cursor()

We will now connect to the database server to access the udd database instance, and then open a cursor on the connection just created.

# this is the query we'll be making
query = """
select to_char(date AT TIME ZONE 'UTC', 'HH24'), count(*)
  from upload_history
 where to_char(date, 'YYYY') = '2008'
 group by 1
 order by 1"""

We have prepared the select statement to be executed on UDD. What we wish to do here is extract the number of packages uploaded to the Debian archive (per hour) in the whole year of 2008.

date AT TIME ZONE 'UTC': As the date field is of the type timestamp with time zone, it also contains time zone information, while we want something independent of the local time. This is the way to get a date in the UTC time zone.

group by 1: This is what we have encouraged earlier, that is, let the database do the work. We let the query return the already aggregated data, instead of coding it into the program.

# execute the query
cur.execute(query)
# retrieve the whole result set
data = cur.fetchall()

We execute the query and fetch the whole result set from it.
# close cursor and connection
cur.close()
conn.close()

Remember to always close the resources that we've acquired in order to avoid memory or resource leakage and to reduce the load on the server (removing connections that aren't needed anymore).

# unpack data in hours (first column) and
# uploads (second column)
hours, uploads = zip(*data)

The query result is a list of tuples (in this case, hour and number of uploads), but we need two separate lists: one for the hours and another with the corresponding number of uploads. zip() solves this: with *data, we unpack the list, passing its sublists as separate arguments to zip(), which in turn aggregates the elements at the same position in the parameters into separate lists. Consider the following example:

In [1]: zip(['a1', 'a2'], ['b1', 'b2'])
Out[1]: [('a1', 'b1'), ('a2', 'b2')]

To complete the code:

# graph code
plt.plot(hours, uploads)
# set the X limits to the 'hours' limit
plt.xlim(0, 23)
# set the X ticks every 2 hours
plt.xticks(range(0, 23, 2))
# draw a grid
plt.grid()
# set title, X/Y labels
plt.title("Debian packages uploads per hour in 2008")
plt.xlabel("Hour (in UTC)")
plt.ylabel("No. of uploads")

The previous code snippet is the standard plotting code, which results in the following screenshot:

From this graph we can see that in 2008, the main part of Debian package uploads came from European contributors. In fact, uploads were made mainly in the evening hours (European time), after the working day is over (as we can expect from a voluntary project).

Plotting data from the Web

Often, the information we need is not distributed in an easy-to-use format such as XML or a database export, but is available only on web sites. More and more often we find interesting data on a web page, and in that case we have to parse it to extract that information: this is called web scraping. In this example, we will parse a Wikipedia article to extract some data to plot.
The article is at http://it.wikipedia.org/wiki/Demografia_d'Italia and contains lots of information about Italian demography (it's in Italian because the English version lacks a lot of data); in particular, we are interested in the population evolution over the years. Probably the best-known Python module for web scraping is BeautifulSoup (http://www.crummy.com/software/BeautifulSoup/). It's a really nice library that gets the job done quickly, but there are situations (in particular with JavaScript embedded in the web page, as on Wikipedia) that prevent it from working. As an alternative, we find lxml quite productive (http://codespeak.net/lxml/). It's a library mainly used to work with XML (as the name suggests), but it can also be used with HTML (given their quite similar structures), and it is powerful and easy to use. Let's dig into the code now:

# to get the web pages
import urllib2
# lxml submodule for html parsing
from lxml.html import parse
# regular expression module
import re
# Matplotlib module
import matplotlib.pyplot as plt

Along with the Matplotlib module, we need the following modules:

urllib2: This is the module (from the standard library) that is used to access resources through a URL (we will download the web page with this).
lxml: This is the parsing library.
re: Regular expressions are needed to parse the returned data to extract the information we need. re is a module from the standard library, so we don't need to install a third-party module to use it.

# general urllib2 config
user_agent = 'Mozilla/5.0 (compatible; MSIE 5.5; Windows NT)'
headers = { 'User-Agent' : user_agent }
url = "http://it.wikipedia.org/wiki/Demografia_d'Italia"

Here, we prepare some configuration for urllib2; in particular, the user_agent header (used to access Wikipedia) and the URL of the page.

# prepare the request and open the url
req = urllib2.Request(url, headers=headers)
response = urllib2.urlopen(req)

Then we make a request for the URL and get the HTML back.
# we parse the webpage, getroot() returns the document root
doc = parse(response).getroot()

We parse the HTML using the parse() function of lxml.html and then we get the root element. XML can be seen as a tree, with a root element (the node at the top of the tree from which every other node descends) and a hierarchical structure of elements.

# find the data table, using css elements
table = doc.cssselect('table.wikitable')[0]

We leverage the structure of the HTML, accessing the first element of type table of class wikitable, because that's the table we're interested in.

# prepare data structures, will contain actual data
years = []
people = []

We prepare the lists that will contain the parsed data.

# iterate over the rows of the table, except first and last ones
for row in table.cssselect('tr')[1:-1]:

We can start parsing the table. Since there is a header and a footer in the table, we skip the first and the last line from the lines (selected by the tr tag) to loop over.

# get the row cells (we will use only the first two)
data = row.cssselect('td')

We get the elements with the td tag, which stands for table data: those are the cells in an HTML table.

# the first cell is the year
tmp_years = data[0].text_content()
# cleanup for cases like 'YYYY[N]' (date + footnote link)
tmp_years = re.sub('[.]', '', tmp_years)

We take the first cell, which contains the year, but we need to remove the additional characters (used by Wikipedia to link to footnotes).

# the second cell is the population count
tmp_people = data[1].text_content()
# cleanup from '.', used as separator
tmp_people = tmp_people.replace('.', '')

We also take the second cell, which contains the population for a given year. It's quite common in Italy to separate thousands in numbers with a '.' character: we have to remove them to obtain an appropriate value.
# append current data to data lists, converting to integers
years.append(int(tmp_years))
people.append(int(tmp_people))

We append the parsed values to the data lists, explicitly converting them to integer values.

# plot data
plt.plot(years, people)
# ticks every 10 years
plt.xticks(range(min(years), max(years), 10))
plt.grid()
# add a note for the 2001 Census
plt.annotate("2001 Census", xy=(2001, people[years.index(2001)]),
             xytext=(1986, 54.5*10**6),
             arrowprops=dict(arrowstyle='fancy'))

Running the example results in the following screenshot, which clearly shows why the annotation is needed:

In 2001, we had a national census in Italy, and that's the reason for the drop in that year: the values released by the National Institute for Statistics (and reported in the Wikipedia article) are just estimations of the population, while with a census we have a precise count of the people living in Italy.

Plotting data by parsing an Apache log file

Plotting data from a log file can be seen as the art of extracting information from it. Every service has a log format different from the others. There are some exceptions of similar or identical formats (for example, for services that come from the same development teams), but even then they may be customized, and we're back at the beginning. The main differences between log files are:

Field order: Some have time information at the beginning, others in the middle of the line, and so on
Field types: We can find several different data types such as integers, strings, and so on
Field meanings: For example, log levels can have very different meanings

From all the data contained in the log file, we need to extract the information we are interested in from the surrounding data that we don't need (and hence we skip). In our example, we're going to analyze the log file of one of the most common services: Apache. In particular, we will parse the access.log file to extract the total number of hits and the amount of data transferred per day.
Apache is highly configurable, and so is the log format. Our Apache configuration, contained in the httpd.conf file, has this log format:

"%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\""

This is a LogFormat specification, where:

%h: The host making the request
%l: The identity of the client (usually not available)
%u: The user making the request (usually not available)
%t: The time the request was received
%r: The request
%>s: The status code
%b: The size (in bytes) of the response sent to the client (excluding the headers)
%{Referer}i: The page from where the request originated (for example, the HTML page where a PNG image is requested)
%{User-Agent}i: The user agent used to make the request
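As a hedged sketch of what parsing one line in this format involves, the following regular expression splits an access.log line into the fields listed above (the pattern and field names are our own illustration, not the book's parsing code):

```python
# Parse one Apache "combined" log line into named fields with a regex.
import re

LOG_RE = re.compile(
    r'(?P<host>\S+) (?P<ident>\S+) (?P<user>\S+) \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<size>\S+) '
    r'"(?P<referer>[^"]*)" "(?P<agent>[^"]*)"'
)

line = ('127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] '
        '"GET /apache_pb.gif HTTP/1.0" 200 2326 '
        '"http://www.example.com/start.html" "Mozilla/4.08 [en] (Win98)"')

m = LOG_RE.match(line)
print(m.group("host"), m.group("status"), m.group("size"))
# 127.0.0.1 200 2326
```

Note that %b can be "-" when no body was sent, which is why the size field is matched as \S+ rather than \d+.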
Packt
19 Nov 2009
15 min read

Plotting data using Matplotlib: Part 2

Plotting data from a CSV file

A common format to export and distribute datasets is the Comma-Separated Values (CSV) format. For example, spreadsheet applications allow us to export a CSV from a working sheet, and some databases also allow for CSV data export. Additionally, it's a common format to distribute datasets on the Web. In this example, we'll be plotting the evolution of the world's population, divided by continents, between 1950 and 2050 (of course, the later figures are predictions), using a new type of graph: stacked bars. Using the data available at http://www.xist.org/earth/pop_continent.aspx (which fetches data from the official UN data at http://esa.un.org/unpp/index.asp), we have prepared the following CSV file:

Continent,1950,1975,2000,2010,2025,2050
Africa,227270,418765,819462,1033043,1400184,1998466
Asia,1402887,2379374,3698296,4166741,4772523,5231485
Europe,547460,676207,726568,732759,729264,691048
Latin America,167307,323323,521228,588649,669533,729184
Northern America,171615,242360,318654,351659,397522,448464
Oceania,12807,21286,31160,35838,42507,51338

In the first line, we can find the header with a description of what the data in the columns represent. The other lines contain the continent's name and its population (in thousands) for the given years. There are several ways to parse a CSV file, for example:

NumPy's loadtxt() (what we are going to use here)
Matplotlib's mlab.csv2rec()
The csv module (in the standard library)

but we decided to go with loadtxt() because it's very powerful (and it's what Matplotlib is standardizing on). Let's look at how we can plot it then:

# for file opening made easier
from __future__ import with_statement

We need this because we will use the with statement to read the file.
# numpy
import numpy as np

NumPy is used to load the CSV and for its useful array data type.

# matplotlib plotting module
import matplotlib.pyplot as plt
# matplotlib colormap module
import matplotlib.cm as cm
# needed for formatting Y axis
from matplotlib.ticker import FuncFormatter
# Matplotlib font manager
import matplotlib.font_manager as font_manager

In addition to the classic pyplot module, we need other Matplotlib submodules:

cm (color map): Considering the way we're going to prepare the plot, we need to specify the color map of the graphical elements
FuncFormatter: We will use this to change the way the Y-axis labels are displayed
font_manager: We want to have a legend with a smaller font, and font_manager allows us to do that

def billions(x, pos):
    """Formatter for Y axis, values are in billions"""
    return '%1.fbn' % (x*1e-6)

This is the function that we will use to format the Y-axis labels. Our data is in thousands; therefore, by dividing it by one million, we obtain values in the order of billions. The function is called for every label to draw, passing in the label value and the position.

# bar width
width = .8

As said earlier, we will plot bars, and here we define their width. The following is the parsing code. We know that it's a bit hard to follow (the data preparation code is usually the hardest), but we will show how powerful it is.

# open CSV file
with open('population.csv') as f:

The function we're going to use, NumPy's loadtxt(), is able to receive either a filename or a file descriptor, as in this case. We have to open the file here because we have to strip the header line from the rest of the file and set up the data parsing structures.

# read the first line, splitting the years
years = map(int, f.readline().split(',')[1:])

Here we read the first line, the header, and extract the years. We do that by calling the split() function and then mapping the int() function to the resulting list, from the second element onwards (as the first one is a string).
# we prepare the dtype for extracting data; it's made of:
# <1 string field> <len(years) integer fields>
dtype = [('continents', 'S16')] + [('', np.int32)]*len(years)

NumPy is flexible enough to allow us to define new data types. Here, we are creating one ad hoc for our data lines: a string (of a maximum of 16 characters) and as many integers as the length of the years list. Also note how the first element has a name, continents, while the integers have none: we will need this in a bit.

# we load the file, setting the delimiter and the dtype above
y = np.loadtxt(f, delimiter=',', dtype=dtype)

With the new data type, we can actually call loadtxt(). Here is a description of the parameters:

f: This is the file descriptor. Please note that it now contains all the lines except the first one (which we've read above) containing the headers, so no data is lost.
delimiter: By default, loadtxt() expects the delimiter to be spaces, but since we are parsing a CSV file, the separator is a comma.
dtype: This is the data type that is applied to the text we read. By default, loadtxt() tries to match against float values.

# "map" the resulting structure to be easily accessible:
# the first column (made of strings) is called 'continents'
# the remaining values are added to the 'data' sub-matrix
# where the real data are
y = y.view(np.dtype([('continents', 'S16'),
                     ('data', np.int32, len(years))]))

Here we're using a trick: we view the resulting data structure as made up of two parts, continents and data. It's similar to the dtype that we defined earlier, but with an important difference: now the integer values are mapped to a single field name, data. This results in the column continents with all the continents' names, and the matrix data that contains the years' values for each row of the file.

data = y['data']
continents = y['continents']

We can separate the data and the continents parts into two variables for easier usage in the code.
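The structured-dtype trick above can be tried in isolation. The following small demo uses inline data instead of population.csv (the figures are the first two rows of the CSV shown earlier); unnamed fields in a NumPy dtype get automatic names, and view() regroups the integer columns under a single 'data' field:

```python
# Self-contained demo of loadtxt() with a structured dtype plus view().
import io
import numpy as np

text = io.StringIO("Africa,227270,418765\nAsia,1402887,2379374\n")

# one named string field plus two anonymous int32 fields
dtype = [('continents', 'S16')] + [('', np.int32)] * 2
y = np.loadtxt(text, delimiter=',', dtype=dtype)

# regroup the integer columns into one sub-array field called 'data'
y = y.view(np.dtype([('continents', 'S16'), ('data', np.int32, 2)]))

print(y['continents'][0])   # b'Africa'
print(y['data'][1][0])      # 1402887
```

The view works because both dtypes describe the same number of bytes per row (16 for the string plus 4 per integer); only the grouping of the fields changes, not the underlying data.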
# prepare the bottom array
bottom = np.zeros(len(years))

We prepare an array of zeros of the same length as years. As said earlier, we plot stacked bars, so each dataset is plotted over the previous ones, and thus we need to know where the bars below finish. The bottom array keeps track of this, containing the height of the bars already plotted.

# for each line in data
for i in range(len(data)):

Now that we have our information in data, we can loop over it.

# create the bars for each element, on top of the previous bars
bt = plt.bar(range(len(data[i])), data[i], width=width,
             color=cm.hsv(32*i), label=continents[i],
             bottom=bottom)

and create the stacked bars. Some important notes:

We select the i-th row of data and plot a bar according to its elements' sizes (data[i]) with the chosen width.
As the bars are generated in different loop iterations, their colors would all be the same. To avoid this, we use a color map (in this case hsv), selecting a different color at each iteration, so the sub-bars will have different colors.
We label each bar set with the relative continent's name (useful for the legend).
As we have said, they are stacked bars. In fact, every iteration adds a piece of the global bars. To do so, we need to know where to start drawing the bar from (the lower limit), and bottom does this: it contains the value at which to start drawing the current bar.

# update the bottom array
bottom += data[i]

We update the bottom array. By adding the current data line, we know where the bottom will be to plot the next bars on top of it.

# label the X ticks with years
plt.xticks(np.arange(len(years))+width/2,
           [int(year) for year in years])

We then add the ticks' labels, the years elements, right in the middle of the bars.

# some information on the plot
plt.xlabel('Years')
plt.ylabel('Population (in billions)')
plt.title('World Population: 1950 - 2050 (predictions)')

We add some information to the graph.
# draw a legend, with a smaller font
plt.legend(loc='upper left',
           prop=font_manager.FontProperties(size=7))

We now draw a legend in the upper-left position with a small font (to better fit the empty space).

# apply the custom function as Y axis formatter
plt.gca().yaxis.set_major_formatter(FuncFormatter(billions))

Finally, we change the Y-axis label formatter to use the custom formatting function that we defined earlier. The result is the next screenshot, where we can see the composition of the world population divided by continents:

In the preceding screenshot, the whole bar represents the total world population, and the sections in each bar tell us how much a continent contributes to it. Also observe how the custom color map works: from bottom to top, we have represented Africa in red, Asia in orange, Europe in light green, Latin America in green, Northern America in light blue, and Oceania in blue (barely visible at the top of the bars).

Plotting extrapolated data using curve fitting

While plotting the CSV values, we saw that some columns represent predictions of the world population in the coming years. We'd like to show how to obtain such predictions using the mathematical process of extrapolation, with the help of curve fitting. Curve fitting is the process of constructing a curve (a mathematical function) that better fits a series of data points. This process is related to two other concepts:

interpolation: A method of constructing new data points within the range of a known set of points
extrapolation: A method of constructing new data points outside a known set of points

The results of extrapolation are subject to a greater degree of uncertainty and are influenced a lot by the fitting function that is used.
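The difference between interpolation and extrapolation can be shown with a tiny numeric sketch. We use exactly linear data (y = 2x + 1, values our own) so the outcome is easy to verify: evaluating the fitted polynomial inside the known range is interpolation, outside it is extrapolation:

```python
# Fit a degree-1 polynomial to known points, then evaluate it at new x.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0                # the known set of points

c = np.polyfit(x, y, 1)          # least-squares fit, degree 1
p = np.poly1d(c)                 # turn coefficients into a callable

print(round(float(p(1.5)), 6))   # 4.0  (interpolation: inside [0, 3])
print(round(float(p(10.0)), 6))  # 21.0 (extrapolation: outside [0, 3])
```

With noisy real-world data the fit is no longer exact, and the extrapolated values drift further from reality the farther we move from the known range, which is exactly the uncertainty mentioned above.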
So it works this way:

First, a known set of measures is passed to the curve fitting procedure, which computes a function to approximate these values
With this function, we can compute additional values that are not present in the original dataset

Let's first approach curve fitting with a simple example:

# NumPy and Matplotlib
import numpy as np
import matplotlib.pyplot as plt

These are the classic imports.

# the known points set
data = [[2,2], [5,0], [9,5], [11,4], [12,7], [13,11], [17,12]]

This is the data we will use for curve fitting. They are points on a plane (so each has an X and a Y component).

# we extract the X and Y components from the previous points
x, y = zip(*data)

We aggregate the X and Y components into two distinct lists.

# plot the data points with a black cross
plt.plot(x, y, 'kx')

Then we plot the original dataset as black crosses on the Matplotlib image.

# we want a bit more data and more fine grained for
# the fitting functions
x2 = np.arange(min(x)-1, max(x)+1, .01)

We prepare a new array for the X values, because we wish to have a wider set of values (one unit to the right and one to the left of the original list) and a finer grain to plot the fitting functions nicely.

# line styles for the polynomials
styles = [':', '-.', '--']

To differentiate better between the polynomial lines, we now define their styles list.

# getting style and count one at a time
for d, style in enumerate(styles):

Then we loop over that list, also keeping the item count.

# degree of the polynomial
deg = d + 1

We define the actual polynomial degree.

# calculate the coefficients of the fitting polynomial
c = np.polyfit(x, y, deg)

Then we compute the coefficients of the fitting polynomial, whose general format is:

c[0]*x**deg + c[1]*x**(deg-1) + ... + c[deg]

# we evaluate the fitting function against x2
y2 = np.polyval(c, x2)

Here, we generate the new values by evaluating the fitting polynomial against the x2 array.
# and then we plot it
plt.plot(x2, y2, label="deg=%d" % deg, linestyle=style)

Then we plot the resulting function, adding a label that indicates the degree of the polynomial and using a different style for each line.

# show the legend
plt.legend(loc='upper left')

We then show the legend, and the final result is shown in the next screenshot:

Here, the polynomial with degree=1 is drawn as a dotted blue line, the one with degree=2 is a dash-dot green line, and the one with degree=3 is a dashed red line. We can see that the higher the degree, the better the fit of the function against the data.

Let's now revert to our main intention: trying to provide an extrapolation for the population data. First a note: we take the values for 2010 as real data and not predictions (well, we are quite near to that year), else we would have very few values with which to create a realistic extrapolation. Let's see the code:

# for file opening made easier
from __future__ import with_statement
# numpy
import numpy as np
# matplotlib plotting module
import matplotlib.pyplot as plt
# matplotlib colormap module
import matplotlib.cm as cm
# Matplotlib font manager
import matplotlib.font_manager as font_manager

# bar width
width = .8

# open CSV file
with open('population.csv') as f:
    # read the first line, splitting the years
    years = map(int, f.readline().split(',')[1:])
    # we prepare the dtype for extracting data; it's made of:
    # <1 string field> <6 integer fields>
    dtype = [('continents', 'S16')] + [('', np.int32)]*len(years)
    # we load the file, setting the delimiter and the dtype above
    y = np.loadtxt(f, delimiter=',', dtype=dtype)
    # "map" the resulting structure to be easily accessible:
    # the first column (made of strings) is called 'continents'
    # the remaining values are added to the 'data' sub-matrix
    # where the real data are
    y = y.view(np.dtype([('continents', 'S16'),
                         ('data', np.int32, len(years))]))

# extract fields
data = y['data']
continents = y['continents']

This is the same code that was used for the CSV example (reported here for completeness).

x = years[:-2]
x2 = years[-2:]

We are dividing the years into two groups: before and after 2010. This translates to splitting off the last two elements of the years list. What we are going to do here is prepare the plot in two phases:

First, we plot the data we consider certain values
After this, we plot the data from the UN predictions next to our extrapolations

# prepare the bottom array
b1 = np.zeros(len(years)-2)

We prepare the array (made of zeros) for the bottom argument of bar().

# for each line in data
for i in range(len(data)):
    # select all the data except the last 2 values
    d = data[i][:-2]

For each data line, we extract the information we need, so we remove the last two values.

# create bars for each element, on top of the previous bars
bt = plt.bar(range(len(d)), d, width=width,
             color=cm.hsv(32*(i)), label=continents[i],
             bottom=b1)
# update the bottom array
b1 += d

Then we plot the bar and update the bottom array.

# prepare the bottom arrays
b2_1, b2_2 = np.zeros(2), np.zeros(2)

We need two arrays because we will display two bars for the same year: one from the CSV and the other from our fitting function.

# for each line in data
for i in range(len(data)):
    # extract the last 2 values
    d = data[i][-2:]

Again, for each line in the data matrix, we extract the last two values, which are needed to plot the bar for the CSV data.

# select the data to compute the fitting function
y = data[i][:-2]

Along with the other values needed to compute the fitting polynomial.

# use a polynomial of degree 3
c = np.polyfit(x, y, 3)

Here, we set up a polynomial of degree 3; there is no need for higher degrees.

# create a function out of those coefficients
p = np.poly1d(c)

This method constructs a polynomial starting from the coefficients that we pass as a parameter.

# compute p on x2 values (we need integers, so the map)
y2 = map(int, p(x2))

We use the polynomial defined earlier to compute its values for x2.
We also map the resulting values to integers, as the bar() function expects them for the height.

# create bars for each element, on top of the previous bars
bt = plt.bar(len(b1)+np.arange(len(d)), d, width=width/2,
             color=cm.hsv(32*(i)), bottom=b2_1)

We draw a bar for the data from the CSV. Note how the width is half of that of the other bars. This is because in the same width we will draw the two sets of bars, for a better visual comparison.

# create the bars for the extrapolated values
bt = plt.bar(len(b1)+np.arange(len(d))+width/2, y2, width=width/2,
             color=cm.bone(32*(i+2)), bottom=b2_2)

Here, we plot the bars for the extrapolated values, using a dark color map so that we have an even better separation between the two datasets.

# update the bottom arrays
b2_1 += d
b2_2 += y2

We update both bottom arrays.

# label the X ticks with years
plt.xticks(np.arange(len(years))+width/2,
           [int(year) for year in years])

We add the years as ticks for the X-axis.

# draw a legend, with a smaller font
plt.legend(loc='upper left',
           prop=font_manager.FontProperties(size=7))

To avoid a very big legend, we used only the labels for the data from the CSV, skipping the extrapolated values; we believe it's pretty clear what they're referring to. Here is the screenshot that is displayed on executing this example:

The conclusion we can draw from this is that the United Nations uses a different function to prepare the predictions, especially because they have a continuous set of information, and they can also take into account other environmental circumstances while preparing such predictions.

Tools using Matplotlib

Given that it has an easy and powerful API, Matplotlib is also used inside other programs and tools when plotting is needed. We are about to present a couple of these tools:

NetworkX
mpmath
Advanced Matplotlib: Part 1

Packt
19 Nov 2009
7 min read
The basis for all of these topics is the object-oriented interface.

Object-oriented versus MATLAB styles

We have seen a lot of examples, and in all of them we used the matplotlib.pyplot module to create and manipulate the plots, but this is not the only way to make use of the Matplotlib plotting power. There are three ways to use Matplotlib:

pyplot: The module used so far in this article
pylab: A module to merge Matplotlib and NumPy together in an environment closer to MATLAB
Object-oriented way: The Pythonic way to interface with Matplotlib

Let's first elaborate a bit about the pyplot module: pyplot provides a MATLAB-style, procedural, state-machine interface to the underlying object-oriented library in Matplotlib. A state machine is a system with a global status, where each operation performed on the system changes its status. matplotlib.pyplot is stateful because the underlying engine keeps track of the current figure and plotting area information, and plotting functions change that information. To make it clearer, we did not use any object references during our plotting; we just issued a pyplot command, and the changes appeared in the figure. At a higher level, matplotlib.pyplot is a collection of commands and functions that make Matplotlib behave like MATLAB (for plotting). This is really useful when doing interactive sessions, because we can issue a command and see the result immediately, but it has several drawbacks when we need something more, such as low-level customization or application embedding. If we remember, Matplotlib started as an alternative to MATLAB, where we have at hand both numerical and plotting functions. A similar interface exists for Matplotlib, and its name is pylab. pylab (do you see the similarity in the names?)
is a companion module, installed next to matplotlib, that merges the matplotlib.pyplot (for plotting) and numpy (for mathematical functions) modules in a single namespace to provide an environment as near to MATLAB as possible, so that the transition would be easy. We, along with the authors of Matplotlib, discourage the use of pylab, other than for proof-of-concept snippets. While being rather simple to use, it teaches developers the wrong way to use Matplotlib. The third way to use Matplotlib is through the object-oriented interface (OO, from now on). This is the most powerful way to write Matplotlib code because it allows for complete control of the result; however, it is also the most complex. This is the Pythonic way to use Matplotlib, and it's highly encouraged when programming with Matplotlib rather than working interactively. We will use it a lot from now on, as it's needed to go deep down into Matplotlib. Please allow us to highlight again the preferred style that the author of this article and the authors of Matplotlib want to enforce: a bit of pyplot will be used, in particular for convenience functions, and the remaining plotting code is done either with the OO style or with pyplot, with numpy explicitly imported and used for numerical functions. In this preferred style, the initial imports are:

import matplotlib.pyplot as plt
import numpy as np

In this way, we know exactly which module the function we use comes from (due to the module prefix), and it's exactly what we've always done in the code so far. Now, let's present the same piece of code expressed in the three possible forms which we just described.
First, we present it in the pyplot-only style:

In [1]: import matplotlib.pyplot as plt
In [2]: import numpy as np
In [3]: x = np.arange(0, 10, 0.1)
In [4]: y = np.random.randn(len(x))
In [5]: plt.plot(x, y)
Out[5]: [<matplotlib.lines.Line2D object at 0x1fad810>]
In [6]: plt.title('random numbers')
In [7]: plt.show()

The preceding code snippet results in:

Now, let's see how we can do the same thing using the pylab interface:

$ ipython -pylab
...
In [1]: x = arange(0, 10, 0.1)
In [2]: y = randn(len(x))
In [3]: plot(x, y)
Out[3]: [<matplotlib.lines.Line2D object at 0x4284dd0>]
In [4]: title('random numbers')
In [5]: show()

Note that ipython -pylab is not the same as running ipython and then:

from pylab import *

This is because ipython's -pylab switch, in addition to importing everything from pylab, also enables a specific ipython threading mode so that both the interactive interpreter and the plot window can be active at the same time. Finally, let's make the same chart by using the OO style, but with some pyplot convenience functions:

In [1]: import matplotlib.pyplot as plt
In [2]: import numpy as np
In [3]: x = np.arange(0, 10, 0.1)
In [4]: y = np.random.randn(len(x))
In [5]: fig = plt.figure()
In [6]: ax = fig.add_subplot(111)
In [7]: l, = plt.plot(x, y)
In [8]: t = ax.set_title('random numbers')
In [9]: plt.show()

The pylab code is the simplest, pyplot is in the middle, and the OO style is the most complex and verbose. As the Python Zen teaches us, "Explicit is better than implicit" and "Simple is better than complex", and those statements are particularly true for this example: for simple interactive sessions, pylab or pyplot are the perfect choice because they hide a lot of complexity, but if we need something more advanced, then the OO API makes it clearer where things are coming from, and what's going on. This expressiveness will be appreciated when we embed Matplotlib inside GUI applications.
From now on, we will start presenting our code using the OO interface mixed with some pyplot functions.

A brief introduction to Matplotlib objects

Before we can go on in a productive way, we need to briefly introduce the Matplotlib objects that compose a figure. Let's see, from the higher levels to the lower ones, how the objects are nested:

FigureCanvas: Container class for the Figure instance
Figure: Container for one or more Axes instances
Axes: The rectangular areas that hold the basic elements, such as lines, text, and so on

Our first (simple) example of OO Matplotlib

In the previous pieces of code, we transformed this:

...
In [5]: plt.plot(x, y)
Out[5]: [<matplotlib.lines.Line2D object at 0x1fad810>]
...

into:

...
In [7]: l, = plt.plot(x, y)
...

The new code uses an explicit reference, allowing a lot more customization. As we can see in the first piece of code, the plot() function returns a list of Line2D instances, one for each line (in this case, there is only one), so in the second snippet, l is a reference to the line object, and every operation allowed on Line2D can be done using l. For example, we can set the line color with:

l.set_color('red')

instead of using the keyword argument to plot(), so the line information can be changed after the plot() call.

Subplots

In the previous section, we saw a couple of important functions without introducing them. Let's have a look at them now:

fig = plt.figure(): This function returns a Figure, where we can add one or more Axes instances.
ax = fig.add_subplot(111): This function returns an Axes instance, where we can plot (as done so far), and this is also the reason why we call the variable referring to that instance ax (from Axes).

This is a common way to add an Axes to a Figure, but add_subplot() does a bit more: it adds a subplot. So far we have only seen a Figure with one Axes instance, so only one area where we can draw, but Matplotlib allows more than one.
add_subplot() takes three parameters:

fig.add_subplot(numrows, numcols, fignum)

where:

numrows represents the number of rows of subplots to prepare
numcols represents the number of columns of subplots to prepare
fignum varies from 1 to numrows*numcols and specifies the current subplot (the one used now)

Basically, we describe a matrix of numrows*numcols subplots that we want in the Figure; please note that fignum is 1 at the upper-left corner of the Figure and is equal to numrows*numcols at the bottom-right corner. The following grid should provide a visual explanation of this, for numrows=2, numcols=2:

fignum=1 | fignum=2
fignum=3 | fignum=4
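As a small standalone sketch of this layout (using the Agg backend, an assumption made here so that no window is needed), the 2x2 grid above can be built in a loop; fignum runs row by row from the top-left:

```python
import matplotlib
matplotlib.use("Agg")  # off-screen backend; no GUI required
import matplotlib.pyplot as plt

fig = plt.figure()
for fignum in range(1, 5):              # 1..4, row-major order
    ax = fig.add_subplot(2, 2, fignum)  # numrows=2, numcols=2
    ax.set_title('subplot %d' % fignum)
```

After the loop, fig.axes holds the four Axes instances in the order they were added.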

Advanced Matplotlib: Part 2

Packt
19 Nov 2009
10 min read
Plotting dates

Sooner or later, we all have had the need to plot some information over time, be it for the bank account balance each month, the total web site accesses for each day of the year, or one of many other reasons. Matplotlib has an ad hoc plotting function for dates, plot_date(), that treats data on the X-axis, the Y-axis, or both as dates, labeling the axis accordingly. As usual, we now present an example, and we will discuss it later:

In [1]: import matplotlib as mpl
In [2]: import matplotlib.pyplot as plt
In [3]: import numpy as np
In [4]: import datetime as dt
In [5]: dates = [dt.datetime.today() + dt.timedelta(days=i)
   ...:          for i in range(10)]
In [6]: values = np.random.rand(len(dates))
In [7]: plt.plot_date(mpl.dates.date2num(dates), values, linestyle='-');
In [8]: plt.show()

First, a note about the linestyle keyword argument: without it, there is no line connecting the markers, which are displayed alone. We created the dates array using timedelta(), a datetime function that helps us define a date interval (10 days in this case). Note how we had to convert our date values using the date2num() function. This is because Matplotlib represents dates as float values corresponding to the number of days since 0001-01-01 UTC. Also note how the X-axis labels, the ones that contain date values, are badly rendered. Matplotlib provides ways to address the previous two points: date formatting and conversion, and axes formatting.
Date formatting

Commonly, in Python programs, dates are represented as datetime objects, so we first have to convert other data values into datetime objects, sometimes by using the dateutil companion module, for example:

import datetime
date = datetime.datetime(2009, 3, 28, 11, 34, 59, 12345)

or:

import dateutil.parser
datestrings = ['2008-07-18 14:36:53.494013',
               '2008-07-20 14:37:01.508990',
               '2008-07-28 14:49:26.183256']
dates = [dateutil.parser.parse(s) for s in datestrings]

Once we have the datetime objects, in order to let Matplotlib use them, we have to convert them into floating point numbers that represent the number of days since 0001-01-01 00:00:00 UTC. To do that, Matplotlib itself provides several helper functions, contained in the matplotlib.dates module:

date2num(): This function converts one or a sequence of datetime objects to float values representing days since 0001-01-01 00:00:00 UTC (the fractional parts represent hours, minutes, and seconds)
num2date(): This function converts one or a sequence of float values representing days since 0001-01-01 00:00:00 UTC to datetime objects (or a sequence, if the input is a sequence)
drange(dstart, dend, delta): This function returns a date range (a sequence) of float values in Matplotlib date format; dstart and dend are datetime objects, while delta is a datetime.timedelta instance

Usually, what we will end up doing is converting a sequence of datetime objects into a Matplotlib representation, such as:

dates = ...  # a list of datetime objects
mpl_dates = matplotlib.dates.date2num(dates)

drange() can be useful in situations like this one:

import matplotlib as mpl
from matplotlib import dates
import datetime as dt

date1 = dt.datetime(2008, 9, 23)
date2 = dt.datetime(2009, 4, 12)
delta = dt.timedelta(days=10)
dates = mpl.dates.drange(date1, date2, delta)

where dates will be a sequence of floats starting at date1 and ending at date2, with a step of delta between consecutive items of the list.
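Here is a quick standalone check of these conversion helpers (the Agg backend is an assumption so that no display is needed). One caveat: recent Matplotlib versions count days from a different epoch than 0001-01-01, so treat the float only as an opaque day count; num2date() also returns timezone-aware datetime objects.

```python
import datetime as dt
import matplotlib
matplotlib.use("Agg")
from matplotlib import dates as mdates

d = dt.datetime(2008, 9, 23)
num = mdates.date2num(d)     # datetime -> float (days since the epoch)
back = mdates.num2date(num)  # float -> datetime again

# ten days with a one-day step -> ten float values
seq = mdates.drange(dt.datetime(2008, 9, 23),
                    dt.datetime(2008, 10, 3),
                    dt.timedelta(days=1))
```

The round trip preserves the calendar date, and consecutive drange() values differ by exactly one day.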
Axes formatting with axes tick locators and formatters

As we have already seen, the X labels in the first image are not that nice looking. We would expect Matplotlib to allow a better way to label the axis, and indeed, there is. The solution is to change the two parts that form the axis ticks: locators and formatters. Locators control the ticks' positions, while formatters control the formatting of the labels. Both have a major and a minor mode: the major locator and formatter are active by default and are the ones we commonly see, while the minor mode can be turned on by passing a relative locator or formatter function (minors are turned off by default by assigning NullLocator and NullFormatter to them). While this is a general tuning operation and can be applied to all Matplotlib plots, there are some specific locators and formatters for date plotting, provided by matplotlib.dates:

MinuteLocator, HourLocator, DayLocator, WeekdayLocator, MonthLocator, and YearLocator are all the locators available; each places a tick at the time interval specified by its name. For example, DayLocator will draw a tick at each day. Of course, a minimum knowledge of the date interval that we are about to draw is needed to select the best locator.
DateFormatter is the tick formatter that uses strftime() to format strings.

The default locator and formatter are matplotlib.dates.AutoDateLocator and matplotlib.dates.AutoDateFormatter, respectively. Both are set by the plot_date() function when called. So, if you wish to set a different locator and/or formatter, we suggest doing that after the plot_date() call, in order to avoid plot_date() resetting them to the default values.
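Since formatters are plain callables that take a tick value and a tick position and return the label string, DateFormatter can be tried on its own. A small standalone sketch (Agg backend assumed):

```python
import datetime as dt
import matplotlib
matplotlib.use("Agg")
from matplotlib import dates as mdates

fmt = mdates.DateFormatter('%Y-%m-%d')          # strftime-style template
x = mdates.date2num(dt.datetime(2008, 9, 23))   # a tick value, as a float
label = fmt(x, 0)                               # -> '2008-09-23'
```

This is exactly what happens at draw time: the axis calls the formatter once per major tick.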
Let's group all this up in an example:

In [1]: import matplotlib as mpl
In [2]: import matplotlib.pyplot as plt
In [3]: import numpy as np
In [4]: import datetime as dt
In [5]: fig = plt.figure()
In [6]: ax2 = fig.add_subplot(212)
In [7]: date2_1 = dt.datetime(2008, 9, 23)
In [8]: date2_2 = dt.datetime(2008, 10, 3)
In [9]: delta2 = dt.timedelta(days=1)
In [10]: dates2 = mpl.dates.drange(date2_1, date2_2, delta2)
In [11]: y2 = np.random.rand(len(dates2))
In [12]: ax2.plot_date(dates2, y2, linestyle='-');
In [13]: dateFmt = mpl.dates.DateFormatter('%Y-%m-%d')
In [14]: ax2.xaxis.set_major_formatter(dateFmt)
In [15]: daysLoc = mpl.dates.DayLocator()
In [16]: hoursLoc = mpl.dates.HourLocator(interval=6)
In [17]: ax2.xaxis.set_major_locator(daysLoc)
In [18]: ax2.xaxis.set_minor_locator(hoursLoc)
In [19]: fig.autofmt_xdate(bottom=0.18)  # adjust for date labels display
In [20]: fig.subplots_adjust(left=0.18)
In [21]: ax1 = fig.add_subplot(211)
In [22]: date1_1 = dt.datetime(2008, 9, 23)
In [23]: date1_2 = dt.datetime(2009, 2, 16)
In [24]: delta1 = dt.timedelta(days=10)
In [25]: dates1 = mpl.dates.drange(date1_1, date1_2, delta1)
In [26]: y1 = np.random.rand(len(dates1))
In [27]: ax1.plot_date(dates1, y1, linestyle='-');
In [28]: monthsLoc = mpl.dates.MonthLocator()
In [29]: weeksLoc = mpl.dates.WeekdayLocator()
In [30]: ax1.xaxis.set_major_locator(monthsLoc)
In [31]: ax1.xaxis.set_minor_locator(weeksLoc)
In [32]: monthsFmt = mpl.dates.DateFormatter('%b')
In [33]: ax1.xaxis.set_major_formatter(monthsFmt)
In [34]: plt.show()

The result of executing the previous code snippet is as shown:

We drew the subplots in reverse order to avoid some minor overlapping problems. fig.autofmt_xdate() is used to nicely format date tick labels. In particular, this function rotates the labels (by using the rotation keyword argument, with a default value of 30°) and gives them more room (by using the bottom keyword argument, with a default value of 0.2).
We can achieve the same result, at least for the additional spacing, with:

fig = plt.figure()
fig.subplots_adjust(bottom=0.2)
ax = fig.add_subplot(111)

This can also be done by creating the Axes instance directly with:

ax = fig.add_axes([left, bottom, width, height])

while specifying the explicit dimensions. The subplots_adjust() function allows us to control the spacing around the subplots by using the following keyword arguments:

bottom, top, left, right: Control the spacing at the bottom, top, left, and right of the subplot(s)
wspace, hspace: Control the horizontal and vertical spacing between subplots

We can also control the spacing by using these parameters in the Matplotlib configuration file:

figure.subplot.<position> = <value>

Custom formatters and locators

Even if it's not strictly related to date plotting, the tick machinery allows for custom formatters too:

...
import matplotlib.ticker as ticker
...
def format_func(x, pos):
    return <a transformation on x>
...
formatter = ticker.FuncFormatter(format_func)
ax.xaxis.set_major_formatter(formatter)
...

The function format_func will be called for each label to draw, passing its value and position on the axis. With those two arguments, we can apply a transformation (for example, divide x by 10) and then return a value that will be used to actually draw the tick label. Here's a general note on NullLocator: it can be used to remove axis ticks by simply issuing:

ax.xaxis.set_major_locator(matplotlib.ticker.NullLocator())

Text properties, fonts, and LaTeX

Matplotlib has excellent text support, including mathematical expressions, TrueType font support for raster and vector outputs, newline-separated text with arbitrary rotations, and Unicode. We have total control over every text property (font size, font weight, text location, color, and so on) with sensible defaults set in the rc configuration file.
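Returning briefly to the FuncFormatter pattern above, here is a concrete standalone sketch; the thousands-with-a-'k'-suffix transformation is just an illustration, not part of the original example.

```python
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker

def format_func(x, pos):
    # called once per tick with the tick value and its position
    return '%dk' % (x / 1000)

fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot([0, 1], [0, 5000])
ax.yaxis.set_major_formatter(ticker.FuncFormatter(format_func))
```

After this, a Y tick at 2000 is labeled '2k' instead of 2000 when the figure is drawn.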
Specifically for those interested in mathematical or scientific figures, Matplotlib implements a large number of TeX math symbols and commands to support mathematical expressions anywhere in the figure. We already saw some text functions, but the following list contains all the functions which can be used to insert text with the pyplot interface, presented along with the corresponding API method and a description:

text() / mpl.axes.Axes.text(): Adds text at an arbitrary location to the Axes
xlabel() / mpl.axes.Axes.set_xlabel(): Adds an axis label to the X-axis
ylabel() / mpl.axes.Axes.set_ylabel(): Adds an axis label to the Y-axis
title() / mpl.axes.Axes.set_title(): Adds a title to the Axes
figtext() / mpl.figure.Figure.text(): Adds text at an arbitrary location to the Figure
suptitle() / mpl.figure.Figure.suptitle(): Adds a centered title to the Figure
annotate() / mpl.axes.Axes.annotate(): Adds an annotation, with an optional arrow, to the Axes

All of these commands return a matplotlib.text.Text instance. We can customize the text properties by passing keyword arguments to the functions or by using matplotlib.artist.setp():

t = plt.xlabel('some text', fontsize=16, color='green')

We can do it as:

t = plt.xlabel('some text')
plt.setp(t, fontsize=16, color='green')

Handling objects allows for several new possibilities, such as setting the same property on all the objects in a specific group. Matplotlib has several convenience functions to return the objects of a plot. Let's take the example of the tick labels:

ax.get_xticklabels()

This line of code returns a sequence of object instances (the labels for the X-axis ticks) that we can tune:

for t in ax.get_xticklabels():
    t.set_fontsize(5.)

or else, still using setp():

setp(ax.get_xticklabels(), fontsize=5.)

It can take a sequence of objects and apply the same property to all of them.
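The two equivalent approaches above can be sketched end to end (a standalone example under the assumed Agg backend; drawing once makes sure the tick label objects exist before we touch them):

```python
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(range(5))
fig.canvas.draw()  # lay out the ticks so the label objects are final

# set the same properties on every X tick label at once
plt.setp(ax.get_xticklabels(), fontsize=5.0, color='green')
```

The per-object loop with set_fontsize() would produce exactly the same result; setp() is simply the one-liner form.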
To recap, all of the properties, such as color, fontsize, position, rotation, and so on, can be set either:

At function call, using keyword arguments
Using setp(), referencing the Text instance
Using the modification functions (the set_*() methods)

Fonts

Where there is text, there are also fonts to draw it. Matplotlib allows for several font customizations. The most complete documentation on this is currently available in the Matplotlib configuration file, /etc/matplotlibrc; we report that information here. There are six font properties available for modification:

font.family: It has five values: serif (example, Times), sans-serif (example, Helvetica), cursive (example, Zapf-Chancery), fantasy (example, Western), and monospace (example, Courier). Each of these font families has a default list of font names, in decreasing order of priority, associated with it (next table). In addition to these generic font names, font.family may also be an explicit name of a font available on the system.
font.style: Three values: normal (or roman), italic, or oblique. The oblique style will be used for italic, if it is not present.
font.variant: Two values: normal or small-caps. For TrueType fonts, which are scalable fonts, small-caps is equivalent to using a font size of smaller, or about 83% of the current font size.
font.weight: Effectively has 13 values: normal, bold, bolder, lighter, 100, 200, 300, ..., 900. normal is the same as 400, and bold is 700. bolder and lighter are relative values with respect to the current weight.
font.stretch: 11 values: ultra-condensed, extra-condensed, condensed, semi-condensed, normal, semi-expanded, expanded, extra-expanded, ultra-expanded, wider, and narrower. This property is not currently implemented; it works if the font supports it, but only few do.
font.size: The default font size for text, given in points. 12pt is the standard value.
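As a standalone sketch of these properties in action (the specific values chosen here are illustrative assumptions): they can be set globally through rcParams, using the same font.* names, while per-object keyword arguments still override the global defaults.

```python
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

# global defaults, mirroring the font.* entries in matplotlibrc
plt.rcParams['font.family'] = 'monospace'
plt.rcParams['font.size'] = 10.0

fig = plt.figure()
ax = fig.add_subplot(111)
# per-object overrides win over the global defaults
t = ax.set_title('styled title', fontsize=16, fontweight='bold')
```

Any text created after the rcParams assignments picks up the new defaults, unless overridden as the title is here.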

Build an Advanced Contact Manager using JBoss RichFaces 3.3: Part 1

Packt
18 Nov 2009
11 min read
The main layout

Let's start preparing the space for the core features of the application. We want a three-column layout for groups, contacts list, and contact detail. Let's open the home.xhtml file and add a three-column panel grid inside the body:

<h:panelGrid columns="3" width="100%"
    columnClasses="main-group-column, main-contacts-list-column, main-contact-detail-column">
</h:panelGrid>

We are using three new CSS classes (one for every column). Let's open the /view/stylesheet/theme.css file and add the following code:

.main-group-column {
    width: 20%;
    vertical-align: top;
}
.main-contacts-list-column {
    width: 40%;
    vertical-align: top;
}
.main-contact-detail-column {
    width: 40%;
    vertical-align: top;
}

The main columns are ready; now we want to split the content of every column into a separate file (so we don't have a large and difficult-to-read file) by using the Facelets templating capabilities. Let's create a new folder inside the /view folder called main, and let's create the following empty files inside it:

contactsGroups.xhtml
contactsList.xhtml
contactEdit.xhtml
contactView.xhtml

Now let's open them and put in the standard code for an empty (included) file:

<!DOCTYPE composition PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<ui:composition >
    <!-- my code here -->
</ui:composition>

Now that we have all of the pieces ready to be included into the home.xhtml file, let's open it and start adding the first column inside h:panelGrid:

<a:outputPanel id="contactsGroups">
    <ui:include src="main/contactsGroups.xhtml"/>
</a:outputPanel>

As you can see, we surrounded the include with an a:outputPanel that will be used as a placeholder for re-rendering purposes. Inside the a:outputPanel, we use a Facelets tag (ui:include) in order to include the page at that point.
Ajax placeholders

A very important concept to keep in mind while developing is that the Ajax framework can't add or delete elements, but can only replace existing elements in the page. For this reason, if you want to append some code, you need to use a placeholder. RichFaces has a component that can be used as a placeholder: a4j:outputPanel. Inside a4j:outputPanel, you can put other components that use the rendered attribute in order to decide whether they are visible or not. When you want to re-render all the included components, just re-render the outputPanel, and everything will work without any problem. Here is a non-working code snippet:

<h:form>
    <h:inputText value="#{aBean.myText}">
        <a4j:support event="onkeyup" reRender="out1" />
    </h:inputText>
</h:form>
<h:outputText id="out1" value="#{aBean.myText}"
    rendered="#{not empty aBean.myText}"/>

This code seems the same as that of the a4j:support example, but it won't work. The problem is that we added the rendered attribute to the outputText, so initially, out1 will not be rendered (because the text property is initially empty and rendered will be equal to false). After the Ajax response, the JavaScript engine will not find the out1 element (it is not in the page because of rendered="false"), and it will not be able to update it (remember that you can't add or delete elements, only replace them). It is very simple to make the code work:

<h:form>
    <h:inputText value="#{aBean.myText}">
        <a4j:support event="onkeyup" reRender="out2" />
    </h:inputText>
</h:form>
<a4j:outputPanel id="out2">
    <h:outputText id="out1" rendered="#{not empty aBean.myText}"
        value="#{aBean.myText}" />
</a4j:outputPanel>

As you can see, you just have to put the out1 component inside an a4j:outputPanel (called out2) and tell a4j:support to re-render out2 instead of out1. Initially, out2 will be rendered but empty (because out1 will not be rendered).
After the Ajax response, the empty out2 will be replaced with markup elements that also contain the out1 component (which is now visible, because the myText property is not empty after the Ajax update and the rendered property is true).

The groups box

This box will contain all the contacts groups, so the user will be able to better organize contacts in different groups. We will not implement the group box features in this article; for now, the group column is just a rich:panel with a link to refresh the contact list. Let's open the contactsGroups.xhtml file and insert the following code:

<h:form>
    <rich:panel>
        <f:facet name="header">
            <h:outputText value="#{messages['groups']}" />
        </f:facet>
        <h:panelGrid columns="1">
            <a:commandLink value="#{messages['allContacts']}"
                ajaxSingle="true" reRender="contactsList">
                <f:setPropertyActionListener value="#{null}"
                    target="#{homeContactsListHelper.contactsList}" />
            </a:commandLink>
        </h:panelGrid>
    </rich:panel>
</h:form>

As you can see, we've put in an h:panelGrid (to be used in the future) and an a:commandLink, which just sets the contactsList property of the homeContactsListHelper bean (which we will see in the next section) to null, in order to make the list be read again. At the end of the Ajax interaction, it will re-render the contactsList column in order to show the new data. Also, notice that we are still supporting i18n for every text using the messages property; the task of filling the messages_XX.properties file is left as an exercise for the reader.
The contacts list

The second column inside the h:panelGrid of home.xhtml looks like:

<a:outputPanel id="contactsList">
    <ui:include src="main/contactsList.xhtml"/>
</a:outputPanel>

As for the groups, we used a placeholder surrounding the ui:include tag. Now let's focus on creating the data table. Open the /view/main/contactsList.xhtml file and add the first snippet of code for the dataTable:

<h:form>
    <rich:dataTable id="contactsTable" reRender="contactsTableDS"
        rows="20" value="#{homeContactsListHelper.contactsList}"
        var="contact">
        <rich:column width="45%">
            <h:outputText value="#{contact.name}"/>
        </rich:column>
        <rich:column width="45%">
            <h:outputText value="#{contact.surname}"/>
        </rich:column>
        <f:facet name="footer">
            <rich:datascroller id="contactsTableDS" for="contactsTable"
                renderIfSinglePage="false"/>
        </f:facet>
    </rich:dataTable>
    <h:outputText value="#{messages['noContactsInList']}"
        rendered="#{homeContactsListHelper.contactsList.size()==0}"/>
</h:form>

We just added the rich:dataTable component with some columns and an Ajax data scroller at the end.

Differences between h:dataTable and rich:dataTable

RichFaces provides its own version of h:dataTable, which contains more features and is better integrated with the RichFaces framework. The first important additional feature is the skinnability support following the RichFaces standards. Other features are row and column span support (we will discuss it in the Columns and column groups section), out-of-the-box filtering and sorting (discussed in the Filtering and sorting section), more JavaScript event handlers (such as onRowClick, onRowContextMenu, onRowDblClick, and so on), and the reRender attribute. Like other data iteration components of the RichFaces framework, it also supports partial-row updates.
Data pagination

Implementing Ajax data pagination using RichFaces is really simple: just decide how many rows must be shown on every page by setting the rows attribute of dataTable (in our case, we've chosen 20 rows per page), and then "attach" the rich:datascroller component to it by filling the for attribute with the dataTable id:

<rich:datascroller id="contactsTableDS" for="contactsTable"
    renderIfSinglePage="false"/>

Here you can see another very useful attribute (renderIfSinglePage) that makes the component hidden when there is just a single page in the list (that is, the list contains a number of items that is less than or equal to the value of the rows attribute). A thing to keep in mind is that the rich:datascroller component must stay inside a form component (h:form or a:form) in order to work. Customizing rich:datascroller is possible not only by using CSS classes (as usual), but also by personalizing its parts using the following facets:

pages
controlsSeparator
first, first_disabled
last, last_disabled
next, next_disabled
previous, previous_disabled
fastforward, fastforward_disabled
fastrewind, fastrewind_disabled

Here is an example with some customized facets (using strings):

<rich:datascroller id="contactsTableDS" for="contactsTable"
    renderIfSinglePage="false">
    <f:facet name="first">
        <h:outputText value="First" />
    </f:facet>
    <f:facet name="last">
        <h:outputText value="Last" />
    </f:facet>
</rich:datascroller>

Here is the result:

You can use an image (or another component) instead of text, in order to create your own customized scroller.
Another interesting example is:

<rich:datascroller id="contactsTableDS" for="contactsTable"
    renderIfSinglePage="false">
    <f:facet name="first">
        <h:outputText value="First"/>
    </f:facet>
    <f:facet name="last">
        <h:outputText value="Last"/>
    </f:facet>
    <f:attribute name="pageIndexVar" value="pageIndexVar"/>
    <f:attribute name="pagesVar" value="pagesVar"/>
    <f:facet name="pages">
        <h:panelGroup>
            <h:outputText value="Page #{pageIndexVar} / #{pagesVar}"/>
        </h:panelGroup>
    </f:facet>
</rich:datascroller>

The result is:

By setting the pageIndexVar and pagesVar attributes, we are able to use them in an outputText component, as we've done in the example. A useful attribute of the component is maxPages, which sets the maximum number of page links (the numbers in the middle) that the scroller shows—therefore, we can control its size.

The page attribute can be bound to a property of a bean in order to switch to a page given its number—a simple use case could be using an inputText and a commandButton to let the client insert the page number that he/she wants to go to. Here is the code that shows how to implement it:

<rich:datascroller for="contactsList" maxPages="20" fastControls="hide"
    page="#{customDataScrollerExampleHelper.scrollerPage}"
    pagesVar="pages" id="ds">
    <f:facet name="first">
        <h:outputText value="First" />
    </f:facet>
    <f:facet name="first_disabled">
        <h:outputText value="First" />
    </f:facet>
    <f:facet name="last">
        <h:outputText value="Last" />
    </f:facet>
    <f:facet name="last_disabled">
        <h:outputText value="Last" />
    </f:facet>
    <f:facet name="previous">
        <h:outputText value="Previous" />
    </f:facet>
    <f:facet name="previous_disabled">
        <h:outputText value="Previous" />
    </f:facet>
    <f:facet name="next">
        <h:outputText value="Next" />
    </f:facet>
    <f:facet name="next_disabled">
        <h:outputText value="Next" />
    </f:facet>
    <f:facet name="pages">
        <h:panelGroup>
            <h:outputText value="Page "/>
            <h:inputText value="#{customDataScrollerExampleHelper.scrollerPage}"
                size="4">
                <f:validateLongRange minimum="0" />
                <a:support event="onkeyup" timeout="500"
                    oncomplete="#{rich:component('ds')}.switchToPage(this.value)" />
            </h:inputText>
            <h:outputText value=" of #{pages}"/>
        </h:panelGroup>
    </f:facet>
</rich:datascroller>

As you can see, besides customizing the text of the First, Last, Previous, and Next sections, we defined a pages facet by inserting an h:inputText connected to an integer value inside a backing bean. We also added the a:support tag in order to trigger the page change after the keyup event is completed. We've also set the timeout attribute, so that the server is called at most every 500 ms and not on every keystroke. You can see a screenshot of the feature here:
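The backing bean behind #{customDataScrollerExampleHelper.scrollerPage} is not shown above; a minimal sketch of what it might look like follows (plain Java, names taken from the snippet; the clamping of out-of-range page numbers is my own assumption, not something the datascroller requires):

```java
public class CustomDataScrollerExampleHelper {

    private int scrollerPage = 1;   // page currently shown by the datascroller
    private int totalPages = 20;    // assumed to be computed from the data list

    public int getScrollerPage() {
        return scrollerPage;
    }

    // Keep the requested page inside the valid [1, totalPages] range
    // before storing it.
    public void setScrollerPage(int page) {
        this.scrollerPage = Math.min(Math.max(page, 1), totalPages);
    }
}
```

With this in place, a user typing an out-of-range number into the inputText simply lands on the first or last page instead of causing an error.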
Packt
18 Nov 2009
10 min read

Build an Advanced Contact Manager using JBoss RichFaces 3.3: Part 2

The contact detail

For the third column, we would like to show three different statuses:

• The "No contact selected" message when no contact is selected (so the property is null)
• A view-only box when we are not in edit mode (the property selectedContactEditing is set to false)
• An edit box when we are in edit mode (the property selectedContactEditing is set to true)

So, let's open the home.xhtml page and insert the third column inside the panel grid with the three statuses:

<a:outputPanel id="contactDetail">
    <a:outputPanel rendered="#{homeSelectedContactHelper.selectedContact==null}">
        <rich:panel>
            <h:outputText value="#{messages['noContactSelected']}"/>
        </rich:panel>
    </a:outputPanel>
    <a:outputPanel rendered="#{homeSelectedContactHelper.selectedContact!=null
            and homeSelectedContactHelper.selectedContactEditing==false}">
        <ui:include src="main/contactView.xhtml"/>
    </a:outputPanel>
    <a:outputPanel rendered="#{homeSelectedContactHelper.selectedContact!=null
            and homeSelectedContactHelper.selectedContactEditing==true}">
        <ui:include src="main/contactEdit.xhtml"/>
    </a:outputPanel>
</a:outputPanel>

Here, we have put the main a:outputPanel as the placeholder, and inside it we put three more a:outputPanel instances (one for each state) with the rendered attribute deciding which one to show. The first one just shows a message when homeSelectedContactHelper.selectedContact is null. The second includes the main/contactView.xhtml file only if homeSelectedContactHelper.selectedContact is not null and we are not in editing mode (so homeSelectedContactHelper.selectedContactEditing is set to false); the third is shown only if homeSelectedContactHelper.selectedContact is not null and we are in edit mode (that is, homeSelectedContactHelper.selectedContactEditing is equal to true).
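The three rendered expressions are mutually exclusive, so exactly one panel is shown at a time. The decision logic they encode can be sketched in plain Java (the method name and return values are illustrative only):

```java
public class ContactDetailState {

    // Mirrors the three rendered expressions: no selection, view mode, edit mode.
    static String panelToRender(Object selectedContact, Boolean editing) {
        if (selectedContact == null) {
            return "noContactSelected";
        }
        // Boolean.TRUE.equals also treats a null editing flag as "view".
        return Boolean.TRUE.equals(editing) ? "contactEdit" : "contactView";
    }

    public static void main(String[] args) {
        System.out.println(panelToRender(null, null));           // noContactSelected
        System.out.println(panelToRender(new Object(), false));  // contactView
        System.out.println(panelToRender(new Object(), true));   // contactEdit
    }
}
```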
Before starting to write the include sections, let's see how the main bean for the selected contact looks, and connect it with the data table for selecting the contact.

The support bean

Let's create a new class called HomeSelectedContactHelper inside the book.richfaces.advcm.modules.main package; the class might look like this:

@Name("homeSelectedContactHelper")
@Scope(ScopeType.CONVERSATION)
public class HomeSelectedContactHelper {

    @In(create = true)
    EntityManager entityManager;

    @In(required = true)
    Contact loggedUser;

    @In
    FacesMessages facesMessages;

    // My code here
}

This is a standard JBoss Seam component; now let's add the properties. The bean that we are going to use for the view and edit features is very simple to understand—it just contains two properties (namely selectedContact and selectedContactEditing) and some action methods to manage them. Let's add the properties to our class:

private Contact selectedContact;
private Boolean selectedContactEditing;

public Contact getSelectedContact() {
    return selectedContact;
}

public void setSelectedContact(Contact selectedContact) {
    this.selectedContact = selectedContact;
}

public Boolean getSelectedContactEditing() {
    return selectedContactEditing;
}

public void setSelectedContactEditing(Boolean selectedContactEditing) {
    this.selectedContactEditing = selectedContactEditing;
}

As you can see, we just added two properties with standard getters and setters.
Let's now see the action methods:

public void createNewEmptyContactInstance() {
    setSelectedContact(new Contact());
}

public void insertNewContact() {
    // Attaching the owner of the contact
    getSelectedContact().setContact(loggedUser);
    entityManager.persist(getSelectedContact());
    facesMessages.addFromResourceBundle(StatusMessage.Severity.INFO,
        "contactAdded");
}

public void saveContactData() {
    entityManager.merge(getSelectedContact());
    facesMessages.addFromResourceBundle(StatusMessage.Severity.INFO,
        "contactSaved");
}

public void deleteSelectedContact() {
    entityManager.remove(getSelectedContact());
    // De-selecting the current contact
    setSelectedContact(null);
    setSelectedContactEditing(null);
    facesMessages.addFromResourceBundle(StatusMessage.Severity.INFO,
        "contactDeleted");
}

public boolean isSelectedContactManaged() {
    return getSelectedContact() != null
        && entityManager.contains(getSelectedContact());
}

It's not difficult to understand what they do; however, in order to be clear, let's describe each method. The createNewEmptyContactInstance() method simply sets the selectedContact property to a new instance of the Contact class—it will be called by the "add contact" button. After the user has clicked on the "add contact" button and inserted the contact data, the new instance has to be persisted to the database. This is done by the insertNewContact() method, called when the user clicks on the Insert button. If the user edits a contact and clicks on the Save button, the saveContactData() method is called, in order to store the modifications in the database. Similarly, the deleteSelectedContact() method is called by the Delete button, in order to remove the instance from the database. A special mention for the isSelectedContactManaged() method—it is used to determine whether the selectedContact property contains a bean that exists in the database (so we are editing it), or a new instance not yet persisted to the database.
We use it especially in rendered attributes, in order to determine which component to show (you will see this in the next section).

Selecting the contact from the contacts list

We will use the contacts list in order to decide which contact must be shown in the detail view. The simplest way is to add a new column to the dataTable, and put a command button (or link) in it to select the bean whose detail should be visualized. Let's open the contactsList.xhtml file and add another column as follows:

<rich:column width="10%" style="text-align: center">
    <a:commandButton image="/img/view.png" reRender="contactDetail">
        <f:setPropertyActionListener value="#{contact}"
            target="#{homeSelectedContactHelper.selectedContact}"/>
        <f:setPropertyActionListener value="#{false}"
            target="#{homeSelectedContactHelper.selectedContactEditing}"/>
    </a:commandButton>
</rich:column>

Inside the column, we added the a:commandButton component (which shows an image instead of the standard text) that doesn't call any action—it uses the f:setPropertyActionListener tag to set the homeSelectedContactHelper.selectedContact value to contact (the row value of the dataTable), and to show the view box rather than the edit one (setting homeSelectedContactHelper.selectedContactEditing to false). After the Ajax call, it re-renders the contactDetail box in order to reflect the change.

Also, the header must be changed to reflect the added column:

<rich:dataTable ... >
    <f:facet name="header">
        <rich:columnGroup>
            <rich:column colspan="3">
                <h:outputText value="Contacts"/>
            </rich:column>
            <rich:column breakBefore="true">
                <h:outputText value="Name"/>
            </rich:column>
            <rich:column>
                <h:outputText value="Surname"/>
            </rich:column>
            <rich:column>
                <rich:spacer/>
            </rich:column>
        </rich:columnGroup>
    </f:facet>
    ...

We incremented the colspan attribute value and added a new (empty) column header.
The new contacts list will look like the following screenshot:

Adding a new contact

Another feature we would like to add to the contacts list is the "Add contact" button. In order to do that, we are going to use the empty toolbar. Let's add a new action button into the rich:toolbar component:

<a:commandButton image="/img/addcontact.png" reRender="contactDetail"
    action="#{homeSelectedContactHelper.createNewEmptyContactInstance}">
    <f:setPropertyActionListener value="#{true}"
        target="#{homeSelectedContactHelper.selectedContactEditing}"/>
</a:commandButton>

This button calls the homeSelectedContactHelper.createNewEmptyContactInstance() action method in order to create and select an empty instance, and sets homeSelectedContactHelper.selectedContactEditing to true in order to start editing; after the Ajax call, it re-renders the contactDetail box to reflect the changes.

Viewing contact detail

We are ready to implement the view contact detail box; just open the /view/main/contactView.xhtml file and add the following code:

<h:form>
    <rich:panel>
        <f:facet name="header">
            <h:outputText value="#{homeSelectedContactHelper.selectedContact.name}
                #{homeSelectedContactHelper.selectedContact.surname}"/>
        </f:facet>
        <h:panelGrid columns="2" rowClasses="prop" columnClasses="name,value">
            <h:outputText value="#{messages['name']}:"/>
            <h:outputText value="#{homeSelectedContactHelper.selectedContact.name}"/>
            <h:outputText value="#{messages['surname']}:"/>
            <h:outputText value="#{homeSelectedContactHelper.selectedContact.surname}"/>
            <h:outputText value="#{messages['company']}:"/>
            <h:outputText value="#{homeSelectedContactHelper.selectedContact.company}"/>
            <h:outputText value="#{messages['email']}:"/>
            <h:outputText value="#{homeSelectedContactHelper.selectedContact.email}"/>
        </h:panelGrid>
    </rich:panel>
    <rich:toolBar>
        <rich:toolBarGroup>
            <a:commandLink ajaxSingle="true" reRender="contactDetail"
                styleClass="image-command-link">
                <f:setPropertyActionListener value="#{true}"
                    target="#{homeSelectedContactHelper.selectedContactEditing}"/>
                <h:graphicImage value="/img/edit.png" />
                <h:outputText value="#{messages['edit']}" />
            </a:commandLink>
        </rich:toolBarGroup>
    </rich:toolBar>
</h:form>

The first part is just a rich:panel containing an h:panelGrid with the fields' detail. In the second part of the code, we put a rich:toolBar containing a command link (with an image and a text) that activates the edit mode—in fact, it just sets the homeSelectedContactHelper.selectedContactEditing property to true and re-renders contactDetail in order to make the edit box appear.

We also added a new CSS class into the /view/stylesheet/theme.css file to manage the layout of command links with images:

.image-command-link {
    text-decoration: none;
}
.image-command-link img {
    vertical-align: middle;
    padding-right: 3px;
}

The view box looks like this:

We are now ready to develop the edit box.

Editing contact detail

When in edit mode, the content of the /view/main/contactEdit.xhtml file is shown in the contact detail box—let's open it for editing, and add the code for creating the main panel:

<h:form>
    <rich:panel>
        <f:facet name="header">
            <h:panelGroup>
                <h:outputText value="#{homeSelectedContactHelper.selectedContact.name}
                    #{homeSelectedContactHelper.selectedContact.surname}"
                    rendered="#{homeSelectedContactHelper.selectedContactManaged}"/>
                <h:outputText value="#{messages['newContact']}"
                    rendered="#{!homeSelectedContactHelper.selectedContactManaged}"/>
            </h:panelGroup>
        </f:facet>
        <!-- my code here -->
    </rich:panel>
    <!-- my code here -->
</h:form>

This is a standard rich:panel with a customized header—it has two h:outputText components that are shown depending on the rendered attribute (whether it's a new contact or not).

More than one component inside f:facet
Remember that f:facet must have only one child; so, to put more than one component inside it, you have to use a surrounding component such as h:panelGroup.
Inside the panel, we are going to put an h:panelGrid containing the components for data editing:

<rich:graphValidator>
    <h:panelGrid columns="3" rowClasses="prop"
        columnClasses="name,value,validatormsg">
        <h:outputLabel for="scName" value="#{messages['name']}:"/>
        <h:inputText id="scName"
            value="#{homeSelectedContactHelper.selectedContact.name}"/>
        <rich:message for="scName" styleClass="messagesingle"
            errorClass="errormsg" infoClass="infomsg" warnClass="warnmsg"/>
        <h:outputLabel for="scSurname" value="#{messages['surname']}:"/>
        <h:inputText id="scSurname"
            value="#{homeSelectedContactHelper.selectedContact.surname}"/>
        <rich:message for="scSurname" styleClass="messagesingle"
            errorClass="errormsg" infoClass="infomsg" warnClass="warnmsg"/>
        <h:outputLabel for="scCompany" value="#{messages['company']}:"/>
        <h:inputText id="scCompany"
            value="#{homeSelectedContactHelper.selectedContact.company}"/>
        <rich:message for="scCompany" styleClass="messagesingle"
            errorClass="errormsg" infoClass="infomsg" warnClass="warnmsg"/>
        <h:outputLabel for="scEmail" value="#{messages['email']}:"/>
        <h:inputText id="scEmail"
            value="#{homeSelectedContactHelper.selectedContact.email}"/>
        <rich:message for="scEmail" styleClass="messagesingle"
            errorClass="errormsg" infoClass="infomsg" warnClass="warnmsg"/>
    </h:panelGrid>
</rich:graphValidator>

Nothing complicated here; we've just used h:outputLabel, h:inputText, and rich:message for every Contact property to be edited. It appears as follows:

Packt
18 Nov 2009
7 min read

Create a Quick Application in CakePHP: Part 2

Editing a Task

Now that we can add tasks to CakeTooDoo, the next thing we will do is add the ability to edit tasks. This is necessary because the users should be able to tick a task when it has been completed. Also, if the users are not happy with the title of a task, they can change it. To have these features in CakeTooDoo, we will need to add another action to our Tasks controller and also add a view for this action.

Time for Action: Creating the Edit Task Form

Open the file tasks_controller.php and add a new action named edit as shown in the following code:

function edit($id = null) {
    if (!$id) {
        $this->Session->setFlash('Invalid Task');
        $this->redirect(array('action' => 'index'), null, true);
    }
    if (empty($this->data)) {
        $this->data = $this->Task->find(array('id' => $id));
    } else {
        if ($this->Task->save($this->data)) {
            $this->Session->setFlash('The Task has been saved');
            $this->redirect(array('action' => 'index'), null, true);
        } else {
            $this->Session->setFlash('The Task could not be saved. Please, try again.');
        }
    }
}

Inside the directory /CakeTooDoo/app/views/tasks, create a new file named edit.ctp and add the following code to it:

<?php echo $form->create('Task'); ?>
<fieldset>
    <legend>Edit Task</legend>
    <?php
        echo $form->hidden('id');
        echo $form->input('title');
        echo $form->input('done');
    ?>
</fieldset>
<?php echo $form->end('Save'); ?>

We will be accessing the Edit Task form from the List All Tasks page. So, let's add a link from the List All Tasks page to the Edit Task page. Open the index.ctp file in the /CakeTooDoo/app/views/tasks directory, and replace the HTML comment <!-- different actions on tasks will be added here later --> with the following code:

<?php echo $html->link('Edit', array('action' => 'edit', $task['Task']['id'])); ?>

Now open the List All Tasks page in the browser by pointing it to http://localhost/CakeTooDoo/tasks/index and we will see an edit link beside every task.
Click on the edit link of the task you want to edit, and this will take you to the Edit Task form, as shown below:

Now let us add links from the Edit Task form page to the List All Tasks and Add New Task pages. Add the following code to the end of edit.ctp in /CakeTooDoo/app/views/tasks:

<?php echo $html->link('List All Tasks', array('action' => 'index')); ?><br />
<?php echo $html->link('Add Task', array('action' => 'add')); ?>

What Just Happened?

We added a new action named edit in the Tasks controller. Then we went on to add the view file edit.ctp for this action. Lastly, we linked the other pages to the Edit Task page using the HTML helper.

When accessing this page, we need to tell the action which task we intend to edit. This is done by passing the task id in the URL. So, if we want to edit the task with an id of 2, we need to point our browser to http://localhost/CakeTooDoo/tasks/edit/2. When such a request is made, Cake forwards it to the Tasks controller's edit action, passing the value of the id as the first parameter of the edit action. If we check the edit action, we will notice that it accepts a parameter named $id; the task id passed in the URL is stored in this parameter.

When a request is made to the edit action, the first thing it does is check whether any id has been supplied. To let users edit a task, it needs to know which task the user wants to edit; it cannot continue if there is no id supplied. So, if $id is undefined, it stores an error message in the session and redirects to the index action, which will show the list of current tasks along with the error message.

If $id is defined, the edit action then checks whether there is any data stored in $this->data. If no data is stored in $this->data, it means that the user has not yet edited the task.
And so, the desired task is fetched from the Task model and stored in $this->data in the line:

$this->data = $this->Task->find(array('id' => $id));

Once that is done, the view of the edit action is rendered, displaying the task information. The view fetches the task information to be displayed from $this->data. The view of the edit action is very similar to that of the add action, with a single difference: it has an extra line, echo $form->hidden('id');. This creates an HTML hidden input with the value of the task id being edited.

Once the user edits the task and clicks on the Save button, the edited data is sent back to the edit action and saved in $this->data. Having data in $this->data confirms that the user has edited and submitted the changed data. Thus, if $this->data is not empty, the edit action tries to save the data by calling the Task model's save() function: $this->Task->save($this->data). This is the same function that we used to add a new task in the add action. You may ask how the model's save() function knows when to add a new record and when to edit an existing one: if the form data has a hidden id field, the function knows that it needs to edit an existing record with that id; if no id field is found, the function adds a new record. Once the data has been successfully updated, a success message is stored in the session and the action redirects to the index action. Of course, the index page will show the success message.

Adding Data Validation

If you have come this far, by now you should have a working CakeTooDoo. It has the ability to add a task, list all the tasks with their statuses, and edit a task to change its status and title. But we are still not happy with it. We want CakeTooDoo to be a quality application, and making a quality application with CakePHP is as easy as eating a cake.
A very important aspect of any web application (or software in general) is to make sure that users do not enter invalid input. For example, suppose a user mistakenly adds a task with an empty title. This is not desirable, because without a title we cannot identify a task. We would want our application to check whether the user enters a title; if they do not, CakeTooDoo should not allow the user to add or edit the task, and should show the user a message stating the problem. Adding these checks is what we call data validation. No matter how big or small our applications are, it is very important to have proper data validation in place. But adding data validation can be a painful and time-consuming task, especially if we have a complex application with lots of forms. Thankfully, CakePHP comes with a built-in data validation feature that can really make our lives much easier.

Time for Action: Adding Data Validation to Check for Empty Title

In the Task model that we created in /CakeTooDoo/app/models, add the following code inside the Task model class. The Task model will look like this:

<?php
class Task extends AppModel {
    var $name = 'Task';
    var $validate = array(
        'title' => array(
            'rule' => VALID_NOT_EMPTY,
            'message' => 'Title of a task cannot be empty'
        )
    );
}
?>

Now open the Add Task form in the browser by pointing it to http://localhost/CakeTooDoo/tasks/add, and try to add a task with an empty title. It will show the following error message:
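The not-empty rule itself is trivial to express; as a language-neutral illustration, here is the equivalent check written in plain Java (CakePHP applies the VALID_NOT_EMPTY rule for you, so this is only a sketch of its behavior, with assumed names):

```java
public class TitleValidator {

    // Returns the error message for an empty or whitespace-only title,
    // or null when the title is valid -- the same behavior the
    // $validate rule above configures.
    static String validateTitle(String title) {
        if (title == null || title.trim().isEmpty()) {
            return "Title of a task cannot be empty";
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(validateTitle(""));          // Title of a task cannot be empty
        System.out.println(validateTitle("Buy milk"));  // null
    }
}
```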