
How-To Tutorials - Application Development

357 Articles
Working with Complex Associations using CakePHP

Packt
28 Oct 2009
6 min read
Defining a Many-To-Many Relationship in Models

In the previous article in this series, Working with Simple Associations using CakePHP, we assumed that a book can have only one author. But in a real-life scenario, a book may have more than one author. In that case, the relation between authors and books is many-to-many. We are now going to see how to define associations for a many-to-many relation. We will modify the code-base we were working on in the previous article to set up the associations needed to represent a many-to-many relation.

Time for Action: Defining a Many-To-Many Relation

1. Empty the database tables:

```sql
TRUNCATE TABLE `authors`;
TRUNCATE TABLE `books`;
```

2. Remove the author_id field from the books table:

```sql
ALTER TABLE `books` DROP `author_id`;
```

3. Create a new table, authors_books:

```sql
CREATE TABLE `authors_books` (
    `author_id` INT NOT NULL,
    `book_id` INT NOT NULL
);
```

4. Modify the Author model (/app/models/author.php):

```php
<?php
class Author extends AppModel {
    var $name = 'Author';
    var $hasAndBelongsToMany = 'Book';
}
?>
```

5. Modify the Book model (/app/models/book.php):

```php
<?php
class Book extends AppModel {
    var $name = 'Book';
    var $hasAndBelongsToMany = 'Author';
}
?>
```

6. Modify the AuthorsController (/app/controllers/authors_controller.php):

```php
<?php
class AuthorsController extends AppController {
    var $name = 'Authors';
    var $scaffold;
}
?>
```

7. Modify the BooksController (/app/controllers/books_controller.php):

```php
<?php
class BooksController extends AppController {
    var $name = 'Books';
    var $scaffold;
}
?>
```

8. Now, visit the following URLs and add some test data into the system: http://localhost/relationship/authors/ and http://localhost/relationship/books/

What Just Happened?

We first emptied the database tables and then dropped the author_id field from the books table. Then we added a new join table, authors_books, that will be used to establish a many-to-many relation between authors and books.
The following diagram shows how a join table relates two tables in a many-to-many relation. In a many-to-many relation, one record of either table can be related to multiple records of the other table. To establish this link, a join table is used: a join table contains two fields that hold the primary keys of the two records in relation.

CakePHP has certain conventions for naming a join table: join tables should be named after the tables in relation, in alphabetical order, with underscores in between. The join table between the authors and books tables should therefore be named authors_books, not books_authors. Also by Cake convention, the default foreign keys used in the join table must be the underscored, singular names of the models in relation, suffixed with _id.

After creating the join table, we defined associations in the models so that they know about the new relationship. We added hasAndBelongsToMany (HABTM) associations in both models. HABTM is a special type of association used to define a many-to-many relation in models; both models have HABTM associations so that the relationship is defined from both ends. After defining the associations in the models, we created two controllers for these models and added scaffolding to see the association working.

We could also use an array to set up the HABTM association in the models. The following code segment shows how to use an array to set up an HABTM association between authors and books in the Author model:

```php
var $hasAndBelongsToMany = array(
    'Book' => array(
        'className'             => 'Book',
        'joinTable'             => 'authors_books',
        'foreignKey'            => 'author_id',
        'associationForeignKey' => 'book_id'
    )
);
```

As with simple relationships, we can override default association characteristics by adding or modifying key/value pairs in the associative array.
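The join-table mechanics described above are framework-independent. As a hedged illustration (plain SQL through Python's sqlite3, with hypothetical sample data, not CakePHP itself), here is how two authors link to one book through an authors_books join table:

```python
import sqlite3

# In-memory database sketching the authors/books/authors_books schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE books   (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE authors_books (author_id INTEGER, book_id INTEGER);

    INSERT INTO authors VALUES (1, 'Author One'), (2, 'Author Two');
    INSERT INTO books   VALUES (1, 'Shared Book');
    -- Both authors are linked to the same book through the join table.
    INSERT INTO authors_books VALUES (1, 1), (2, 1);
""")

# Fetch all authors of book 1 by joining through the join table.
rows = conn.execute("""
    SELECT a.name FROM authors a
    JOIN authors_books ab ON ab.author_id = a.id
    WHERE ab.book_id = 1
    ORDER BY a.id
""").fetchall()
print([name for (name,) in rows])  # ['Author One', 'Author Two']
```

The same join is what CakePHP issues under the covers when it resolves an HABTM association.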
The foreignKey key/value pair holds the name of the foreign key found in the current model; the default is the underscored, singular name of the current model suffixed with _id. The associationForeignKey key/value pair holds the foreign-key name found in the corresponding table of the other model; the default is the underscored, singular name of the associated model suffixed with _id. We can also use conditions, fields, and order key/value pairs to customize the relationship in more detail.

Retrieving Related Model Data in a Many-To-Many Relation

As with one-to-one and one-to-many relations, once the associations are defined, CakePHP will automatically fetch the related data in a many-to-many relation.

Time for Action: Retrieving Related Model Data

1. Take out the scaffolding from both controllers, AuthorsController (/app/controllers/authors_controller.php) and BooksController (/app/controllers/books_controller.php).

2. Add an index() action inside the AuthorsController (/app/controllers/authors_controller.php), like the following:

```php
<?php
class AuthorsController extends AppController {
    var $name = 'Authors';

    function index() {
        $this->Author->recursive = 1;
        $authors = $this->Author->find('all');
        $this->set('authors', $authors);
    }
}
?>
```

3. Create a view file for the /authors/index action (/app/views/authors/index.ctp):

```php
<?php foreach($authors as $author): ?>
<h2><?php echo $author['Author']['name'] ?></h2>
<hr />
<h3>Book(s):</h3>
<ul>
<?php foreach($author['Book'] as $book): ?>
<li><?php echo $book['title'] ?></li>
<?php endforeach; ?>
</ul>
<?php endforeach; ?>
```

4. Write the following code inside the BooksController (/app/controllers/books_controller.php):

```php
<?php
class BooksController extends AppController {
    var $name = 'Books';

    function index() {
        $this->Book->recursive = 1;
        $books = $this->Book->find('all');
        $this->set('books', $books);
    }
}
?>
```

5. Create a view file for the /books/index action (/app/views/books/index.ctp):

```php
<?php foreach($books as $book): ?>
<h2><?php echo $book['Book']['title'] ?></h2>
<hr />
<h3>Author(s):</h3>
<ul>
<?php foreach($book['Author'] as $author): ?>
<li><?php echo $author['name'] ?></li>
<?php endforeach; ?>
</ul>
<?php endforeach; ?>
```

6. Now, visit the following URLs: http://localhost/relationship/authors/ and http://localhost/relationship/books/

What Just Happened?

In both controllers, we first set the model's $recursive attribute to 1 and then called the model's find('all') function. These find('all') operations therefore return all associated model data directly related to the respective models. The returned results are passed to the corresponding view files, where we loop through them and print out the models and their related data.

In the BooksController, the data returned from find('all') is stored in a variable $books. This find('all') returns an array of books, and every element of that array contains information about one book and its related authors:

```
Array
(
    [0] => Array
        (
            [Book] => Array
                (
                    [id] => 1
                    [title] => Book Title
                    ...
                )
            [Author] => Array
                (
                    [0] => Array
                        (
                            [id] => 1
                            [name] => Author Name
                            ...
                        )
                    [1] => Array
                        (
                            [id] => 3
                            ...
                        )
                )
        )
    ...
)
```

Similarly for the Author model, the returned data is an array of authors. Every element of that array contains two arrays: one holds the author information and the other is an array of books related to that author. These arrays are very much like what we got from a find('all') call in the case of the hasMany association.
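The nested shape of that result is easy to reproduce outside PHP. A minimal sketch (Python, with hypothetical rows, not actual CakePHP output) of grouping flat join rows into the book-with-authors structure described above:

```python
# Rows as they might come back from a join across books, authors_books, authors.
# (Hypothetical sample data for illustration.)
join_rows = [
    {"book_id": 1, "title": "Book Title", "author_id": 1, "author": "Author Name"},
    {"book_id": 1, "title": "Book Title", "author_id": 3, "author": "Other Author"},
]

# Group into one entry per book, each carrying a list of its authors,
# mirroring the ['Book'] / ['Author'] nesting in the result array.
books = {}
for row in join_rows:
    entry = books.setdefault(
        row["book_id"],
        {"Book": {"id": row["book_id"], "title": row["title"]}, "Author": []},
    )
    entry["Author"].append({"id": row["author_id"], "name": row["author"]})

result = list(books.values())
print(len(result), len(result[0]["Author"]))  # 1 2
```

One book with two related authors comes out as a single element whose Author list has two entries, which is exactly the shape the view loops iterate over.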
Working with XML in Flex 3 and Java-part2

Packt
28 Oct 2009
7 min read
Loading external XML documents

You can use the URLLoader class to load external data from a URL; it downloads data from a URL as text or binary data. In this section, we will see how to use the URLLoader class to load external XML data into your application. You create a URLLoader instance, call its load() method passing a URLRequest as a parameter, and register for its complete event to handle the loaded data. The following code snippet shows how this works:

```actionscript
private var xmlUrl:String = "http://www.foo.com/rssdata.xml";
private var request:URLRequest = new URLRequest(xmlUrl);
private var loader:URLLoader = new URLLoader();
private var rssData:XML;

loader.addEventListener(Event.COMPLETE, completeHandler);
loader.load(request);

private function completeHandler(event:Event):void {
    rssData = XML(loader.data);
    trace(rssData);
}
```

Let's see one quick, complete sample of loading RSS data from the Internet:

```xml
<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" creationComplete="loadData();">
    <mx:Script>
        <![CDATA[
            import mx.collections.XMLListCollection;

            private var xmlUrl:String = "http://sessions.adobe.com/360FlexSJ2008/feed.xml";
            private var request:URLRequest = new URLRequest(xmlUrl);
            private var loader:URLLoader = new URLLoader(request);

            [Bindable]
            private var rssData:XML;

            private function loadData():void {
                loader.addEventListener(Event.COMPLETE, completeHandler);
                loader.load(request);
            }

            private function completeHandler(event:Event):void {
                rssData = new XML(loader.data);
            }
        ]]>
    </mx:Script>
    <mx:Panel title="RSS Feed Reader" width="100%" height="100%">
        <mx:DataGrid id="dgGrid" dataProvider="{rssData.channel.item}" height="100%" width="100%">
            <mx:columns>
                <mx:DataGridColumn headerText="Title" dataField="title"/>
                <mx:DataGridColumn headerText="Link" dataField="link"/>
                <mx:DataGridColumn headerText="pubDate" dataField="pubDate"/>
                <mx:DataGridColumn headerText="Description" dataField="description"/>
            </mx:columns>
        </mx:DataGrid>
        <mx:TextArea width="100%" height="80" text="{dgGrid.selectedItem.description}"/>
    </mx:Panel>
</mx:Application>
```

In the code above, we load an RSS feed from an external URL and display it in a DataGrid using data binding.

An example: Building a book explorer

In this section, we will build something more complicated and interesting using many features, including custom components, events, data binding, E4X, and loading external XML data. We will build a sample book explorer, which loads a book catalog from an external XML file and allows users to explore and view details of books. We will also build a simple shopping cart component that lists the books a user adds by clicking the Add to cart button.

Create a new Flex project using Flex Builder. Once the project is created, create an assets/images folder under its src folder; this folder will store the images used in this application. Now start creating the following source files in the source folder. Let's start with a simple book catalog XML file, bookscatalog.xml:

```xml
<books>
    <book ISBN="184719530X">
        <title>Building Websites with Joomla! 1.5</title>
        <author>
            <lastName>Hagen</lastName>
            <firstName>Graf</firstName>
        </author>
        <image>../assets/images/184719530X.png</image>
        <pageCount>363</pageCount>
        <price>Rs.1,247.40</price>
        <description>The best-selling Joomla! tutorial guide updated for the latest 1.5 release</description>
    </book>
    <book ISBN="1847196160">
        <title>Drupal 6 JavaScript and jQuery</title>
        <author>
            <lastName>Matt</lastName>
            <firstName>Butcher</firstName>
        </author>
        <image>../assets/images/1847196160.png</image>
        <pageCount>250</pageCount>
        <price>Rs.1,108.80</price>
        <description>Putting jQuery, AJAX, and JavaScript effects into your Drupal 6 modules and themes</description>
    </book>
    <book ISBN="184719494X">
        <title>Expert Python Programming</title>
        <author>
            <lastName>Tarek</lastName>
            <firstName>Ziadé</firstName>
        </author>
        <image>../assets/images/184719494X.png</image>
        <pageCount>350</pageCount>
        <price>Rs.1,247.4</price>
        <description>Best practices for designing, coding, and distributing your Python software</description>
    </book>
    <book ISBN="1847194885">
        <title>Joomla! Web Security</title>
        <author>
            <lastName>Tom</lastName>
            <firstName>Canavan</firstName>
        </author>
        <image>../assets/images/1847194885.png</image>
        <pageCount>248</pageCount>
        <price>Rs.1,108.80</price>
        <description>Secure your Joomla! website from common security threats with this easy-to-use guide</description>
    </book>
</books>
```

The XML file above contains the details of the individual books. You can also deploy this file on your web server and specify its URL in URLRequest when loading it.

Next, we will create a custom event that we will dispatch from our custom component. Make sure you create a package called events under your src folder in Flex Builder, and place this file in it.

AddToCartEvent.as:

```actionscript
package events {
    import flash.events.Event;

    public class AddToCartEvent extends Event {
        public static const ADD_TO_CART:String = "addToCart";

        // Holds the XML node of the selected book.
        public var book:Object;

        public function AddToCartEvent(type:String, bubbles:Boolean=false, cancelable:Boolean=false) {
            super(type, bubbles, cancelable);
        }
    }
}
```

This is a simple custom event created by inheriting from the flash.events.Event class. The class defines the ADD_TO_CART string constant, which will be used as the name of the event in the addEventListener() method; you will see this in the BooksExplorer.mxml code. We have also defined an object to hold a reference to the book the user adds to the shopping cart; in short, this object holds the XML node of the selected book.

Next, we will create the MXML custom component BookDetailItemRenderer.mxml. Make sure you create a package called components under your src folder in Flex Builder, place this file in it, and copy the following code into it:

```xml
<?xml version="1.0" encoding="utf-8"?>
<mx:HBox xmlns:mx="http://www.adobe.com/2006/mxml" cornerRadius="8"
        paddingBottom="2" paddingLeft="2" paddingRight="2" paddingTop="2">
    <mx:Metadata>
        [Event(name="addToCart", type="flash.events.Event")]
    </mx:Metadata>
    <mx:Script>
        <![CDATA[
            import events.AddToCartEvent;
            import mx.controls.Alert;

            [Bindable]
            [Embed(source="../assets/images/cart.gif")]
            public var cartImage:Class;

            private function addToCartEventDispatcher():void {
                var addToCartEvent:AddToCartEvent = new AddToCartEvent("addToCart", true, true);
                addToCartEvent.book = data;
                dispatchEvent(addToCartEvent);
            }
        ]]>
    </mx:Script>
    <mx:HBox width="100%" verticalAlign="middle" paddingBottom="2" paddingLeft="2"
            paddingRight="2" paddingTop="2" height="100%" borderStyle="solid"
            borderThickness="2" borderColor="#6E6B6B" cornerRadius="4">
        <mx:Image id="bookImage" source="{data.image}" height="109" width="78"
                maintainAspectRatio="false"/>
        <mx:VBox height="100%" width="100%" verticalGap="2" paddingBottom="0"
                paddingLeft="0" paddingRight="0" paddingTop="0" verticalAlign="middle">
            <mx:Label id="bookTitle" text="{data.title}" fontSize="12" fontWeight="bold"/>
            <mx:Label id="bookAuthor" text="By: {data.author.lastName}, {data.author.firstName}"
                    fontWeight="bold"/>
            <mx:Label id="coverPrice" text="Price: {data.price}" fontWeight="bold"/>
            <mx:Label id="pageCount" text="Pages: {data.pageCount}" fontWeight="bold"/>
            <mx:HBox width="100%" backgroundColor="#3A478D" horizontalAlign="right"
                    paddingBottom="0" paddingLeft="0" paddingRight="5" paddingTop="0"
                    height="22" verticalAlign="middle">
                <mx:Label text="Add to cart " color="#FFFFFF" fontWeight="bold"/>
                <mx:Button icon="{cartImage}" height="20" width="20"
                        click="addToCartEventDispatcher();"/>
            </mx:HBox>
        </mx:VBox>
    </mx:HBox>
</mx:HBox>
```
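The catalog-parsing side of the explorer is not Flex-specific. As a rough sketch (Python's ElementTree with a trimmed, hypothetical catalog in the same shape, not the Flex code itself), here is how the ISBN attributes and title elements the item renderer binds to can be extracted:

```python
import xml.etree.ElementTree as ET

# A trimmed, hypothetical bookscatalog.xml in the same shape as above.
catalog = ET.fromstring("""
<books>
  <book ISBN="184719530X">
    <title>Building Websites with Joomla! 1.5</title>
    <pageCount>363</pageCount>
  </book>
  <book ISBN="1847196160">
    <title>Drupal 6 JavaScript and jQuery</title>
    <pageCount>250</pageCount>
  </book>
</books>
""")

# Pull each book's ISBN attribute and title element, much as the
# item renderer binds {data.title} for each book node.
titles = {b.get("ISBN"): b.findtext("title") for b in catalog.findall("book")}
print(titles["1847196160"])  # Drupal 6 JavaScript and jQuery
```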
Understanding Business Activity Monitoring in Oracle SOA Suite

Packt
28 Oct 2009
14 min read
How BAM differs from traditional business intelligence

The Oracle SOA Suite stores the state of all processes in a database, in documented schemas, so why do we need yet another reporting tool to provide insight into our processes and services? In other words, how does BAM differ from traditional BI (Business Intelligence)?

In traditional BI, reports are generated and delivered either on a scheduled basis or in response to a user request. Any changes to the information will not be reflected until the next scheduled run or until a user requests the report to be rerun. BAM is an event-driven reporting tool that generates alerts and reports in real time, based on a continuously changing data stream, some of whose data may not be in the database. As events occur in the services and processes the business has defined, they are captured by BAM, and reports and views are updated in real time. Where necessary, these updated reports are delivered to users.

This delivery can take several forms. The best known is the dashboard on users' desktops that updates automatically without any need for the user to refresh the screen. There are also other means to deliver reports to the end user, including text message or email. Traditional reporting tools such as Oracle Reports and Oracle Discoverer, as well as Oracle's latest Business Intelligence Suite, can meet some real-time reporting needs, but they do not provide the event-driven reporting that gives the business a continuously updating view of the current business situation.

Event Driven Architecture

Event Driven Architecture (EDA) is about building business solutions around responsiveness to events. Events may be simple triggers, such as a stock-out, or more complex triggers, such as the calculations needed to realize that a stock-out will occur in three days.
An Event Driven Architecture will often take a number of simple events and combine them, through a complex event processing sequence, to generate complex events that could not have been raised without aggregating several simpler events.

Oracle BAM scenarios

Oracle Business Activity Monitoring is typically used to monitor two distinct types of real-time data. Firstly, it may be used to monitor the overall state of processes in the business; for example, to track how many auctions are currently running, how many have bids on them, and how many have completed in the last 24 hours (or other time periods). Secondly, it may be used to track Key Performance Indicators, or KPIs, in real time; for example, to provide a continuously updating dashboard showing a seller the current total value of all the seller's auctions, tracked against an expected target.

In the first case, we are interested in how business processes are progressing, and we use BAM to identify bottlenecks and failure points within those processes. Bottlenecks show up as too much time being spent on given steps in the process. BAM allows us to compute the time taken between two points in a process, such as the time between order placement and shipping, and provide real-time feedback on those times. Similarly, BAM can track the percentage drop-out rate between steps in a sales process, allowing the business to take appropriate action.

In the second case, our interest is in some aggregate number, such as our total liability should we win all the auctions we are bidding on. This requires us to aggregate results from many events, possibly performing some kind of calculation on them, to produce a single KPI that indicates to the business how things are going. BAM allows us to continuously update this number in real time on a dashboard without the need for continual polling.
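The KPI-style aggregation just described can be sketched outside BAM. The following is a minimal, hypothetical illustration (plain Python, not Oracle BAM's API) of folding a stream of bid events into one continuously updated KPI, our total liability at our latest bids:

```python
# Hypothetical bid events streaming in from auctions we are bidding on.
events = [
    {"auction": "A1", "our_bid": 100},
    {"auction": "A2", "our_bid": 250},
    {"auction": "A1", "our_bid": 120},  # raises our earlier bid on A1
]

# KPI: total liability if we won every auction at our latest bid.
latest_bid = {}
for e in events:                      # each event updates the KPI incrementally
    latest_bid[e["auction"]] = e["our_bid"]
    kpi = sum(latest_bid.values())
    # A dashboard would be pushed this value after every event;
    # an alert could fire whenever it crosses a threshold.

print(kpi)  # 370
```

The point of the sketch is that the KPI is recomputed per event rather than on a polling schedule, which is exactly the event-driven behaviour that distinguishes BAM from traditional BI.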
It also allows us to trigger alerts, perhaps through email or SMS, to notify an individual when a threshold is breached. In both cases, the reports delivered can be customized based on the individual receiving them.

BAM architecture

It may seem odd to have a section on architecture in the middle of an article about how to use BAM effectively, but key to successful utilization of BAM is an understanding of how its different tiers relate to each other.

Logical view

The following diagram represents a logical view of how BAM operates. Events are acquired from one or more sources through event acquisition and then normalized, correlated, and stored in event storage (generally a memory area in BAM that is backed up to disc). The report cache generates reports based on events in storage and then delivers those reports, together with real-time updates, through the report delivery layer. Event processing is also performed on events in storage; when defined conditions are met, alerts are delivered through the alert delivery service.

Physical view

To better understand the physical architecture of BAM, we have divided this section into four parts. Let us discuss these in detail.

Capture

The logical view maps onto the physical BAM components shown in the following diagram. Data acquisition in the SOA Suite is handled by sensors in BPEL and ESB. BAM can also receive events from JMS message queues and access data in databases (useful for historical comparison). For complex data formats or other data sources, Oracle recommends Oracle Data Integrator (ODI, a separate product from the SOA Suite). Although potentially less efficient and more work than running ODI, it is also possible to use adapters to acquire data from multiple sources and feed it into BAM through ESB or BPEL. At the data-capture level, we need to think about the data items we can provide to feed the reports and alerts we want to generate.
We must consider the sources of that data and the best way to load it into BAM.

Store

Once captured, data is stored in normalized form in the Active Data Cache (ADC). This storage facility can do simple correlation based on fields within the data, and multiple data items received from the acquisition layer may update just a single object in the data cache. For example, the state of a given BPEL process instance may be represented by a single object in the ADC, and all updates to that process state update that single data item rather than creating multiple data items.

Process

Reports are run on user demand. Once a report is run, it updates the user's screen in real time. Where multiple users access the same report, only one instance of the report is maintained by the report server. As events are captured and stored in real time, the report engine continuously monitors them for any changes that need to be made to currently active reports. When changes are detected that impact active reports, the appropriate report is updated in memory and the updates are sent to the user's screen.

In addition to the event processing required to correctly insert and update items in the ADC, there is also a requirement to monitor items in the ADC for events that require some action to be taken. This is the job of the event processor, which monitors data in the ADC to see whether registered thresholds on values have been exceeded or certain time-outs have expired. The event processor will often need to perform calculations across multiple data items to do this.

Deliver

Delivery of reports takes place in two ways. First, users request reports to be delivered to their desktop by selecting views within BAM. These reports are delivered as HTML pages within a browser and are updated whenever the underlying data used in the report changes.
The second approach is that reports are sent out as a result of events triggered by the Event Processing Engine. In this case, the report may be delivered by email, SMS, or voice messaging using the notifications service. A final option for these event-generated reports is to invoke a web service to take some automated action.

Closing the loop

While monitoring what is happening is all very laudable, it is only of benefit if we actually do something about what we are monitoring. BAM provides real-time monitoring very well, but it also provides the facility to invoke other services to respond to undesirable events such as stock-outs. The ability to invoke external services is crucial to the concept of a closed-loop control environment, where, as a result of monitoring, we can reach back into the processes and either alter their execution or start new ones. For example, when a stock-out or low-stock event is raised, the message centre could invoke a web service requesting a supplier to send more stock to replenish inventory. Placing this kind of feedback mechanism in BAM allows us to trigger events across multiple applications and locations in a way that may not be possible within a single application or process. For example, in response to a stock-out, instead of requesting more stock from our supplier, we may be monitoring stock levels in independent systems and, based on stock levels elsewhere, redirect stock from one location to another.

BAM platform anomaly

In the 10g SOA Suite, BAM runs only as a Windows application. Unlike the rest of the SOA Suite, it does not run on a JEE application server, and it can only run on the Windows platform. In the next release, 11g, BAM will be provided as a JEE application that can run on a number of application servers and operating systems.

User interface

Development in Oracle BAM is done through a web-based user interface.
This user interface gives access to four applications that allow you to interact with different parts of BAM. These are:

- Active Viewer, for giving access to reports; this relates to the deliver stage for user-requested reports.
- Active Studio, for building reports; this relates to the process stage for creating reports.
- Architect, for setting up both inbound and outbound events. Data elements are defined here as data sources, and alerts are also configured here. This covers the acquire and store stages, as well as the deliver stage for alerts.
- Administrator, for managing users and roles as well as defining the types of message sources.

We will not examine the applications individually but will take a task-focused look at how to use them as part of providing some specific reports.

Monitoring process state

Now that we have examined how BAM is constructed, let us use this knowledge to construct some simple dashboards that track the state of a business process. We will instrument a simple version of an auction process, shown in the following figure. An auction is started, and then bids are placed until the time runs out, at which point the auction is completed. This is modelled in BPEL. The process has three distinct states:

- Started
- Bid received
- Completed

We are interested in the number of auctions in each state as well as the total value of auctions in progress. To build the dashboard, one needs to follow these steps:

1. Define our data within the Active Data Cache.
2. Create sensors in BPEL and map them to data in the ADC.
3. Create suitable reports.
4. Run the reports.

Defining data objects

Data in BAM is stored in data objects. Individual data objects contain the information that is reported in BAM dashboards and may be updated by multiple events. Generally BAM reports against aggregations of objects, but reports can also drill down into individual data objects.
Before defining our data objects, let's group them into an Auction folder so they are easy to find. To do this, we use the BAM Architect application and select Data Objects, which gives us the following screen. We select Create subfolder, give the folder the name Auction, and then select Create folder to actually create it; a confirmation message tells us the folder was created. Notice that, once created, the folder also appears in the Folders window on the left-hand side of the screen.

Now that we have our folder, we can create a data object. Again we select Data Objects from the drop-down menu. To define the data objects to be stored in our Active Data Cache, we open the Auction folder, if it is not already open, and select Create Data Object. If we don't select the Auction folder now, we can pick it later when filling in the details of the data object. We need to give our object a unique name within the folder and, optionally, provide tool-tip text that helps explain what the object does when the mouse is moved over it in object listings.

Having named our object, we can create the data fields by selecting Add a field. When adding fields, we need to provide a name and type, as well as indicating whether they must contain data; the default, Nullable, does not require a field to be populated. We may also optionally indicate whether a field should be public ("available for display") and what, if any, tool-tip text it should have. Once all the data fields have been defined, we click Create Data Object to create the object as we have defined it. We are then presented with a confirmation screen that the object has been created.

Grouping data into hierarchies

When creating a data object, it is possible to specify Dimensions for the object. A dimension is based on one or more fields within the object, and a given field can only participate in one dimension. This gives the ability to group the object by the fields in the given dimension.
If multiple fields are selected for a single dimension, they can be layered into a hierarchy, for example to allow analysis by country, region, and city. In this case, all three elements would be selected into a single dimension, perhaps called geography. Within geography, a hierarchy could be set up with country at the top, region next, and city at the bottom, allowing drill-down to occur in views. Just as a data object can have multiple dimensions, a dimension can also have multiple hierarchies.

A digression on populating data object fields

In the previous discussion, we mentioned the Nullable attribute that can be attached to fields. This is very important, as we do not expect to populate all, or even most, of the fields in a data object at one moment in time. Do not confuse data objects with the low-level events that are used to populate them: they do not have a one-to-one correspondence. In our auction example there will be just one auction object for every auction, but there will be at least two, and usually more, messages for every auction: one message for the auction starting, another for the auction completing, and additional messages for each bid received. These messages all populate, or in some cases overwrite, different parts of the auction data object. The following table shows how the three messages populate different parts of the data object:

| Message          | Auction ID | State    | Highest bid | Reserve  | Expires  | Seller   | Highest bidder |
|------------------|------------|----------|-------------|----------|----------|----------|----------------|
| Auction Started  | Inserted   | Inserted | Inserted    | Inserted | Inserted | Inserted |                |
| Bid Received     |            | Updated  | Updated     |          |          |          | Updated        |
| Auction Finished |            | Updated  |             |          |          |          |                |

Summary

In this article we have explored how Business Activity Monitoring differs from, and is complementary to, more traditional Business Intelligence solutions such as Oracle Reports and Business Objects.
We have explored how BAM can allow the business to monitor the state of business targets and Key Performance Indicators, such as the current most popular products in a retail environment or the current time taken to serve customers in a service environment.
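Returning to the message-to-object population described earlier: the idea that many low-level messages update slices of a single data object can be sketched in a few lines. This is a hypothetical illustration in Python (not Oracle's ADC API):

```python
# One data object per auction; each message updates a slice of it.
auctions = {}

def apply_message(msg):
    obj = auctions.setdefault(msg["auction_id"], {})
    for field, value in msg.items():
        if field != "auction_id":
            obj[field] = value          # insert or overwrite just these fields
    return obj

apply_message({"auction_id": 7, "state": "Started", "highest_bid": 0,
               "reserve": 100, "seller": "alice"})
apply_message({"auction_id": 7, "state": "BidReceived", "highest_bid": 150,
               "highest_bidder": "bob"})
apply_message({"auction_id": 7, "state": "Completed"})

# Three messages, but still exactly one auction object.
print(len(auctions), auctions[7]["state"], auctions[7]["highest_bid"])  # 1 Completed 150
```

Fields not carried by a given message (Nullable, in BAM terms) are simply left untouched, which is why the final object still knows the seller set by the first message.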
Working with XML in Flex 3 and Java-part1

Packt
28 Oct 2009
10 min read
In today's world, many server-side applications use XML to structure data because XML is a standard way of representing structured information. It is easy to work with, and people can read, write, and understand XML without any specialized skills. The XML standard is widely accepted and used in server communications such as Simple Object Access Protocol (SOAP) based web services. XML stands for eXtensible Markup Language; the XML standard specification is available at http://www.w3.org/XML/.

Adobe Flex provides a standardized, ECMAScript-based set of API classes and functionality for working with XML data, known collectively as E4X. You can use these classes to build sophisticated Rich Internet Applications with XML data.

XML basics

XML is a standard way to represent categorized data in a tree structure, similar to HTML documents. XML is written in plain-text format, and hence it is very easy to read, write, and manipulate. A typical XML document looks like this:

```xml
<book>
    <title>Flex 3 with Java</title>
    <author>Satish Kore</author>
    <publisher>Packt Publishing</publisher>
    <pages>300</pages>
</book>
```

Generally, XML data is known as an XML document, and it is represented by tags wrapped in angle brackets (< >). These tags are also known as XML elements. Every XML document starts with a single top-level element known as the root element. Each element is delimited by a pair of tags known as the opening tag and the closing tag. In the document above, <book> is the opening tag and </book> is the closing tag. If an element contains no content, it can be written as an empty element (also called self-closing): <book/> is equivalent to writing <book></book>.
XML documents can also be more complex, with nested tags and attributes, as shown in the following example:

<book ISBN="978-1-847195-34-0">
    <title>Flex 3 with Java</title>
    <author country="India" numberOfBooks="1">
        <firstName>Satish</firstName>
        <lastName>Kore</lastName>
    </author>
    <publisher country="United Kingdom">Packt Publishing</publisher>
    <pages>300</pages>
</book>

Notice that the above XML document contains nested tags such as <firstName> and <lastName> under the <author> tag. ISBN, country, and numberOfBooks, which you can see inside the tags, are called XML attributes. To learn more about XML, visit the W3Schools' XML Tutorial at http://w3schools.com/xml/.

Understanding E4X

Flex provides a set of API classes and functionality based on the ECMAScript for XML (E4X) standard for working with XML data. The E4X approach provides a simple and straightforward way to work with XML structured data, and it also reduces the complexity of parsing XML documents. Earlier versions of Flex did not have a direct way of working with XML data. E4X provides an alternative to the DOM (Document Object Model) interface that uses a simpler syntax for reading and querying XML documents. More information about other E4X implementations can be found at http://en.wikipedia.org/wiki/E4X.

The key features of E4X include:

- It is based on a standard scripting language specification known as ECMAScript for XML. Flex implements this specification in the form of API classes and functionality that simplify XML data processing.
- It provides easy and well-known operators, such as the dot (.) and @, to work with XML objects. The @ and dot (.) operators can be used not only to read data, but also to assign data to XML nodes, attributes, and so on.
- The E4X functionality is much easier and more intuitive than working with DOM documents to access XML data.

ActionScript 3.0 includes the following E4X classes: XML, XMLList, QName, and Namespace.
These classes are designed to simplify XML data processing in Flex applications. Let's see one quick example. Define a variable of type XML and create a sample XML document. In this example, we will assign it as a literal. However, in the real world, your application might load XML data from external sources, such as a web service or an RSS feed.

private var myBooks:XML =
    <books publisher="Packt Pub">
        <book title="Book1" price="99.99">
            <author>Author1</author>
        </book>
        <book title="Book2" price="59.99">
            <author>Author2</author>
        </book>
        <book title="Book3" price="49.99">
            <author>Author3</author>
        </book>
    </books>;

Now, we will see some of the E4X approaches to read and parse the above XML in our application. E4X uses many operators to simplify access to XML nodes and attributes, such as the dot (.) and the attribute identifier (@), for accessing properties and attributes.

private function traceXML():void {
    trace(myBooks.book.(@price < 50.99).@title); //Output: Book3
    trace(myBooks.book[1].author); //Output: Author2
    trace(myBooks.@publisher); //Output: Packt Pub
    //The following for loop outputs the prices of all books
    for each(var price in myBooks..@price) {
        trace(price);
    }
}

In the code above, the first trace statement uses a conditional expression to extract the title of the book(s) whose price is below 50.99$. If we had to do this manually, imagine how much code would be needed to parse the XML. In the second trace, we access a book node by index and print its author node's value. In the third trace, we simply print the root node's publisher attribute value. Finally, we use a for loop to traverse the prices of all the books and print each price.

The following is a list of XML operators:

@ (attribute identifier): Identifies attributes of an XML or XMLList object.
{ } (braces, XML): Evaluates an expression that is used in an XML or XMLList initializer.

[ ] (brackets, XML): Accesses a property or attribute of an XML or XMLList object, for example myBooks.book["@title"].

+ (concatenation, XMLList): Concatenates (combines) XML or XMLList values into an XMLList object.

+= (concatenation assignment, XMLList): Assigns expression1

The XML object

The XML class represents an XML element, attribute, comment, processing instruction, or text element. We used the XML class in the example above to initialize the myBooks variable with an XML literal. The XML class is an ActionScript 3.0 core class, so you don't need to import a package to use it. The XML class provides many properties and methods to simplify XML processing, such as the ignoreWhitespace and ignoreComments properties, used for ignoring whitespace and comments in XML documents respectively. You can use the prependChild() and appendChild() methods to prepend and append XML nodes to existing XML documents. Methods such as toString() and toXMLString() allow you to convert XML to a string.

An example of an XML object:

private var myBooks:XML =
    <books publisher="Packt Pub">
        <book title="Book1" price="99.99">
            <author>Author1</author>
        </book>
        <book title="Book2" price="120.00">
            <author>Author2</author>
        </book>
    </books>;

In the above example, we created an XML object by assigning an XML literal to it. You can also create an XML object from a string that contains XML data, as shown in the following example:

private var str:String = "<books publisher='Packt Pub'>" +
    "<book title='Book1' price='99.99'><author>Author1</author></book>" +
    "<book title='Book2' price='59.99'><author>Author2</author></book>" +
    "</books>";
private var myBooks:XML = new XML(str);
trace(myBooks.toXMLString()); //outputs formatted XML as a string

If the XML data in the string is not well-formed (for example, a closing tag is missing), you will see a runtime error.
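E4X's selection operators have no direct analogue outside ActionScript, but the same queries from the earlier traceXML() example can be expressed against the books document with ordinary XML tooling. Purely as an illustration (this is not part of the article's Flex code), here is a Python sketch using the standard library that reproduces those trace outputs:

```python
import xml.etree.ElementTree as ET

my_books = ET.fromstring("""
<books publisher="Packt Pub">
    <book title="Book1" price="99.99"><author>Author1</author></book>
    <book title="Book2" price="59.99"><author>Author2</author></book>
    <book title="Book3" price="49.99"><author>Author3</author></book>
</books>""")

# myBooks.book.(@price < 50.99).@title -> titles of books cheaper than 50.99
cheap_titles = [b.get("title") for b in my_books.findall("book")
                if float(b.get("price")) < 50.99]
print(cheap_titles)                                      # ['Book3']

# myBooks.book[1].author -> author of the second book
print(my_books.findall("book")[1].findtext("author"))    # Author2

# myBooks.@publisher -> attribute of the root node
print(my_books.get("publisher"))                         # Packt Pub

# myBooks..@price -> every price attribute, at any depth
print([b.get("price") for b in my_books.iter("book")])   # ['99.99', '59.99', '49.99']
```

The point of the comparison is how much of the filtering that E4X does in a single expression must be spelled out explicitly with a general-purpose XML API.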
You can also use binding expressions in the XML text to extract contents from a variable. For example, you could bind a node's title attribute to a variable value, as in the following lines:

private var title:String = "Book1";
var aBook:XML = <book title={title}/>;

To read more about the XML class's methods and properties, go through the Flex 3 LiveDocs at http://livedocs.adobe.com/flex/3/langref/XML.html.

The XMLList object

As the class name indicates, an XMLList contains one or more XML objects. It can contain full XML documents, XML fragments, or the results of an XML query. You can typically use all of the XML class's methods and properties on the objects from an XMLList. To access these objects from the XMLList collection, iterate over it using a for each… statement. XMLList provides you with the following methods to work with its objects:

- child(): Returns a specified child of every XML object
- children(): Returns specified children of every XML object
- descendants(): Returns all descendants of an XML object
- elements(): Calls the elements() method of each XML object in the XMLList and returns all elements of the XML object
- parent(): Returns the parent of the XMLList object if all items in the XMLList object have the same parent
- attribute(attributeName): Calls the attribute() method of each XML object and returns an XMLList object of the results. The results match the given attributeName parameter
- attributes(): Calls the attributes() method of each XML object and returns an XMLList object of attributes for each XML object
- contains(): Checks if the specified XML object is present in the XMLList
- copy(): Returns a copy of the given XMLList object
- length(): Returns the number of properties in the XMLList object
- valueOf(): Returns the XMLList object

For details on these methods, see the ActionScript 3.0 Language Reference.
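To make the roles of a few of these methods concrete, the following Python sketch (again only an illustration with the standard library, not ActionScript) mimics what child(), attributes(), and length() report for a list of book nodes like the one used earlier:

```python
import xml.etree.ElementTree as ET

books = ET.fromstring(
    '<books publisher="Packt Pub">'
    '<book title="Book1" price="99.99"><author>Author1</author></book>'
    '<book title="Book2" price="59.99"><author>Author2</author></book>'
    '</books>')

book_list = books.findall("book")   # like myBooks.book, an XMLList of two items
print(len(book_list))               # length(): 2

# child("author"): the author child of every book in the list
print([b.findtext("author") for b in book_list])   # ['Author1', 'Author2']

# attributes(): the attributes of each XML object in the list
print([b.attrib for b in book_list])
```

Each XMLList method applies its XML counterpart to every item in the collection, which is what the list comprehensions above simulate explicitly.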
Let's return to the example of the XMLList:

var xmlList:XMLList = myBooks.book.(@price == 99.99);
var item:XML;
for each(item in xmlList) {
    trace("item:" + item.toXMLString());
}

Output:

item:<book title="Book1" price="99.99">
    <author>Author1</author>
</book>

In the example above, we used an XMLList to store the result of the myBooks.book.(@price == 99.99); statement. This statement returns an XMLList containing the XML node(s) whose price is 99.99$.

Working with XML objects

The XML class provides many useful methods to work with XML objects, such as the appendChild() and prependChild() methods to add an XML element to the end or the beginning of an XML object, as shown in the following example:

var node1:XML = <middleInitial>B</middleInitial>;
var node2:XML = <lastName>Kore</lastName>;
var root:XML = <personalInfo></personalInfo>;
root = root.appendChild(node1);
root = root.appendChild(node2);
root = root.prependChild(<firstName>Satish</firstName>);

The output is as follows:

<personalInfo>
    <firstName>Satish</firstName>
    <middleInitial>B</middleInitial>
    <lastName>Kore</lastName>
</personalInfo>

You can use the insertChildBefore() or insertChildAfter() method to add a property before or after a specified property, as shown in the following example:

var x:XML = <count>
        <one>1</one>
        <three>3</three>
        <four>4</four>
    </count>;
x = x.insertChildBefore(x.three, <two>2</two>);
x = x.insertChildAfter(x.four, <five>5</five>);
trace(x.toXMLString());

The output of the above code is as follows:

<count>
    <one>1</one>
    <two>2</two>
    <three>3</three>
    <four>4</four>
    <five>5</five>
</count>
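The same "insert at a known position" idea can be reproduced outside ActionScript. Purely as an illustration, this Python sketch inserts elements before and after existing children using the standard library:

```python
import xml.etree.ElementTree as ET

x = ET.fromstring("<count><one>1</one><three>3</three><four>4</four></count>")

# insertChildBefore(x.three, <two>2</two>): insert before the <three> element
x.insert(list(x).index(x.find("three")), ET.fromstring("<two>2</two>"))
# insertChildAfter(x.four, <five>5</five>): insert after the <four> element
x.insert(list(x).index(x.find("four")) + 1, ET.fromstring("<five>5</five>"))

print([child.tag for child in x])   # ['one', 'two', 'three', 'four', 'five']
```

Note that E4X locates the reference node by name (x.three), whereas a generic API makes the position lookup explicit.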

Packt
28 Oct 2009
6 min read

Using Business Rules to Define Decision Points in Oracle SOA Suite: Part 2

To invoke a rule we need to go through a number of steps. First we must create a session with the rules engine; then we can assert one or more facts, before executing the rule set; and finally we can retrieve the results. We do this in BPEL via a Decision Service; this is essentially a web service wrapper around a rules dictionary, which takes care of managing the session with the rules engine as well as governing which rule set we wish to apply. The wrapper allows a BPEL process to assert one or more facts, execute a rule set against the asserted facts, retrieve the results, and then reset the session. This can be done within a single invocation of an operation, or over multiple operations.

Creating a Rule Engine Connection

Before you can create a Decision Service, you need to create a connection to the repository in which the required rule set is stored. In the Connections panel within JDeveloper, right-click on the Rule Engines folder and select New Rule Engine Connection… as shown in the following screenshot:

This will launch the Create Rule Engine Connection dialogue; first you need to specify whether the connection is for a file repository or a WebDAV repository.

Using a file based repository

If you are using a file repository, all we need to specify is the location of the actual file. Once the connection has been created, we can use it to create a decision service for any of the rule sets contained within that repository. However, it is important to realize that when you create a decision service based on this connection, JDeveloper will take a copy of the repository and copy it into the BPEL project. When you deploy the BPEL process, the copy of this repository will be deployed with it. This has a number of implications. First, if you want to modify the rule set used by the BPEL process, you need to modify the copy of the repository deployed with the BPEL process.
To modify the rule set deployed with a BPEL process, log onto the BPEL console; from here click on the BPEL Processes tab, and then select the process that uses the decision service. Next click on the Descriptor tab; this will list all the Partner Links for that process, including the Decision Service (for example LeaveApprovalDecisionServicePL) as shown in the following screenshot:

This PartnerLink will have the property decisionServiceDetails, with the link Rule Service Details (circled in the previous screenshot); click on this and the console will display details of the decision service. From here click on the link Open Rule Author; this will open the Rule Author complete with a connection to the file based rule repository.

The second implication is that if you use the same rule set within multiple BPEL processes, each process will have its own copy of the rule set. You can work around this either by wrapping the rule set in a single BPEL process, which is then invoked by any other process wishing to use that rule set, or, once you have deployed the rule set for one process, by accessing it directly via the WSDL for the deployed rule set, for example LeaveApprovalDecisionService.wsdl in the above screenshot.

Using a WebDAV repository

For the reasons mentioned above, it often makes sense to use a WebDAV based repository to hold your rules. This makes it far simpler to share a rule set between multiple clients, such as BPEL and Java. Before you can create a Rule Engine Connection to a WebDAV repository, you must first define a WebDAV connection in JDeveloper, which is also created from the Connections palette.
Creating a Decision Service

To create a decision service within our BPEL process, select the Services page from the Component Palette and drag a Decision Service onto your process, as shown in the following screenshot:

This will launch the Decision Service Wizard dialogue, as shown. Give the service a name, and then select Execute Ruleset as the invocation pattern. Next click on the flashlight next to Ruleset to launch the Rule Explorer. This allows us to browse any previously defined rule engine connection and select the rule set we wish to invoke via the decision service. For our purposes, select the LeaveApprovalRules as shown below, and click OK.

This will bring us back to the Decision Service Wizard, which will be updated to list the facts that we can exchange with the rule engine, as shown in the following screenshot:

This dialogue will only list XML Facts that map to global elements in the XML Schema. Here we need to define which facts we want to assert, that is, which facts we pass as inputs from BPEL to the rules engine, and which facts we want to watch, that is, which facts we want returned in the output from the rules engine back to our BPEL process.

For our example, we will pass in a single leave request. The rule engine will then apply the rule set we defined earlier and update the status of the request to Approved if appropriate. So we need to specify that we Assert and Watch facts of type LeaveRequest.

Finally, you will notice the checkbox Check here to assert all descendants from the top level element; this is important when an element contains nested elements (or facts), to ensure that nested facts are also evaluated by the rules engine. For example, if we had a fact of type LeaveRequestList which contained a list of multiple LeaveRequests, and we wanted to ensure that the rules engine evaluated these nested facts, then we would need to check this checkbox.
Once you have specified the facts to Assert and Watch, click Next and complete the dialogue; this will create a decision service partner link within your BPEL process.

Adding a Decide activity

We are now ready to invoke our rule set from within our BPEL process. From the Component Palette, drag a Decide activity onto our BPEL process (at the point before we execute the LeaveRequest Human Task). This will open up the Edit Decide window (shown in the following screenshot). Here we need to specify a Name for the activity, and select the Decision Service we want to invoke (that is, the LeaveApprovalDecisionService that we just created).

Once we've specified the service, we need to specify how we want to interact with it; for example, whether we want to incrementally assert a number of facts over a period of time before executing the rule set and retrieving the result, or whether we want to assert all the facts, execute the rule set, and get the result within a single invocation. We specify this through the Operation attribute. For our purpose we just need to assert a single fact and run the rule set, so select the value Assert facts, execute rule set, retrieve results.

Once we have selected the operation to invoke on the decision service, the Decision Service Facts will be updated to allow you to assign input and output facts as appropriate.
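The session lifecycle that the Decision Service wraps (assert facts, execute the rule set, retrieve results, reset) can be pictured as a small stateful wrapper. The sketch below is a hypothetical Python illustration of that call pattern only; it is not the Oracle API, and the class, method, and field names are invented for the example:

```python
class DecisionServiceSession:
    """Hypothetical illustration of the assert/execute/retrieve/reset pattern."""

    def __init__(self):
        self.facts = []      # facts asserted so far in this session
        self.results = []    # facts produced or updated by the last execution

    def assert_fact(self, fact):
        self.facts.append(fact)

    def execute(self, ruleset):
        # Apply the rule set to every asserted fact and keep the outcomes.
        self.results = [ruleset(fact) for fact in self.facts]
        return self.results

    def reset(self):
        self.facts, self.results = [], []

# A stand-in "rule set": approve one-day vacation requests.
def leave_approval(request):
    if request["type"] == "Vacation" and request["days"] == 1:
        request["status"] = "Approved"
    return request

session = DecisionServiceSession()
session.assert_fact({"type": "Vacation", "days": 1, "status": "Pending"})
print(session.execute(leave_approval)[0]["status"])   # Approved
```

The Assert facts, execute rule set, retrieve results operation chosen above effectively performs all three calls (and the reset) in one round trip, rather than spreading them over multiple operations.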

Packt
28 Oct 2009
8 min read

Designing your very own ASP.NET MVC Application

When downloading and installing the ASP.NET MVC framework SDK, a new project template is installed in Visual Studio—the ASP.NET MVC project template. This article by Maarten Balliauw describes how to use this template. We will briefly touch on all aspects of ASP.NET MVC by creating a new ASP.NET MVC web application based on this Visual Studio template. Besides the view, controller, and model, new concepts are also illustrated in this article: ViewData—a means of transferring data between controller and view; routing—the link between a web browser URL and a specific action method inside a controller; and unit testing of a controller.

Creating a new ASP.NET MVC web application project

Before we start creating an ASP.NET MVC web application, make sure that you have installed the ASP.NET MVC framework SDK from http://www.asp.net/mvc. After installation, open Visual Studio 2008 and select menu option File | New | Project. The following screenshot will be displayed. Make sure that you select .NET Framework 3.5 as the target framework. You will notice a new project template called ASP.NET MVC Web Application. This project template creates the default project structure for an ASP.NET MVC application.

After clicking on OK, Visual Studio will ask you if you want to create a test project. This dialog offers the choice between several unit testing frameworks that can be used for testing your ASP.NET MVC application. You can decide for yourself if you want to create a unit testing project right now—you can also add a testing project later on. Letting the ASP.NET MVC project template create a test project now is convenient because it creates all of the project references and contains an example unit test, although this is not required. For this example, continue by adding the default unit test project.

What's inside the box?

After the ASP.NET MVC project has been created, you will notice a default folder structure.
There's a Controllers folder, a Models folder, and a Views folder, as well as a Content folder and a Scripts folder. ASP.NET MVC comes with the convention that these folders (and namespaces) are used for locating the different building blocks of an ASP.NET MVC application. The Controllers folder obviously contains all of the controller classes; the Models folder contains the model classes; while the Views folder contains the view pages. Content will typically contain web site content such as images and stylesheet files, and Scripts will contain all of the JavaScript files used by the web application. By default, the Scripts folder contains some JavaScript files required for the use of Microsoft AJAX or jQuery.

Locating the different building blocks is done in the request life cycle. One of the first steps in the ASP.NET MVC request life cycle is mapping the requested URL to the correct controller action method. This process is referred to as routing. A default route is initialized in the Global.asax file and describes to the ASP.NET MVC framework how to handle a request. Double-clicking on the Global.asax file in the MvcApplication1 project will display the following code:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Mvc;
using System.Web.Routing;

namespace MvcApplication1
{
    public class GlobalApplication : System.Web.HttpApplication
    {
        public static void RegisterRoutes(RouteCollection routes)
        {
            routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

            routes.MapRoute(
                "Default",                    // Route name
                "{controller}/{action}/{id}", // URL with parameters
                new { controller = "Home", action = "Index", id = "" } // Parameter defaults
            );
        }

        protected void Application_Start()
        {
            RegisterRoutes(RouteTable.Routes);
        }
    }
}

In the Application_Start() event handler, which is fired whenever the application is compiled or the web server is restarted, a route table is registered.
The default route is named Default, and responds to a URL in the form of http://www.example.com/{controller}/{action}/{id}. The variables between { and } are populated with actual values from the request URL, or with the default values if no override is present in the URL. This default route will map to the Home controller and to the Index action method, according to the default routing parameters. We won't have any other action with this routing map. By default, all possible URLs can be mapped through this default route.

It is also possible to create our own routes. For example, let's map the URL http://www.example.com/Employee/Maarten to the Employee controller, the Show action, and the firstname parameter. The following code snippet can be inserted in the Global.asax file we've just opened. Because the ASP.NET MVC framework uses the first matching route, this code snippet should be inserted above the default route; otherwise the route will never be used.

routes.MapRoute(
    "EmployeeShow",         // Route name
    "Employee/{firstname}", // URL with parameters
    new {                   // Parameter defaults
        controller = "Employee",
        action = "Show",
        firstname = ""
    }
);

Now, let's add the necessary components for this route. First of all, create a class named EmployeeController in the Controllers folder. You can do this by adding a new item to the project and selecting the MVC Controller Class template located under the Web | MVC category. Remove the Index action method, and replace it with a method or action named Show. This method accepts a firstname parameter and passes the data into the ViewData dictionary. This dictionary will be used by the view to display data. The EmployeeController class will pass an Employee object to the view. This Employee class should be added in the Models folder (right-click on this folder and then select Add | Class from the context menu).
Here's the code for the Employee class:

namespace MvcApplication1.Models
{
    public class Employee
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public string Email { get; set; }
    }
}

After adding the EmployeeController and Employee classes, the ASP.NET MVC project now appears as shown in the following screenshot. The EmployeeController class now looks like this:

using System.Web.Mvc;
using MvcApplication1.Models;

namespace MvcApplication1.Controllers
{
    public class EmployeeController : Controller
    {
        public ActionResult Show(string firstname)
        {
            if (string.IsNullOrEmpty(firstname))
            {
                ViewData["ErrorMessage"] = "No firstname provided!";
            }
            else
            {
                Employee employee = new Employee
                {
                    FirstName = firstname,
                    LastName = "Example",
                    Email = firstname + "@example.com"
                };
                ViewData["FirstName"] = employee.FirstName;
                ViewData["LastName"] = employee.LastName;
                ViewData["Email"] = employee.Email;
            }
            return View();
        }
    }
}

The action method we've just created can be requested by a user via a URL—in this case, something similar to http://www.example.com/Employee/Maarten. This URL is mapped to the action method by the route we've created before. By default, any public action method (that is, a method in a controller class) can be requested using the default routing scheme. If you want to prevent a method from being requested, simply make it private or protected, or, if it has to be public, add a [NonAction] attribute to the method.

Note that we are returning an ActionResult (created by the View() method), which can be a view-rendering command, a page redirect, a JSON result, a string, or any other custom class implementation inheriting from ActionResult that you want to return. Returning an ActionResult is not strictly necessary: the controller can write content directly to the response stream if required, but this would break the MVC pattern—the controller should never be responsible for the actual content of the response that is being returned.
Next, create a Show.aspx page in the Views | Employee folder. You can create a view by adding a new item to the project and selecting the MVC View Content Page template, located under the Web | MVC category, as we want this view to render in a master page (located in Views | Shared). There is an alternative way to create a view related to an action method, which will be covered later in this article. In the view, you can display employee information or display an error message if an employee is not found. Add the following code to the Show.aspx page:

<%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master"
    AutoEventWireup="true" Inherits="System.Web.Mvc.ViewPage" %>
<asp:Content ID="Content1" ContentPlaceHolderID="MainContent" runat="server">
    <% if (ViewData["ErrorMessage"] != null) { %>
        <h1><%=ViewData["ErrorMessage"]%></h1>
    <% } else { %>
        <h1><%=ViewData["FirstName"]%> <%=ViewData["LastName"]%></h1>
        <p>
            E-mail: <%=ViewData["Email"]%>
        </p>
    <% } %>
</asp:Content>

If the ViewData set by the controller contains an ErrorMessage, the ErrorMessage is displayed on the resulting web page. Otherwise, the employee details are displayed. Press the F5 key on your keyboard to start the development web server. Alter the URL in your browser to something ending in /Employee/Your_Name_Here, and see the action method and the view we've just created in action.
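Because the framework tries routes in registration order, each incoming URL is matched against the patterns until one fits, with the defaults filling any missing segments. As a rough, framework-free illustration of that matching logic (this is not ASP.NET code; the helper names are invented), the following Python sketch resolves paths against the two routes defined in this walkthrough:

```python
def match_route(path, pattern, defaults):
    """Match a /segment/segment path against a {placeholder} pattern."""
    path_parts = [p for p in path.strip("/").split("/") if p]
    pattern_parts = pattern.split("/")
    if len(path_parts) > len(pattern_parts):
        return None                                  # too many segments: no match
    values = dict(defaults)
    for i, part in enumerate(pattern_parts):
        is_placeholder = part.startswith("{") and part.endswith("}")
        if i < len(path_parts):
            if is_placeholder:
                values[part[1:-1]] = path_parts[i]   # take the value from the URL
            elif part != path_parts[i]:
                return None                          # literal segment must match
        elif not is_placeholder:
            return None                              # a required literal segment is missing
    return values

# The two routes from Global.asax, tried in registration order.
routes = [
    ("Employee/{firstname}",
     {"controller": "Employee", "action": "Show", "firstname": ""}),
    ("{controller}/{action}/{id}",
     {"controller": "Home", "action": "Index", "id": ""}),
]

def resolve(path):
    for pattern, defaults in routes:
        values = match_route(path, pattern, defaults)
        if values is not None:
            return values
    return None

print(resolve("/Employee/Maarten"))  # Employee controller, Show action, firstname=Maarten
print(resolve("/"))                  # falls through to the Home/Index defaults
```

This also shows why the EmployeeShow route must be registered first: /Employee/Maarten would otherwise match the more general {controller}/{action}/{id} pattern.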
Packt
28 Oct 2009
11 min read

Using Business Rules to Define Decision Points in Oracle SOA Suite: Part 1

The advantage of separating out decision points as external rules is that we not only ensure that each rule is used in a consistent fashion, but also make it simpler and quicker to modify; that is, we only have to modify a rule once and can do so with almost immediate effect, thus increasing the agility of our solution.

Business Rule concepts

Before we implement our first rule, let's briefly introduce the key components which make up a Business Rule. These are:

- Facts: Represent the data or business objects that rules are applied to.
- Rules: A rule consists of two parts: an IF part, which consists of one or more tests to be applied to fact(s), and a THEN part, which lists the actions to be carried out should the test evaluate to true.
- Rule Set: As the name implies, it is just a set of one or more related rules that are designed to work together.
- Dictionary: A dictionary is the container of all components that make up a business rule; it holds all the facts, rule sets, and rules for a business rule. In addition, a dictionary may also contain functions, variables, and constraints. We will introduce these in more detail later in this article.

To execute a business rule, you submit one or more facts to the rules engine. It will apply the rules to the facts; that is, each fact will be tested against the IF part of each rule, and if the test evaluates to true, the engine will perform the specified actions for that fact. This may result in the creation of new facts or the modification of existing facts (which may result in further rule evaluation).

Leave approval rule

To begin with, we will write a simple rule to automatically approve a leave request that is of type Vacation and only for one day's duration. A pretty trivial example, but once we've done this we will look at how to extend this rule to handle more complex examples.

Using the Rule Author

In SOA Suite 10.1.3 you use the Rule Author, which is a browser based interface for defining your business rules.
To launch the Rule Author, go to the following URL in your browser:

http://<host name>:<port number>/ruleauthor/

This will bring up the Rule Author Log In screen. Here you need to log in as a user that belongs to the rule-administrators role. You can either log in as the user oc4jadmin (default password Welcome1), which automatically belongs to this group, or define your own user.

Creating a Rule Repository

Within Oracle Business Rules, all of our definitions (that is, facts, constraints, variables, and functions) and rule sets are defined within a dictionary. A dictionary is held within a repository. A repository can contain multiple dictionaries and can also contain multiple versions of a dictionary. So, before we can write any rules, we need to either connect to an existing repository or create a new one.

Oracle Business Rules supports two types of repository—file based and WebDAV. For simplicity we will use a file based repository, though typically in production you will want to use a WebDAV based repository, as this makes it simpler to share rules between multiple BPEL processes.

WebDAV is short for Web-based Distributed Authoring and Versioning. It is an extension to HTTP that allows users to collaboratively edit and manage files (that is, business rules in our case) over the Web.

To create a file based repository, click on the Repository tab within the Rule Author; this will display the Repository Connect screen as shown in the following screenshot:

From here we can either connect to an existing repository (WebDAV or file based) or create and connect to a new file based repository. For our purposes, select a Repository Type of File, specify the full path name of where you want to create the repository, and then click Create. To use a WebDAV repository, you will first need to create it externally from the Rule Author.
Details on how to do this can be found in Appendix B of the Oracle Business Rules User Guide (http://download.oracle.com/docs/cd/B25221_04/web.1013/b15986/toc.htm). From a development perspective it can often be more convenient to develop your initial business rules in a file repository. Once complete, you can then export the rules from the file repository and import them into a WebDAV repository.

Creating a dictionary

Once we have connected to a repository, the next step is to create a dictionary. Click on the Create tab, circled in the following screenshot, and this will bring up the Create Dictionary screen. Enter a New Dictionary Name (for example LeaveApproval) and click Create. This will create and load the dictionary so that it's ready to use. Once you have created a dictionary, the next time you connect to the repository you will select the Load tab (next to the Create tab) to load it.

Defining facts

Before we can define any rules, we first need to define the facts that the rules will be applied to. Click on the Definitions tab; this will bring up the page which summarizes all the facts defined within the current dictionary. You will see from this that the rule engine supports three types of facts: Java Facts, XML Facts, and RL Facts. The type of fact that you want to use really depends on the context in which you will be using the rules engine. For example, if you are calling the rule engine from Java, then you would work with Java Facts, as this provides a more integrated way of combining the two components. As we are using the rule engine with BPEL, it makes sense to use XML Facts.

Creating XML Facts

The Rule Author uses XML Schemas to generate JAXB 1.0 classes, which are then imported to generate the corresponding XML Facts.
For our example we will use the Leave Request schema, shown as follows for convenience:

<?xml version="1.0" encoding="windows-1252"?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            xmlns="http://schemas.packtpub.com/LeaveRequest"
            targetNamespace="http://schemas.packtpub.com/LeaveRequest"
            elementFormDefault="qualified">
    <xsd:element name="leaveRequest" type="tLeaveRequest"/>
    <xsd:complexType name="tLeaveRequest">
        <xsd:sequence>
            <xsd:element name="employeeId" type="xsd:string"/>
            <xsd:element name="fullName" type="xsd:string"/>
            <xsd:element name="startDate" type="xsd:date"/>
            <xsd:element name="endDate" type="xsd:date"/>
            <xsd:element name="leaveType" type="xsd:string"/>
            <xsd:element name="leaveReason" type="xsd:string"/>
            <xsd:element name="requestStatus" type="xsd:string"/>
        </xsd:sequence>
    </xsd:complexType>
</xsd:schema>

Using JAXB, particularly in conjunction with BPEL, places a number of constraints on how we define our XML Schemas, including:

- When defining rules, the Rule Author can only work with globally defined types. This is because it's unable to introspect the properties (that is, attributes and elements) of global elements.
- Within BPEL you can only define variables based on globally defined elements.

The net result is that any facts we want to pass from BPEL to the rules engine (or vice versa) must be defined as global elements for BPEL and have a corresponding global type definition so that we can define rules against them. The simplest way to achieve this is to define a global type (for example, tLeaveRequest in the above schema) and then define a corresponding global element based on that type (for example, leaveRequest in the above schema). Even though it is perfectly acceptable in XML Schemas to use the same name for both elements and types, it presents problems for JAXB; hence the approach taken above, where we have prefixed every type definition with t, as in tLeaveRequest. Fortunately this approach corresponds to best practice for XML Schema design.
The final point you need to be aware of is that when creating XML facts, the JAXB processor maps the type xsd:decimal to java.math.BigDecimal and xsd:integer to java.math.BigInteger. This means you can't use the standard operators (for example, >, >=, <=, and <) within your rules to compare properties of these types. To simplify your rules, use xsd:double in place of xsd:decimal and xsd:int in place of xsd:integer within your XML Schemas.

To generate XML facts, from the XML Fact Summary screen (shown previously), click Create; this will display the XML Schema Selector page. Here we need to specify the location of the XML Schema; this can either be an absolute path to an xsd file containing the schema or a URL. Next, we need to specify a temporary JAXB Class Directory in which the generated JAXB classes are to be created. Finally, for the Target Package Name we can optionally specify a unique name that will be used as the Java package name for the generated classes. If we leave this blank, the package name will be generated automatically from the target namespace of the XML Schema using the JAXB XML-to-Java mapping rules. For example, our leave request schema has a target namespace of http://schemas.packtpub.com/LeaveRequest; this will result in a package name of com.packtpub.schemas.leaverequest.

Next, click on Add Schema; this will cause the Rule Author to generate the JAXB classes for our schema in the specified directory and update the XML Fact Summary screen to show details of the generated classes. Expand the class navigation tree until you can see the list of all the generated classes, as shown in the following screenshot. Select the top-level node (that is, com) to specify that we want to import all the generated classes. We need to import the TLeaveRequest class, as this is the one we will use to implement rules, and the LeaveRequest class, as we need it to pass facts from BPEL to the rules engine.
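Returning to the operator restriction mentioned above: it is easy to see in plain Java why BigDecimal properties complicate rules. Relational operators compile for primitive doubles but not for BigDecimal, which forces the more verbose compareTo() style. This is a generic Java illustration, not Rule Author code.

```java
import java.math.BigDecimal;

public class DecimalComparisonDemo {
    public static void main(String[] args) {
        BigDecimal days = new BigDecimal("10");
        BigDecimal limit = new BigDecimal("5");
        // "days > limit" does not compile: Java has no relational
        // operators for objects, so comparisons must use compareTo().
        boolean overLimit = days.compareTo(limit) > 0;

        // With xsd:double the property maps to a primitive double,
        // and the natural operator works directly.
        double daysAsDouble = 10, limitAsDouble = 5;
        boolean overLimit2 = daysAsDouble > limitAsDouble;

        System.out.println(overLimit + " " + overLimit2);
    }
}
```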
The ObjectFactory class is optional, but we will need it if we want to generate new LeaveRequest facts within our rule sets. Although we don't need to do this at the moment, it makes sense to import it now in case we need it in the future. Once we have selected the classes to be imported, click Import (circled in the previous screenshot) to load them into the dictionary. The Rule Author will display a message to confirm that the classes have been successfully imported. If you check the list of generated JAXB classes, you will see that the imported classes are shown in bold.

In the process of importing your facts, the Rule Author will assign a default alias to each fact and to every property that makes up a fact, where a property corresponds to either an element or an attribute in the XML Schema.

Using aliases

Oracle Business Rules allows you to specify your own aliases for facts and properties in order to define more business-friendly names, which can then be used when writing rules. For XML facts, if you have followed standard naming conventions when defining your XML Schemas, we typically find that the default aliases are clear enough, and defining your own aliases can actually cause more confusion unless applied consistently across all facts.

Hiding facts and properties

The Rule Author lets you hide facts and properties so that they don't appear in the drop-downs within the Rule Author. For facts that have a large number of properties, hiding some of them can be worthwhile, as it can simplify the creation of rules. Another obvious use might be to hide all the facts based on elements, since we won't be implementing any rules directly against them. However, any facts you hide will also be hidden from BPEL, so you won't be able to pass facts of these types from BPEL to the rules engine (or vice versa).
In reality, the only fact you will typically want to hide is the ObjectFactory (you will have one of these per imported XML Schema).

Saving the rule dictionary

As you define your business rules, it makes sense to save your work at regular intervals. To save the dictionary, click on the Save Dictionary link in the top right-hand corner of the Rule Author page. This will bring up the Save Dictionary page. Here, either click on the Save button to update the current version of the dictionary with your changes or, if you want to save the dictionary as a new version or under a new dictionary name, click on the Save As link and amend the dictionary name and version as appropriate.
Oracle Web RowSet - Part 2

Packt
27 Oct 2009
4 min read
Reading a Row

Next, we will read a row from the OracleWebRowSet object. Click on the Modify Web RowSet link in CreateRow.jsp. In the ModifyWebRowSet JSP, click on the Read Row link; the ReadRow.jsp JSP is displayed. In the ReadRow JSP, specify the Database Row to Read and click on Apply. The second row's values are retrieved from the Web RowSet.

In the ReadRow JSP, the readRow() method of the WebRowSetQuery.java application is invoked. The WebRowSetQuery object is retrieved from the session object:

```java
WebRowSetQuery query = (webrowset.WebRowSetQuery) session.getAttribute("query");
```

The String[] values returned by the readRow() method are added to the ReadRow JSP fields. In the readRow() method, the OracleWebRowSet object cursor is moved to the row to be read:

```java
webRowSet.absolute(rowRead);
```

Retrieve the row values with the getString() method, add them to a String[], and return the String[] object:

```java
String[] resultSet = new String[5];
resultSet[0] = webRowSet.getString(1);
resultSet[1] = webRowSet.getString(2);
resultSet[2] = webRowSet.getString(3);
resultSet[3] = webRowSet.getString(4);
resultSet[4] = webRowSet.getString(5);
return resultSet;
```

The ReadRow.jsp JSP is listed as follows:

```jsp
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
  "http://www.w3.org/TR/html4/loose.dtd">
<%@ page contentType="text/html;charset=windows-1252"%>
<%@ page session="true"%>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html;charset=windows-1252">
<title>Read Row with Web RowSet</title>
</head>
<body>
<form>
<h3>Read Row with Web RowSet</h3>
<table>
<tr><td><a href="ModifyWebRowSet.jsp">Modify Web RowSet Page</a></td></tr>
</table>
</form>
<%
webrowset.WebRowSetQuery query = null;
query = (webrowset.WebRowSetQuery) session.getAttribute("query");
String rowRead = request.getParameter("rowRead");
String journalUpdate = request.getParameter("journalUpdate");
String publisherUpdate = request.getParameter("publisherUpdate");
String editionUpdate = request.getParameter("editionUpdate");
String titleUpdate = request.getParameter("titleUpdate");
String authorUpdate = request.getParameter("authorUpdate");
if (rowRead != null) {
    int row_Read = Integer.parseInt(rowRead);
    String[] resultSet = query.readRow(row_Read);
    journalUpdate = resultSet[0];
    publisherUpdate = resultSet[1];
    editionUpdate = resultSet[2];
    titleUpdate = resultSet[3];
    authorUpdate = resultSet[4];
}
%>
<form name="query" action="ReadRow.jsp" method="post">
<table>
<tr><td>Database Row to Read:</td></tr>
<tr><td><input name="rowRead" type="text" size="25" maxlength="50"/></td></tr>
<tr><td>Journal:</td></tr>
<tr><td><input name="journalUpdate" value='<%=journalUpdate%>' type="text" size="50" maxlength="250"/></td></tr>
<tr><td>Publisher:</td></tr>
<tr><td><input name="publisherUpdate" value='<%=publisherUpdate%>' type="text" size="50" maxlength="250"/></td></tr>
<tr><td>Edition:</td></tr>
<tr><td><input name="editionUpdate" value='<%=editionUpdate%>' type="text" size="50" maxlength="250"/></td></tr>
<tr><td>Title:</td></tr>
<tr><td><input name="titleUpdate" value='<%=titleUpdate%>' type="text" size="50" maxlength="250"/></td></tr>
<tr><td>Author:</td></tr>
<tr><td><input name="authorUpdate" value='<%=authorUpdate%>' type="text" size="50" maxlength="250"/></td></tr>
<tr><td><input class="Submit" type="submit" value="Apply"/></td></tr>
</table>
</form>
</body>
</html>
```
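The 1-based cursor arithmetic in readRow() is easy to misread, so here is a self-contained sketch of the same logic. FakeRowSet is a hypothetical in-memory stand-in for the OracleWebRowSet; only absolute() and getString() are modeled, both 1-based as in JDBC.

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical in-memory stand-in for the OracleWebRowSet: just enough
// of the JDBC cursor contract to exercise the readRow() logic above.
class FakeRowSet {
    private final List<String[]> rows;
    private int cursor;  // 1-based row position, as in JDBC

    FakeRowSet(List<String[]> rows) { this.rows = rows; }

    boolean absolute(int row) {
        cursor = row;
        return row >= 1 && row <= rows.size();
    }

    String getString(int column) {
        return rows.get(cursor - 1)[column - 1];  // both indexes are 1-based
    }
}

public class ReadRowSketch {
    // Mirrors WebRowSetQuery.readRow(): position the cursor, then copy
    // the five catalog columns into a String[].
    static String[] readRow(FakeRowSet webRowSet, int rowRead) {
        webRowSet.absolute(rowRead);
        String[] resultSet = new String[5];
        for (int i = 0; i < 5; i++) {
            resultSet[i] = webRowSet.getString(i + 1);
        }
        return resultSet;
    }

    public static void main(String[] args) {
        FakeRowSet rs = new FakeRowSet(Arrays.asList(
            new String[]{"Journal A", "Publisher A", "Ed. 1", "Title A", "Author A"},
            new String[]{"Journal B", "Publisher B", "Ed. 2", "Title B", "Author B"}));
        // Reading row 2 returns the second row's columns.
        System.out.println(String.join(",", readRow(rs, 2)));
    }
}
```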
New SOA Capabilities in BizTalk Server 2009: UDDI Services

Packt
27 Oct 2009
6 min read
All truths are easy to understand once they are discovered; the point is to discover them. - Galileo Galilei

What is UDDI?

Universal Description, Discovery, and Integration (UDDI) is a type of registry whose primary purpose is to represent information about web services. It describes the service providers, the services those providers offer, and, in some cases, the specific technical specifications for interacting with those services. While UDDI was originally envisioned as a public, platform-independent registry that companies could exploit for listing and consuming services, many have chosen instead to use UDDI as an internal resource for categorizing and describing their available enterprise services.

Besides simply listing available services for others to search and peruse, UDDI is arguably most beneficial for those who wish to perform runtime binding to service endpoints. Instead of hard-coding a service path in a client application, one may query UDDI for a particular service's endpoint and apply it to the active service call. While UDDI is typically used for web services, nothing prevents someone from storing information about any particular transport and allowing service consumers to discover and perform runtime resolution against these endpoints. As an example, this is useful if you have an environment with primary, backup, and disaster access points and want your application to be able to gracefully look up and fail over to the next available service environment. In addition, UDDI can be of assistance if an application is deployed globally but you wish for regional consumers to look up and resolve against the closest geographical endpoint.

UDDI has a few core hierarchy concepts that you must grasp to fully comprehend how the registry is organized. The most important ones are listed here, with the corresponding name in Microsoft UDDI Services in parentheses:

- BusinessEntity (Provider): the service providers; may be an organization, business unit, or functional area.
- BusinessService (Service): a general reference to a business service offered by a provider; may be a logical grouping of actual services.
- BindingTemplate (Binding): the technical details of an individual service, including its endpoint.
- tModel, or Technical Model (tModel): represents metadata for categorization or description, such as transport or protocol.

As far as relationships between these entities go, a Business Entity may contain many Business Services, which in turn can have multiple Binding Templates. A binding may reference multiple tModels, and tModels may be reused across many Binding Templates.

What's new in UDDI version three?

The latest UDDI specification calls out multiple-registry environments, support for digital signatures applied to UDDI entries, more complex categorization, wildcard searching, and a subscription API. We'll spend a bit of time on that last one in a few moments.

Let's take a brief lap around the Microsoft UDDI Services offering. For practical purposes, consider UDDI Services to be made up of two parts: an Administration Console and a web site. The web site is actually broken up into both a public-facing and an administrative interface, but we'll talk about them as one unit. The UDDI Configuration Console is the place to set service-wide settings, ranging from the extent of logging to permissions and site security. The site node (named UDDI) has settings for permission account groups, security settings (see below), and subscription notification thresholds, among others. The web node, which resides immediately beneath the parent, controls web site settings such as logging level and target database. Finally, the notification node manages settings related to the new subscription notification feature and identically matches the categories of the web node. The UDDI Services web site, found at http://localhost/uddi/, is the destination for physically listing, managing, and configuring services.
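The containment rules described above read naturally as a small object model. The sketch below uses hypothetical illustration classes (not the UDDI or Microsoft API): one provider holds services, each service holds bindings, and bindings share reusable tModels. The names reuse identifiers from later in this article; the URLs are placeholders.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical miniature of the UDDI hierarchy, for illustration only.
class TModel {
    final String name;
    TModel(String name) { this.name = name; }
}

class BindingTemplate {
    final String accessPoint;
    final List<TModel> tModels = new ArrayList<>();  // a binding may reference many tModels
    BindingTemplate(String accessPoint) { this.accessPoint = accessPoint; }
}

class BusinessService {
    final String name;
    final List<BindingTemplate> bindings = new ArrayList<>();  // one service, many bindings
    BusinessService(String name) { this.name = name; }
}

class BusinessEntity {  // "Provider" in Microsoft UDDI Services
    final String name;
    final List<BusinessService> services = new ArrayList<>();  // one provider, many services
    BusinessEntity(String name) { this.name = name; }
}

public class UddiHierarchyDemo {
    public static void main(String[] args) {
        TModel environment = new TModel("biztalksoa:runtimeresolution:environment");

        BusinessEntity provider = new BusinessEntity("BizTalkSOA");
        BusinessService service = new BusinessService("BatchMasterService");
        BindingTemplate primary = new BindingTemplate("http://primary/svc");
        BindingTemplate backup = new BindingTemplate("http://backup/svc");

        // tModels are reusable: both bindings reference the same instance.
        primary.tModels.add(environment);
        backup.tModels.add(environment);
        service.bindings.add(primary);
        service.bindings.add(backup);
        provider.services.add(service);

        System.out.println(provider.services.size() + " "
            + service.bindings.size() + " "
            + (primary.tModels.get(0) == backup.tModels.get(0)));
    }
}
```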
The Search page enables querying by a wide variety of criteria, including category, services, service providers, bindings, and tModels. The Publish page is where you go to add new services to the registry or edit the settings of existing ones. Finally, the Subscription page is where the new UDDI version three capability of registry notification is configured. We will demonstrate this feature later in this article.

How to add services to the UDDI registry?

Now we're ready to add new services to our UDDI registry. First, let's go to the Publish page and define our Service Provider and a pair of categorical tModels. To add a new Provider, we right-click the Provider node in the tree and choose Add Provider. Once a provider is created and named, we have the choice of adding all types of context characteristics, such as contact name(s), categories, relationships, and more.

I'd like to add two tModel categories to my environment: one to identify which type of environment the service references (development, test, staging, production) and another to flag which type of transport it uses (Basic HTTP, WS HTTP, and so on). To add a tModel, simply right-click the tModels node and choose Add tModel. The first one is named biztalksoa:runtimeresolution:environment. After adding one more tModel for biztalksoa:runtimeresolution:transporttype, we're ready to add a service to the registry.

Right-click the BizTalkSOA provider and choose Add Service. Set the name of this service to BatchMasterService. Next, we want to add a binding (or access point) for this service, which describes where the service endpoint is physically located. Switch to the Bindings tab of the service definition and choose New Binding. We need a new access point, so I pointed to our proxy service created earlier and identified it as an endPoint. Finally, let's associate the two new tModel categories with our service. Switch to the Categories tab, and choose Add Custom Category.
We're asked to search for a tModel, which represents our category, so a wildcard entry such as %biztalksoa% is a valid search criterion. After selecting the environment category, we're asked for the key name and value. The key name is purely a human-friendly representation of the data, whereas the tModel identifier and the key value comprise the actual name-value pair. I've entered production as the value for the environment category, and WS-Http as the key value for the transporttype category. At this point, we have a service sufficiently configured in the UDDI directory so that others can discover it and dynamically resolve against it.
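With the environment category in place, a consumer can emulate the runtime-resolution pattern described earlier: look up candidate endpoints for a given environment and fail over down the list when one is unavailable. The sketch below is hypothetical and registry-free; real code would call UDDI's inquiry API instead of consulting an in-memory map, and the endpoint URLs are placeholders.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class EndpointResolver {
    // Hypothetical stand-in for a UDDI inquiry result:
    // endpoint -> environment category value.
    static final Map<String, String> REGISTRY = new LinkedHashMap<>();
    static {
        REGISTRY.put("http://primary/BatchMasterService", "production");
        REGISTRY.put("http://backup/BatchMasterService", "production");
        REGISTRY.put("http://test/BatchMasterService", "test");
    }

    // Return the first endpoint for the requested environment that is not
    // known to be down, emulating primary/backup/disaster failover.
    static String resolve(String environment, List<String> downEndpoints) {
        for (Map.Entry<String, String> e : REGISTRY.entrySet()) {
            if (e.getValue().equals(environment)
                    && !downEndpoints.contains(e.getKey())) {
                return e.getKey();
            }
        }
        throw new IllegalStateException("no endpoint available for " + environment);
    }

    public static void main(String[] args) {
        // Primary is down, so resolution fails over to the backup access point.
        System.out.println(
            resolve("production", List.of("http://primary/BatchMasterService")));
    }
}
```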
Implementing a Basic HelloWorld WCF (Windows Communication Foundation) Service

Packt
27 Oct 2009
7 min read
We will build a HelloWorld WCF service by carrying out the following steps:

1. Create the solution and project
2. Create the WCF service contract interface
3. Implement the WCF service
4. Host the WCF service in the ASP.NET Development Server
5. Create a client application to consume this WCF service

Creating the HelloWorld solution and project

Before we can build the WCF service, we need to create a solution for our service projects. We also need a directory in which to save all the files. Throughout this article, we will save our project source code in the D:\SOAwithWCFandLINQ\Projects directory (the backslashes in these paths were stripped during extraction and have been restored). We will have a subfolder for each solution we create, and under each solution folder, one subfolder for each project. For this HelloWorld solution, the final directory structure is shown in the following image. You don't need to manually create these directories via Windows Explorer; Visual Studio will create them automatically when you create the solutions and projects.

Now, follow these steps to create our first solution and the HelloWorld project:

1. Start Visual Studio 2008. If the Open Project dialog box pops up, click Cancel to close it.
2. Go to menu File | New | Project. The New Project dialog window will appear.
3. From the left-hand side of the window (Project types), expand Other Project Types and then select Visual Studio Solutions as the project type. From the right-hand side of the window (Templates), select Blank Solution as the template.
4. At the bottom of the window, type HelloWorld as the Name, and D:\SOAwithWCFandLINQ\Projects as the Location. Note that you should not include HelloWorld in the location, because Visual Studio will automatically create a folder for a new solution.
5. Click the OK button to close this window, and your screen should look like the following image, with an empty solution. Depending on your settings, the layout may be different, but you should still have an empty solution in your Solution Explorer.
If you don't see Solution Explorer, go to menu View | Solution Explorer, or press Ctrl+Alt+L to bring it up.

In the Solution Explorer, right-click on the solution and select Add | New Project… from the context menu. You can also go to menu File | Add | New Project… to get the same result. The following image shows the context menu for adding a new project.

The Add New Project window should now appear on your screen. On the left-hand side of this window (Project types), select Visual C# as the project type, and on the right-hand side of the window (Templates), select Class Library as the template. At the bottom of the window, type HelloWorldService as the Name. Leave D:\SOAwithWCFandLINQ\Projects\HelloWorld as the Location. Again, don't add HelloWorldService to the location, as Visual Studio will create a subfolder for this new project (Visual Studio uses the solution folder as the default base folder for all new projects added to the solution).

You may have noticed that there is already a template for a WCF Service Application in Visual Studio 2008. For this very first example, we will not use that template. Instead, we will create everything ourselves so that you know what the purpose of each template is. This is an excellent way for you to understand and master this new technology.

Now, you can click the OK button to close this window. Once you click the OK button, Visual Studio will create several files for you. The first file is the project file. This is an XML file under the project directory, and it is called HelloWorldService.csproj. Visual Studio also creates an empty class file, called Class1.cs. Later, we will change this default name to a more meaningful one and change its namespace to our own. Three directories are created automatically under the project folder: one to hold the binary files, another to hold the object files, and a third for the properties files of the project.
The window on your screen should now look like the following image. We now have a new solution and project created. Next, we will develop and build this service. But before we go any further, we need to do two things to this project:

1. Click the Show All Files button on the Solution Explorer toolbar. It is the second button from the left, just above the word Solution inside the Solution Explorer. If you allow your mouse to hover above this button, you will see the hint Show All Files, as shown in the above diagram. Clicking this button will show all files and directories on your hard disk under the project folder, even those items that are not included in the project. Make sure that you don't have the solution item selected; otherwise, you won't see the Show All Files button.
2. Change the default namespace of the project. From the Solution Explorer, right-click on the HelloWorldService project and select Properties from the context menu, or go to menu item Project | HelloWorldService Properties…. You will see the project properties dialog window. On the Application tab, change the Default namespace to MyWCFServices.

Lastly, in order to develop a WCF service, we need to add a reference to the ServiceModel namespace. In the Solution Explorer window, right-click on the HelloWorldService project and select Add Reference… from the context menu. You can also go to the menu item Project | Add Reference… to do this. The Add Reference dialog window should appear on your screen. Select System.ServiceModel from the .NET tab, and click OK. Now, in the Solution Explorer, if you expand the references of the HelloWorldService project, you will see that System.ServiceModel has been added. Also note that System.Xml.Linq is added by default; we will use this later when we query a database.

Creating the HelloWorldService service contract interface

In the previous section, we created the solution and the project for the HelloWorld WCF service.
From this section on, we will start building the HelloWorld WCF service. First, we need to create the service contract interface.

1. In the Solution Explorer, right-click on the HelloWorldService project and select Add | New Item… from the context menu. The Add New Item - HelloWorldService dialog window should appear on your screen.
2. On the left-hand side of the window (Categories), select Visual C# Items as the category, and on the right-hand side of the window (Templates), select Interface as the template.
3. At the bottom of the window, change the Name from Interface1.cs to IHelloWorldService.cs.
4. Click the Add button.

Now, an empty service interface file has been added to the project. Follow the steps below to customize it:

1. Add a using statement:

```csharp
using System.ServiceModel;
```

2. Add a ServiceContract attribute to the interface. This will designate the interface as a WCF service contract interface:

```csharp
[ServiceContract]
```

3. Add a GetMessage method to the interface. This method will take a string as the input and return another string as the result. It also has an attribute, OperationContract:

```csharp
[OperationContract]
String GetMessage(String name);
```

4. Change the interface to public.

The final content of the file IHelloWorldService.cs should look like the following:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.ServiceModel;

namespace MyWCFServices
{
    [ServiceContract]
    public interface IHelloWorldService
    {
        [OperationContract]
        String GetMessage(String name);
    }
}
```
Developing Web Applications using JavaServer Faces: Part 2

Packt
27 Oct 2009
5 min read
JSF Validation

Earlier in this article, we discussed how the required attribute for JSF input fields allows us to easily make input fields mandatory. If a user attempts to submit a form with one or more required fields missing, an error message is automatically generated. The error message is generated by the <h:message> tag corresponding to the invalid field. The string First Name in the error message corresponds to the value of the label attribute for the field. Had we omitted the label attribute, the value of the field's id attribute would have been shown instead. As we can see, the required attribute makes it very easy to implement mandatory field functionality in our application.

Recall that the age field is bound to a property of type Integer in our managed bean. If a user enters a value that is not a valid integer into this field, a validation error is automatically generated. Of course, a negative age wouldn't make much sense; however, our application validates that user input is a valid integer with essentially no effort on our part.

The email address input field of our page is bound to a property of type String in our managed bean. As such, there is no built-in validation to make sure that the user enters a valid email address. In cases like this, we need to write our own custom JSF validators.

Custom JSF validators must implement the javax.faces.validator.Validator interface. This interface contains a single method named validate(). This method takes three parameters: an instance of javax.faces.context.FacesContext, an instance of javax.faces.component.UIComponent containing the JSF component we are validating, and an instance of java.lang.Object containing the user-entered value for the component. The following example illustrates a typical custom validator.
```java
package com.ensode.jsf.validators;

import java.util.regex.Matcher;
import java.util.regex.Pattern;
import javax.faces.application.FacesMessage;
import javax.faces.component.UIComponent;
import javax.faces.component.html.HtmlInputText;
import javax.faces.context.FacesContext;
import javax.faces.validator.Validator;
import javax.faces.validator.ValidatorException;

public class EmailValidator implements Validator {

    public void validate(FacesContext facesContext,
            UIComponent uIComponent, Object value)
            throws ValidatorException {
        // Backslashes in this regular expression were stripped during
        // extraction; the pattern below restores them.
        Pattern pattern = Pattern.compile("\\w+@\\w+\\.\\w+");
        Matcher matcher = pattern.matcher((CharSequence) value);

        HtmlInputText htmlInputText = (HtmlInputText) uIComponent;
        String label;

        if (htmlInputText.getLabel() == null
                || htmlInputText.getLabel().trim().equals("")) {
            label = htmlInputText.getId();
        } else {
            label = htmlInputText.getLabel();
        }

        if (!matcher.matches()) {
            FacesMessage facesMessage = new FacesMessage(label
                    + ": not a valid email address");
            throw new ValidatorException(facesMessage);
        }
    }
}
```

In our example, the validate() method does a regular expression match against the value of the JSF component we are validating. If the value matches the expression, validation succeeds; otherwise, validation fails and an instance of javax.faces.validator.ValidatorException is thrown. The primary purpose of our custom validator is to illustrate how to write custom JSF validations, not to create a foolproof email address validator; there may be valid email addresses that don't validate using our validator.

The constructor of ValidatorException takes an instance of javax.faces.application.FacesMessage as a parameter. This object is used to display the error message on the page when validation fails. The message to display is passed as a String to the constructor of FacesMessage. In our example, if the label attribute of the component is neither null nor empty, we use it as part of the error message; otherwise we use the value of the component's id attribute.
This behavior follows the pattern established by standard JSF validators. Before we can use our custom validator in our pages, we need to declare it in the application's faces-config.xml configuration file. To do so, we need to add a <validator> element just before the closing </faces-config> element:

```xml
<validator>
  <validator-id>emailValidator</validator-id>
  <validator-class>
    com.ensode.jsf.validators.EmailValidator
  </validator-class>
</validator>
```

The body of the <validator-id> sub-element must contain a unique identifier for our validator. The value of the <validator-class> element must contain the fully qualified name of our validator class. Once we add our validator to the application's faces-config.xml, we are ready to use it in our pages. In our particular case, we need to modify the email field to use our custom validator:

```xml
<h:inputText id="email" label="Email Address" required="true"
             value="#{RegistrationBean.email}">
  <f:validator validatorId="emailValidator"/>
</h:inputText>
```

All we need to do is nest an <f:validator> tag inside the input field we wish to have validated using our custom validator. The value of the validatorId attribute of <f:validator> must match the value of the body of the <validator-id> element in faces-config.xml.

At this point, we are ready to test our custom validator. When we enter an invalid email address into the email address input field and submit the form, our custom validator logic is executed, and the String we passed as a parameter to FacesMessage in our validate() method is shown as the error text by the <h:message> tag for the field.
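As noted above, the expression is not a foolproof email check; running the same pattern outside JSF makes its limits concrete. The addresses below are illustrative only.

```java
import java.util.regex.Pattern;

public class EmailPatternDemo {
    public static void main(String[] args) {
        // The same expression used in EmailValidator above.
        Pattern pattern = Pattern.compile("\\w+@\\w+\\.\\w+");

        // A simple address matches...
        System.out.println(pattern.matcher("jdoe@example.com").matches());
        // ...a string with no '@' does not...
        System.out.println(pattern.matcher("not-an-email").matches());
        // ...and a perfectly valid address with a dot in the local part
        // is rejected, because '.' is not matched by \w.
        System.out.println(pattern.matcher("first.last@example.com").matches());
    }
}
```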
Developing Web Applications using JavaServer Faces: Part 1

Packt
27 Oct 2009
6 min read
Although a lot of applications have been written using these APIs, most modern Java applications are written using some kind of web application framework. As of Java EE 5, the standard framework for building web applications is JavaServer Faces (JSF).

Introduction to JavaServer Faces

Before JSF was developed, Java web applications were typically developed using non-standard web application frameworks such as Apache Struts, Tapestry, Spring Web MVC, or many others. These frameworks are built on top of the Servlet and JSP standards, and automate a lot of functionality that needs to be manually coded when using these APIs directly.

Having a wide variety of web application frameworks available (at the time of writing, Wikipedia lists 35 Java web application frameworks, and this list is far from exhaustive!) often resulted in "analysis paralysis"; that is, developers often spent an inordinate amount of time evaluating frameworks for their applications. The introduction of JSF to the Java EE 5 specification resulted in having a standard web application framework available in any Java EE 5 compliant application server.

We don't mean to imply that other web application frameworks are obsolete or that they shouldn't be used at all; however, a lot of organizations consider JSF the "safe" choice, since it is part of the standard and should be well supported for the foreseeable future. Additionally, NetBeans offers excellent JSF support, making JSF a very attractive choice.

Strictly speaking, JSF is not a web application framework as such, but a component framework. In theory, JSF can be used to write applications that are not web-based; however, in practice JSF is almost always used for this purpose. In addition to being the standard Java EE 5 component framework, one benefit of JSF is that it was designed with graphical tools in mind, making it easy for tools and IDEs such as NetBeans to take advantage of the JSF component model with drag-and-drop support for components.
NetBeans provides a Visual Web JSF Designer that allows us to visually create JSF applications.

Developing Our first JSF Application

From an application developer's point of view, a JSF application consists of a series of JSP pages containing custom JSF tags, one or more JSF managed beans, and a configuration file named faces-config.xml. The faces-config.xml file declares the managed beans in the application, as well as the navigation rules to follow when navigating from one JSF page to another.

Creating a New JSF Project

To create a new JSF project, we need to go to File | New Project, select the Java Web project category, and Web Application as the project type. After clicking Next, we need to enter a Project Name, and optionally change other information for our project, although NetBeans provides sensible defaults. On the next page in the wizard, we can select the Server, Java EE Version, and Context Path of our application. In our example, we will simply pick the default values.

On the next page of the new project wizard, we can select which frameworks our web application will use. Unsurprisingly, for JSF applications we need to select the JavaServer Faces framework. The Visual Web JavaServer Faces framework allows us to quickly build web pages by dragging-and-dropping components from the NetBeans palette into our pages. Although it certainly allows us to develop applications a lot quicker than manually coding, it hides a lot of the "ins" and "outs" of JSF. Having a background in standard JSF development will help us understand what the NetBeans Visual Web functionality does behind the scenes.

When clicking Finish, the wizard generates a skeleton JSF project for us, consisting of a single JSP file called welcomeJSF.jsp, and a few configuration files: web.xml, faces-config.xml, and, if we are using the default bundled GlassFish server, the GlassFish-specific sun-web.xml file is generated as well.
web.xml is the standard configuration file needed for all Java web applications. faces-config.xml is a JSF-specific configuration file used to declare JSF managed beans and navigation rules. sun-web.xml is a GlassFish-specific configuration file that allows us to override the application's default context root, add security role mappings, and perform several other configuration tasks.

The generated JSP looks like this:

    <%@page contentType="text/html"%>
    <%@page pageEncoding="UTF-8"%>
    <%@taglib prefix="f" uri="http://java.sun.com/jsf/core"%>
    <%@taglib prefix="h" uri="http://java.sun.com/jsf/html"%>
    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
        "http://www.w3.org/TR/html4/loose.dtd">
    <%-- This file is an entry point for JavaServer Faces application. --%>
    <html>
        <head>
            <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
            <title>JSP Page</title>
        </head>
        <body>
            <f:view>
                <h1>
                    <h:outputText value="JavaServer Faces"/>
                </h1>
            </f:view>
        </body>
    </html>

As we can see, a JSF-enabled JSP file is a standard JSP file that uses a couple of JSF-specific tag libraries. The first tag library, declared in our JSP by the following line:

    <%@taglib prefix="f" uri="http://java.sun.com/jsf/core"%>

is the core JSF tag library; this library includes a number of tags that are independent of the rendering mechanism of the JSF application (recall that JSF can be used for applications other than web applications). By convention, the prefix f (for faces) is used for this tag library.

The second tag library in the generated JSP, declared by the following line:

    <%@taglib prefix="h" uri="http://java.sun.com/jsf/html"%>

is the JSF HTML tag library. This tag library includes a number of tags that are used to implement HTML-specific functionality, such as creating HTML forms and input fields. By convention, the prefix h (for HTML) is used for this tag library.

The first JSF tag we see in the generated JSP file is the <f:view> tag.
When writing a Java web application using JSF, all JSF custom tags must be enclosed inside an <f:view> tag. In addition to JSF-specific tags, this tag can contain standard HTML tags, as well as tags from other tag libraries, such as the JSTL tags.

The next JSF-specific tag we see in the above JSP is <h:outputText>. This tag simply displays the value of its value attribute in the rendered page.

The application generated by the new project wizard is a simple, but complete, JSF web application. We can see it in action by right-clicking on our project in the Projects window and selecting Run. At this point, the application server is started (if it wasn't already running), the application is deployed, and the default system browser opens, displaying our application's welcome page.
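As the application grows beyond this generated skeleton, managed beans and navigation rules are declared in faces-config.xml. The following is a minimal sketch of what such declarations might look like; the bean class, outcome, and page names here are hypothetical and not part of the generated project:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<faces-config version="1.2"
    xmlns="http://java.sun.com/xml/ns/javaee"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
        http://java.sun.com/xml/ns/javaee/web-facesconfig_1_2.xsd">

    <!-- Declares a hypothetical request-scoped managed bean that JSF
         instantiates on demand and exposes to pages as #{userBean} -->
    <managed-bean>
        <managed-bean-name>userBean</managed-bean-name>
        <managed-bean-class>com.example.UserBean</managed-bean-class>
        <managed-bean-scope>request</managed-bean-scope>
    </managed-bean>

    <!-- A navigation rule: when an action on welcomeJSF.jsp returns the
         outcome "success", JSF navigates to a hypothetical confirmation page -->
    <navigation-rule>
        <from-view-id>/welcomeJSF.jsp</from-view-id>
        <navigation-case>
            <from-outcome>success</from-outcome>
            <to-view-id>/confirmation.jsp</to-view-id>
        </navigation-case>
    </navigation-rule>
</faces-config>
```

NetBeans edits this file for us in later examples, but knowing its shape helps when reading what the IDE generates.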
New SOA Capabilities in BizTalk Server 2009: WCF SQL Server Adapter
Packt
26 Oct 2009
3 min read
Do not go where the path may lead; go instead where there is no path and leave a trail.
- Ralph Waldo Emerson

Many of the patterns and capabilities shown in this article are compatible with the last few versions of the BizTalk Server product. So what's new in BizTalk Server 2009? BizTalk Server 2009 is the sixth formal release of the BizTalk Server product. This release has a heavy focus on platform modernization through new support for Windows Server 2008, Visual Studio .NET 2008, SQL Server 2008, and the .NET Framework 3.5. This will surely help developers who have already moved to these platforms in their day-to-day activities but have been forced to maintain separate environments solely for BizTalk development efforts. Let's get started.

What is the WCF SQL Adapter?

The BizTalk Adapter Pack 2.0 now contains five system and data adapters, including SAP, Siebel, Oracle databases, Oracle applications, and SQL Server. What are these adapters, and how are they different from the adapters available for previous versions of BizTalk? Until recently, BizTalk adapters were built using a commonly defined BizTalk Adapter Framework. This framework prescribed interfaces and APIs for adapter developers in order to elicit a common look and feel for the users of the adapters. Moving forward, adapter developers are encouraged by Microsoft to use the new WCF LOB Adapter SDK. As you can guess from the name, this new adapter framework, which can be considered an evolution of the BizTalk Adapter Framework, is based on WCF technologies. All of the adapters in the BizTalk Adapter Pack 2.0 are built upon the WCF LOB Adapter SDK. What this means is that all of the adapters are built as reusable, metadata-rich components that are surfaced to users as WCF bindings. So much like you have a wsHttp or netTcp binding, now you have a sqlBinding or sapBinding.
As you would expect from a WCF binding, there is a rich set of configuration attributes for these adapters, and they are no longer tightly coupled to BizTalk itself. Microsoft has made connectivity a commodity, and no longer do organizations have to spend tens of thousands of dollars to connect to line-of-business systems like SAP through expensive, BizTalk-only adapters. This latest version of the BizTalk Adapter Pack now includes a SQL Server adapter, which replaces the legacy BizTalk-only SQL Server adapter. What do we get from this SQL Server adapter that makes it so much better than the old one?

| Feature | Classic SQL Adapter | WCF SQL Adapter |
| --- | --- | --- |
| Execute create-read-update-delete statements on tables and views; execute stored procedures and generic T-SQL statements | Partial (send operations only support stored procedures and updategrams) | Yes |
| Database polling via FOR XML | Yes | Yes |
| Database polling via traditional tabular results | No | Yes |
| Proactive database push via SQL Query Notification | No | Yes |
| Expansive adapter configuration which impacts connection management and transaction behavior | No | Yes |
| Support for composite transactions which allow aggregation of operations across tables or procedures into a single atomic transaction | No | Yes |
| Rich metadata browsing and retrieval for finding and selecting database operations | No | Yes |
| Support for the latest data types (e.g. XML) and the SQL Server 2008 platform | No | Yes |
| Reusable outside of BizTalk applications by WCF or basic HTTP clients | No | Yes |
| Adapter extension and configuration through out-of-the-box WCF components or custom WCF behaviors | No | Yes |
| Dynamic WSDL generation which always reflects the current state of the system, instead of a fixed contract which always requires explicit updates | No | Yes |
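Because the adapter is surfaced as an ordinary WCF binding, a plain WCF client can talk to SQL Server through it without BizTalk in the picture at all. The app.config fragment below is an illustrative sketch only: the mssql:// address format, the useAmbientTransaction property, and the generated contract name are assumptions based on the adapter's conventions, not details taken from this article.

```xml
<system.serviceModel>
  <client>
    <!-- Hypothetical client endpoint: mssql:// addresses identify the
         target server, instance (empty here = default), and database -->
    <endpoint name="SqlAdapterEndpoint"
              address="mssql://DBSERVER//AdventureWorks"
              binding="sqlBinding"
              bindingConfiguration="SqlAdapterBinding"
              contract="TableOp.TableOp_dbo_Customer" />
  </client>
  <bindings>
    <sqlBinding>
      <!-- Binding configuration is where connection management and
           transaction behavior are tuned -->
      <binding name="SqlAdapterBinding" useAmbientTransaction="true" />
    </sqlBinding>
  </bindings>
</system.serviceModel>
```

The contract type would normally be generated by the adapter's metadata browsing tools rather than written by hand.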
JBoss Tools Palette
Packt
26 Oct 2009
4 min read
By default, the JBoss Tools Palette is available in the Web Development perspective, which can be displayed from the Window menu by selecting the Open Perspective | Other option. In the following screenshot, you can see the default look of this palette: Let's dissect this palette to see how it makes our life easier!

JBoss Tools Palette Toolbar

Note that on the top right corner of the palette, we have a toolbar made of three buttons (as shown in the following screenshot). They are (from left to right):

- Palette Editor
- Show/Hide
- Import

Each of these buttons performs a different task, offering a high level of flexibility and customizability. Next, we will focus our attention on each one of these buttons.

Palette Editor

Clicking on the Palette Editor icon will display the Palette Editor window (as shown in the following screenshot), which contains groups and subgroups of the tags that are currently supported. Also, from this window you can create new groups, subgroups, icons, and of course, tags—as you will see in a few moments. As you can see, this window contains two panels: one for listing groups of tag libraries (left side) and another that displays details about the selected tag and allows us to modify the default values (right side). Modifying a tag is a very simple operation that can be done like this:

1. Select, from the left panel, the tag that you want to modify (for example, the <div> tag from the HTML | Block subgroup, as shown in the previous screenshot).
2. In the right panel, click on the row from the value column that corresponds to the property that you want to modify (the name column).
3. Make the desired modification(s) and click the OK button to confirm.

Creating a set of icons

The Icons node from the left panel allows you to create sets of icons and import new icons for your tags. To start, you have to right-click on this node and select the Create | Create Set option from the contextual menu (as shown in the following screenshot).
This action will open the Add Icon Set window, where you have to specify a name for this new set. Once you're done with the naming, click on the Finish button (as shown in the following screenshot). For example, we have created a set named eHTMLi.

Importing an icon

You can import a new icon into any set of icons by right-clicking on the corresponding set and selecting the Create | Import Icon option from the contextual menu (as shown in the following screenshot). This action will open the Add Icon window, where you have to specify a name and a path for your icon, and then click on the Finish button (as shown in the following screenshot). Note that the image of the icon should be in GIF format.

Creating a group of tag libraries

As you can see, the JBoss Tools Palette has a consistent default set of groups of tag libraries, like HTML, JSF, JSTL, Struts, XHTML, etc. If these groups are insufficient, then you can create new ones by right-clicking on the Palette node and selecting the Create | Create Group option from the contextual menu (as shown in the following screenshot). This action will open the Create Group window, where you have to specify a name for the new group, and then click on Finish. For example, we have created a group named mygroup. Note that you can delete groups (only those created by the user) or edit groups (any group) by selecting the Delete or Edit options from the contextual menu that appears when you right-click on the chosen group.

Creating a tag library

Now that we have created a group, it's time to create a library (or a subgroup). To do this, you have to right-click on the new group and select the Create Group option from the contextual menu (as shown in the following screenshot). This action will open the Add Palette Group window, where you have to specify a name and an icon for this library, and then click on the Finish button (as shown in the following screenshot).
As an example, we have created a library named eHTML, with the icon that we imported in the Importing an icon section discussed earlier in this article. Note that you can delete a tag library (only tag libraries created by the user) by selecting the Delete option from the contextual menu that appears when you right-click on the chosen library.
Working with Simple Associations using CakePHP
Packt
24 Oct 2009
5 min read
Database relationships are hard to maintain even for a mid-sized PHP/MySQL application, particularly when multiple levels of relationships are involved, because complicated SQL queries are needed. CakePHP offers a simple yet powerful feature called 'object relational mapping' or ORM to handle database relationships with ease. In CakePHP, relations between the database tables are defined through association—a way to represent the database table relationship inside CakePHP. Once the associations are defined in models according to the table relationships, we are ready to use its wonderful functionalities. Using CakePHP's ORM, we can save, retrieve, and delete related data into and from different database tables with simplicity, in a better way—no need to write complex SQL queries with multiple JOINs anymore!

In this article by Ahsanul Bari and Anupom Syam, we will have a deep look at various types of associations and their uses. In particular, the purpose of this article is to learn:

- How to figure out association types from database table relations
- How to define different types of associations in CakePHP models
- How to utilize the associations for fetching related model data
- How to relate associated data while saving

There are basically three types of relationships that can take place between database tables:

- one-to-one
- one-to-many
- many-to-many

The first two of them are simple, as they don't require any additional table to relate the tables in the relationship. In this article, we will first see how to define associations in models for one-to-one and one-to-many relations. Then we will look at how to retrieve and delete related data from, and save data into, database tables using model associations for these simple associations.
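To make the "no complex JOINs" claim concrete, the query below is the kind of SQL you would otherwise write by hand to fetch an author together with his or her books. This is only an illustrative sketch against the authors and books tables used later in this article; the exact SQL that CakePHP generates for an association may differ:

```sql
-- Hand-written equivalent of fetching one author with all related books;
-- with associations defined, the ORM builds and runs queries like this for us.
SELECT `Author`.`id`, `Author`.`name`,
       `Book`.`id`, `Book`.`title`
FROM `authors` AS `Author`
LEFT JOIN `books` AS `Book`
    ON (`Book`.`author_id` = `Author`.`id`)
WHERE `Author`.`id` = 1;
```

With deeper relationships (say, authors to books to publishers), each extra level adds another JOIN to maintain by hand, which is exactly the burden the ORM removes.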
Defining One-To-Many Relationship in Models

To see how to define a one-to-many relationship in models, we will think of a situation where we need to store information about some authors and their books, and the relation between authors and books is one-to-many. This means that an author can have multiple books, but a book belongs to only one author (which is rather absurd, as in a real-life scenario a book can also have multiple authors). We are now going to define associations in models for this one-to-many relation, so that our models recognize their relations and can deal with them accordingly.

Time for Action: Defining One-To-Many Relation

1. Create a new database and put a fresh copy of CakePHP inside the web root. Name the database whatever you like, but rename the cake folder to relationship. Configure the database in the new Cake installation.

2. Execute the following SQL statements in the database to create a table named authors:

    CREATE TABLE `authors` (
        `id` int( 11 ) NOT NULL AUTO_INCREMENT PRIMARY KEY ,
        `name` varchar( 127 ) NOT NULL ,
        `email` varchar( 127 ) NOT NULL ,
        `website` varchar( 127 ) NOT NULL
    );

3. Create a books table in our database by executing the following SQL commands:

    CREATE TABLE `books` (
        `id` int( 11 ) NOT NULL AUTO_INCREMENT PRIMARY KEY ,
        `isbn` varchar( 13 ) NOT NULL ,
        `title` varchar( 64 ) NOT NULL ,
        `description` text NOT NULL ,
        `author_id` int( 11 ) NOT NULL
    );

4. Create the Author model using the following code (/app/models/author.php):

    <?php
    class Author extends AppModel{
        var $name = 'Author';
        var $hasMany = 'Book';
    }
    ?>

5. Use the following code to create the Book model (/app/models/book.php):

    <?php
    class Book extends AppModel{
        var $name = 'Book';
        var $belongsTo = 'Author';
    }
    ?>

6. Create a controller for the Author model with the following code (/app/controllers/authors_controller.php):

    <?php
    class AuthorsController extends AppController {
        var $name = 'Authors';
        var $scaffold;
    }
    ?>

7. Use the following code to create a controller for the Book model (/app/controllers/books_controller.php):

    <?php
    class BooksController extends AppController {
        var $name = 'Books';
        var $scaffold;
    }
    ?>

8. Now, go to the following URLs and add some test data: http://localhost/relationship/authors/ and http://localhost/relationship/books/

What Just Happened?

We have created two tables: authors and books, for storing author and book information. A foreign key named author_id is added to the books table to establish the one-to-many relation between authors and books. Through this foreign key, an author is related to multiple books, and a book is related to one single author. By Cake convention, the name of a foreign key should be the underscored, singular name of the target model, suffixed with _id.

Once the database tables are created and relations are established between them, we can define associations in models. In both of the model classes, Author and Book, we defined associations to represent the one-to-many relationship between the corresponding two tables. CakePHP provides two types of association, hasMany and belongsTo, to define one-to-many relations in models. These associations are very appropriately named:

- As an author 'has many' books, the Author model should have a hasMany association to represent its relation with the Book model.
- As a book 'belongs to' one author, the Book model should have a belongsTo association to denote its relation with the Author model.

In the Author model, an association attribute $hasMany is defined with the value Book to inform the model that every author can be related to many books. We also added a $belongsTo attribute in the Book model and set its value to Author to let the Book model know that every book is related to only one author. After defining the associations, two controllers were created for both of these models with scaffolding to see how the associations work.
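Once the associations are defined, a plain find() call on the Author model pulls in each author's related books automatically. The controller action below is a minimal sketch, not code from the article: the action name and the commented result shape are illustrative, and the code assumes the Author/Book models defined above (so it only runs inside a CakePHP application):

```php
<?php
// Hypothetical controller action illustrating association-aware retrieval,
// assuming the Author hasMany Book association defined above.
class AuthorsController extends AppController {
    var $name = 'Authors';

    function index() {
        // find('all') returns each author together with its associated
        // books, with no hand-written JOIN queries on our part.
        $authors = $this->Author->find('all');

        // The result is shaped roughly like:
        // array(
        //     array(
        //         'Author' => array('id' => 1, 'name' => '...', ...),
        //         'Book'   => array(
        //             array('id' => 1, 'title' => '...', 'author_id' => 1),
        //             array('id' => 2, 'title' => '...', 'author_id' => 1),
        //         ),
        //     ),
        //     ...
        // )
        $this->set('authors', $authors);
    }
}
?>
```

Scaffolding gives us this behavior for free, which is why browsing the scaffolded pages already shows each author's books alongside the author data.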