
How-To Tutorials - Programming

Build an Advanced Contact Manager using JBoss RichFaces 3.3: Part 3

Packt
18 Nov 2009
11 min read
The ajaxSingle and the process attributes

The ajaxSingle attribute is very useful for controlling form submission: when ajaxSingle is set to true, the form is not submitted and only the data of the Ajax component itself is sent. This attribute is available on every Ajax action component, and we can use it to call an action from a button while skipping form validation (much like the JSF immediate attribute does), or to send the value of just one input without validating and submitting the other ones. The second use case applies, for example, when we need an input menu that dynamically changes the value of other inputs without submitting the entire form:

<h:form>
    <!-- other input controls -->
    <h:selectOneMenu id="country" value="#{myBean.selectedCountry}">
        <f:selectItems value="#{myBean.myCountries}"/>
        <a:support event="onchange" ajaxSingle="true" reRender="city"/>
    </h:selectOneMenu>
    <h:selectOneMenu id="city" value="#{myBean.selectedCity}">
        <f:selectItems value="#{myBean.myCities}"/>
    </h:selectOneMenu>
    <!-- other input controls -->
</h:form>

In this example, every time the user selects a new country, the value is submitted to the bean, which recalculates the myCities property for the new country; after that, the city menu is re-rendered to show the new cities. All of this happens without submitting the form or being blocked by a validation problem elsewhere.

What if you would like to send more than one value, but still not the entire form? We can use Ajax regions (covered in the next sections), or we can use the process attribute, which contains a list of components to process while submitting the Ajax action:

<h:form>
    <!-- other input controls -->
    <h:inputText id="input1" ... />
    <h:selectOneMenu id="country" value="#{myBean.selectedCountry}">
        <f:selectItems value="#{myBean.myCountries}"/>
        <a:support event="onchange" ajaxSingle="true"
                   process="input2, input3" reRender="city"/>
    </h:selectOneMenu>
    <h:inputText id="input2" ... />
    <h:inputText id="input3" ... />
    <h:selectOneMenu id="city" value="#{myBean.selectedCity}">
        <f:selectItems value="#{myBean.myCities}"/>
    </h:selectOneMenu>
    <!-- other input controls -->
</h:form>

In this example, we also want to submit the input2 and input3 values together with the new country, because they are useful for retrieving the new cities list. By listing their ids in the process attribute, they will be processed during the submission, while input1 will not be sent. For action components such as buttons, you can likewise decide what to send using the ajaxSingle and process attributes.

Form submission and processing
We speak about form "submission" to simplify the concept and make things more understandable. In reality, the whole form is submitted on every request, but only the selected components (chosen with the ajaxSingle and/or process attributes) are "processed". By "processed" we mean that they pass through the JSF phases (decoding, conversion, validation, and model updating).

More Ajax!

For every contact, we would like to add more customizable fields, so let's use the ContactField entity connected to every Contact instance. First of all, let's create a support bean called HomeSelectedContactOtherFieldsHelper inside the book.richfaces.advcm.modules.main package. It might look like this:

@Name("homeSelectedContactOtherFieldsHelper")
@Scope(ScopeType.CONVERSATION)
public class HomeSelectedContactOtherFieldsHelper {

    @In(create = true)
    EntityManager entityManager;

    @In(required = true)
    Contact loggedUser;

    @In
    FacesMessages facesMessages;

    @In(required = true)
    HomeSelectedContactHelper homeSelectedContactHelper;

    // my code
}

Note the injected homeSelectedContactHelper component: we injected it because, to get the list of customized fields from the database, we need the contact owner. We also set the required attribute to true, because this bean can't live without the existence of homeSelectedContactHelper in the context.
Now, let's add the property containing the list of personalized fields for the selected contact:

private List<ContactField> contactFieldsList;

public List<ContactField> getContactFieldsList() {
    if (contactFieldsList == null) {
        // Getting the list of all the contact fields
        String query = "from ContactField cf where cf.contact.id=:idContactOwner order by cf.id";
        contactFieldsList = (List<ContactField>) entityManager.createQuery(query)
            .setParameter("idContactOwner",
                homeSelectedContactHelper.getSelectedContact().getId())
            .getResultList();
    }
    return contactFieldsList;
}

public void setContactFieldsList(List<ContactField> contactFieldsList) {
    this.contactFieldsList = contactFieldsList;
}

As you can see, it is a normal property, lazily initialized in the getter, which queries the database to retrieve the list of customized fields for the selected contact. We also have to put into the bean some other methods useful for managing the customized fields (adding fields to and deleting them from the database), so let's add those methods:

public void createNewContactFieldInstance() {
    // Adding the new instance as last field (for inserting a new field)
    getContactFieldsList().add(new ContactField());
}

public void persistNewContactField(ContactField field) {
    // Attaching the owner of the contact
    field.setContact(homeSelectedContactHelper.getSelectedContact());
    entityManager.persist(field);
}

public void deleteContactField(ContactField field) {
    // If it is in the database, delete it
    if (isContactFieldManaged(field)) {
        entityManager.remove(field);
    }
    // Removing the field from the list
    getContactFieldsList().remove(field);
}

public boolean isContactFieldManaged(ContactField field) {
    return field != null && entityManager.contains(field);
}

The createNewContactFieldInstance() method just adds a new (not yet persisted), empty instance of the ContactField class to the list.
After the user has filled the values in, he/she will press a button that calls the persistNewContactField() method to save the new data into the database. To delete a field, we are going to use the deleteContactField() method, and to determine whether an instance is persisted in the database or not, the isContactFieldManaged() method.

Now, let's open the /view/main/contactView.xhtml file and add the code to show the personalized fields after the h:panelGrid that shows the standard ones:

<a:repeat value="#{homeSelectedContactOtherFieldsHelper.contactFieldsList}" var="field">
    <h:panelGrid columns="2" rowClasses="prop" columnClasses="name,value">
        <h:outputText value="#{field.type} (#{field.label}):"/>
        <h:outputText value="#{field.value}"/>
    </h:panelGrid>
</a:repeat>

We are using a new RichFaces data iteration component that permits us to iterate over a collection and output the markup we want (the rich:dataTable component would instead create a table for the elements list). In our case, the h:panelGrid block will be repeated for every element of the collection (so, for every customized field).

Now, let's open the /view/main/contactEdit.xhtml file and add the code for editing the customized fields in the list:

<a:region>
    <a:outputPanel id="otherFieldsList">
        <a:repeat value="#{homeSelectedContactOtherFieldsHelper.contactFieldsList}"
                  var="field">
            <h:panelGrid columns="3" rowClasses="prop"
                         columnClasses="name,value,validatormsg">
                <h:panelGroup>
                    <h:inputText id="scOtherFieldType" value="#{field.type}"
                                 required="true" size="5">
                        <a:support event="onblur" ajaxSingle="true"/>
                    </h:inputText>
                    <h:outputText value=" ("/>
                    <h:inputText id="scOtherFieldLabel" value="#{field.label}" size="5">
                        <a:support event="onblur" ajaxSingle="true"/>
                    </h:inputText>
                    <h:outputText value=")"/><br/>
                    <rich:message for="scOtherFieldType" styleClass="messagesingle"
                                  errorClass="errormsg" infoClass="infomsg"
                                  warnClass="warnmsg"/>
                </h:panelGroup>
                <h:panelGroup>
                    <h:inputText id="scOtherFieldValue" value="#{field.value}"
                                 required="true">
                        <a:support event="onblur" ajaxSingle="true"/>
                    </h:inputText><br/>
                    <rich:message for="scOtherFieldValue" styleClass="messagesingle"
                                  errorClass="errormsg" infoClass="infomsg"
                                  warnClass="warnmsg"/>
                </h:panelGroup>
                <h:panelGroup>
                    <a:commandButton image="/img/add.png" reRender="otherFieldsList"
                        action="#{homeSelectedContactOtherFieldsHelper.persistNewContactField(field)}"
                        rendered="#{!homeSelectedContactOtherFieldsHelper.isContactFieldManaged(field)}">
                    </a:commandButton>
                    <a:commandButton image="/img/remove.png" reRender="otherFieldsList"
                        ajaxSingle="true"
                        action="#{homeSelectedContactOtherFieldsHelper.deleteContactField(field)}">
                    </a:commandButton>
                </h:panelGroup>
            </h:panelGrid>
        </a:repeat>
        <a:commandLink reRender="otherFieldsList" ajaxSingle="true"
            action="#{homeSelectedContactOtherFieldsHelper.createNewContactFieldInstance}"
            rendered="#{homeSelectedContactHelper.selectedContactManaged}"
            styleClass="image-command-link">
            <h:graphicImage value="/img/add.png"/>
            <h:outputText value="#{messages['addNewField']}"/>
        </a:commandLink>
    </a:outputPanel>
</a:region>

The code looks very similar to the one in the view box, except for the action buttons (to add a new instance, persist, or delete a field) and for the presence of the surrounding a:region tag.
This is very important in order to make the form work correctly; we will see why in the next section. Also, notice that every input component has the a:support tag as a child, which updates the bean with the edited value at the onblur event (which means that every time you switch the focus to another component, the value of the last one is submitted). So, if you delete or add a field, you will not lose the edited values of the other fields. It is also used for Ajax validation, as the user is informed that a value is not valid as soon as they move the cursor to another input.

[Screenshot: the new feature in the edit box]

Using a:support only for Ajax validation
If you want to use the a:support tag only for validation purposes, remember to set its bypassUpdates attribute to true; the processing will be faster because the JSF Update Model and Invoke Application phases are not executed.

Ajax containers

While developing a web application with RichFaces, it's very useful to know how to use Ajax containers (such as the a:region component) in order to optimize Ajax requests. In this section, we'll discuss the a:region component. It is a very important component of the framework: it defines Ajax areas that limit the part of the component tree to be processed during an Ajax request. Regions can be nested; during an Ajax request, the closest enclosing one is used.

By setting the a:region attribute regionRenderOnly to true, you can use this component to limit which elements are updated: only the components inside the region can then be updated. Another important attribute is selfRendered; setting it to true tells the framework to render the response based on the component tree without referring to the page code. This is faster, but all transient elements that are not saved in the tree (such as f:verbatim or HTML code written directly without using JSF components) will be lost at the first refresh, so you can't use them in this case.

To summarize, a:region is very useful for controlling and optimizing the rendering process, for limiting the elements of a form sent during an Ajax request without validation problems, and for showing different indicators for the Ajax status. Here is an example of using a:region:

<h:form>
    <a:region>
        <h:inputText id="it1" value="#{aBean.text1}">
            <a:support event="onkeyup" reRender="text1"/>
        </h:inputText>
        <h:inputText id="it2" value="#{aBean.text2}"/>
    </a:region>
    <h:inputText id="it3" value="#{aBean.text3}"/>
    <a:commandButton action="#{aBean.saveTexts}" reRender="text1,text2"/>
</h:form>
<h:outputText id="text1" value="#{aBean.text1}"/>
<h:outputText id="text2" value="#{aBean.text2}"/>

In this example, while the user is typing into the it1 input, a:support sends an Ajax request containing only the it1 and it2 input values: a:region limits the components processed by every Ajax request originating from inside the region. So, the Ajax request will only update aBean.text1 and aBean.text2. Wrapping a single component inside an Ajax region is equivalent to setting its ajaxSingle property to true. If the user clicks on the a:commandButton (which is outside the region), the aBean.text1, aBean.text2, and aBean.text3 values will all be updated by the Ajax request.

Coming back to our application: as all the customized fields are inside the same form component, we surround them with the a:region tag. In this way, each field is submitted regardless of the other ones. For example, without a:region, if the user empties the name input value and then tries to insert a new customized field, the operation will fail because the name input does not validate. With the a:region component, the name field is not processed and the new field is inserted.

Now that we know how to use the a:region tag, we can combine it with ajaxSingle and process in order to decide what to send with every request, and to better optimize the Ajax interactions in the application.
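To make the selection rules concrete, here is a toy model of them in plain Python (not RichFaces or JSF code; the function name and shape are purely illustrative): with a plain submit everything is processed, with ajaxSingle only the firing component is, and process adds the explicitly listed ids.

```python
def processed_ids(fired_id, all_ids, ajax_single=False, process=()):
    """Toy model: which submitted fields pass through the JSF phases."""
    if not ajax_single:
        # Ordinary request: the whole form is processed
        return list(all_ids)
    # ajaxSingle: only the firing component, plus any ids named in process
    return [fired_id] + [i for i in process if i in all_ids]

form = ["input1", "country", "input2", "input3", "city"]

print(processed_ids("country", form))                    # whole form
print(processed_ids("country", form, ajax_single=True))  # only 'country'
print(processed_ids("country", form, ajax_single=True,
                    process=("input2", "input3")))       # country + listed ids
```

Under this model, input1 is never processed by the country menu's Ajax request unless it is named in process, matching the behaviour described above.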

Plotting Geographical Data using Basemap

Packt
18 Nov 2009
3 min read
Basemap is a Matplotlib toolkit, that is, a collection of application-specific functions that extends Matplotlib's functionality; its complete documentation is available at http://matplotlib.sourceforge.net/basemap/doc/html/index.html. Toolkits are not present in the default Matplotlib installation (in fact, they even have a different namespace, mpl_toolkits), so we have to install Basemap separately. We can download it from http://sourceforge.net/projects/matplotlib/, under the matplotlib-toolkits menu of the download section, and then install it following the instructions in the documentation linked previously.

Basemap is useful for scientists such as oceanographers and meteorologists, but other users may also find it interesting. For example, we could parse an Apache log and, using GeoIP localization, draw a point on a map for each connection. We use version 0.99.3 of Basemap for our examples.

First example

Let's start playing with the library. It contains a lot of very specific features, so we're going to give just an introduction to the basic functions of Basemap.

# pyplot module import
import matplotlib.pyplot as plt
# basemap import
from mpl_toolkits.basemap import Basemap
# NumPy import
import numpy as np

These are the usual imports, along with the basemap module.

# Lambert Conformal map of USA lower 48 states
m = Basemap(llcrnrlon=-119, llcrnrlat=22,
            urcrnrlon=-64, urcrnrlat=49,
            projection='lcc', lat_1=33, lat_2=45, lon_0=-95,
            resolution='h', area_thresh=10000)

Here, we initialize a Basemap object; as we can see, it takes several parameters, depending on the projection chosen. Let's see what a projection is: in order to represent the curved surface of the Earth on a two-dimensional map, a map projection is needed, and this conversion cannot be done without distortion. Therefore, many map projections are available in Basemap, each with its own advantages and disadvantages.
Specifically, a projection can be:

equal-area (the area of features is preserved)
conformal (the shape of features is preserved)

No projection can be both equal-area and conformal at the same time. In this example, we have used a Lambert Conformal map. This projection requires additional parameters to work with; in this case, they are lat_1, lat_2, and lon_0. Along with the projection, we have to provide information about the portion of the Earth's surface that the map will describe. This is done with the help of the following arguments:

llcrnrlon: longitude of the lower-left corner of the desired map domain
llcrnrlat: latitude of the lower-left corner of the desired map domain
urcrnrlon: longitude of the upper-right corner of the desired map domain
urcrnrlat: latitude of the upper-right corner of the desired map domain

The last two arguments are:

resolution: specifies the resolution of the features added to the map (such as coastlines, borders, and so on). Here we have chosen high resolution (h); crude, low, and intermediate are also available.
area_thresh: specifies the minimum size for a feature to be plotted. In this case, only features bigger than 10,000 square kilometers will be plotted.
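To illustrate what the four corner arguments define, here is a minimal pure-Python sketch (not Basemap code; the function name and the example coordinates are illustrative) of the rectangular map domain they describe, as a simple containment check in longitude/latitude space:

```python
def in_map_domain(lon, lat, llcrnrlon=-119, llcrnrlat=22,
                  urcrnrlon=-64, urcrnrlat=49):
    """True if the point falls inside the rectangular map domain
    bounded by the lower-left and upper-right corners."""
    return llcrnrlon <= lon <= urcrnrlon and llcrnrlat <= lat <= urcrnrlat

# Kansas City (approx. lon -94.6, lat 39.1) lies inside the lower-48 domain;
# Anchorage (approx. lon -149.9, lat 61.2) lies outside it.
print(in_map_domain(-94.6, 39.1))   # True
print(in_map_domain(-149.9, 61.2))  # False
```

The defaults here mirror the corner values passed to the Basemap constructor above; anything outside that box is simply not part of the drawn map.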

Create a Quick Application in CakePHP: Part 1

Packt
17 Nov 2009
9 min read
The ingredients are fresh, sliced up, and in place. The oven is switched on, heated, and burning red. It is time for us to put on the cooking hat and start making some delicious cake recipes. So, are you ready, baker? In this article, we are going to develop a small application that we'll call "CakeTooDoo". It will be a simple to-do-list application, which will keep a record of the things that we need to do. A shopping list, chapters to study for an exam, a list of people you hate, and a list of girls you had a crush on are all examples of lists. CakeTooDoo will allow us to keep an updated list. We will be able to view all the tasks, add new tasks, tick the tasks that are done, and much more. Here's another example of a to-do list: the things that we are going to cover in this article:

Make sure Cake is properly installed for CakeTooDoo
Understand the features of CakeTooDoo
Create and configure the CakeTooDoo database
Write our first Cake model
Write our first Cake controller
Build a list that shows all the tasks in CakeTooDoo
Create a form to add new tasks to CakeTooDoo
Create another form to edit tasks in the to-do list
Have a data validation rule to make sure users do not enter an empty task title
Add functionality to delete a task from the list
Make separate lists for completed and pending tasks
Make the creation and modification time of a task look nicer
Create a homepage for CakeTooDoo

Making Sure the Oven is Ready

Before we start with CakeTooDoo, let's make sure that our oven is ready. Just to make sure that we do not run into any problems later, here is a checklist of things that should already be in place:

Apache is properly installed and running on the local machine.
MySQL database server is installed and running on the local machine.
PHP, version 4.3.2 or higher, is installed and working with Apache.
The latest 1.2 version of CakePHP is being used.
The Apache mod_rewrite module is switched on.
AllowOverride is set to All for the web root directory in the Apache configuration file, httpd.conf.
CakePHP is extracted and placed in the web root directory of Apache.
Apache has write access to the tmp directory of CakePHP.

In this case, we are going to rename the Cake directory to CakeTooDoo.

CakeTooDoo: a Simple To-do List Application

As we already know, CakeTooDoo will be a simple to-do list. The list will consist of many tasks that we want to do. Each task will consist of a title and a status. The title will indicate the thing that we need to do, and the status will keep a record of whether the task has been completed or not. Along with the title and the status, each task will also record the time when it was created and last modified. Using CakeTooDoo, we will be able to add new tasks, change the status of a task, delete a task, and view all the tasks. Specifically, CakeTooDoo will allow us to do the following things:

View all tasks in the list
Add a new task to the list
Edit a task to change its status
View all completed tasks
View all pending tasks
Delete a task
Access all the features from a homepage

You may think that there is a huge gap between knowing what to make and actually making it. But wait! With Cake, that's not true at all! We are just 10 minutes away from a fully functional and working CakeTooDoo. Don't believe me? Just keep reading and you will find out for yourself.

Configuring Cake to Work with a Database

The first thing we need to do is to create the database that our application will use. Creating databases for Cake applications is no different from creating any other database you may have created before; we just need to follow a few simple naming rules, or conventions, while creating the tables. Once the database is in place, the next step is to tell Cake to use it.

Time for Action: Creating and Configuring the Database

Create a database named caketoodoo in the local machine's MySQL server.
In your favourite MySQL client, execute the following code:

CREATE DATABASE caketoodoo;

In our newly created database, create a table named tasks by running the following code in your MySQL client:

USE caketoodoo;

CREATE TABLE tasks (
  id int(10) unsigned NOT NULL auto_increment,
  title varchar(255) NOT NULL,
  done tinyint(1) default NULL,
  created datetime default NULL,
  modified datetime default NULL,
  PRIMARY KEY (id)
);

Rename the main cake directory to CakeTooDoo, if you haven't done that yet. Move into the directory CakeTooDoo/app/config. In the config directory, there is a file named database.php.default. Rename this file to database.php.

Open the database.php file with your favourite editor, and move to line number 73, where we will find an array named $default. This array contains the database connection options. Assign login to the database user you will be using and password to that user's password. Assign database to caketoodoo. If we are using the database user ahsan with password sims, the configuration will look like this:

var $default = array(
    'driver' => 'mysql',
    'persistent' => false,
    'host' => 'localhost',
    'port' => '',
    'login' => 'ahsan',
    'password' => 'sims',
    'database' => 'caketoodoo',
    'schema' => '',
    'prefix' => '',
    'encoding' => ''
);

Now, let us check whether Cake is able to connect to the database. Fire up a browser and point it to http://localhost/CakeTooDoo/. We should get the default Cake page, with the following two lines: "Your database configuration file is present" and "Cake is able to connect to the database". If you get these lines, we have successfully configured Cake to use the caketoodoo database.

What Just Happened?

We just created our first database following Cake conventions, and configured Cake to use that database. Our database, which we named caketoodoo, has only one table, named tasks. It is a convention in Cake to use plural words for table names. Tasks, users, posts, and comments are all valid names for database tables in Cake. Our tasks table has a primary key named id; all tables in a Cake application's database must have id as the primary key.

Conventions in CakePHP: database tables used with CakePHP should have plural names, and all database tables should have a field named id as the primary key.

We then configured Cake to use the caketoodoo database. This was achieved by having a file named database.php in the configuration directory of the application. In database.php, we set the default database to caketoodoo, along with the database username and password that Cake will use to connect to the database server. Lastly, we made sure that Cake was able to connect to our database by checking the default Cake page.

Conventions in Cake are what make the magic happen. By favoring convention over configuration, Cake increases productivity to a scary level without any loss of flexibility. We do not need to spend hours setting configuration values just to make the application run: setting the database name is the only configuration we need, and everything else is figured out "automagically" by Cake. Throughout this article, we will get to know more conventions that Cake follows.

Writing our First Model

Now that Cake is configured to work with the caketoodoo database, it's time to write our first model. In Cake, each database table should have a corresponding model, which is responsible for accessing and modifying the data in that table. As we know, our database has only one table, named tasks, so we will need to define only one model. Here is how we will be doing it:

Time for Action: Creating the Task Model

Move into the directory CakeTooDoo/app/models. Here, create a file named task.php.
In the file task.php, write the following code:

<?php
class Task extends AppModel {
    var $name = 'Task';
}
?>

Make sure there are no white spaces or tabs before the <?php tag or after the ?> tag. Then save the file.

What Just Happened?

We just created our first Cake model, for the database table tasks. All the models in a CakePHP application are placed in the directory named models in the app directory.

Conventions in CakePHP: all model files are kept in the models directory under the app directory. Normally, each database table has a corresponding file (model) in this directory.

The file name for a model is the singular of the corresponding database table name, followed by the .php extension. The model file for the tasks database table is therefore named task.php.

Conventions in CakePHP: the model filename is the singular of the corresponding database table name.

A model basically contains a PHP class. The name of the class is also the singular of the database table name, but this time it is CamelCased. The name of our model is therefore Task.

Conventions in CakePHP: a model class name is also the singular of the name of the database table that it represents.

You will notice that this class inherits another class, named AppModel. All models in CakePHP must inherit this class. The AppModel class in turn inherits another class called Model, a core CakePHP class that has all the basic functions to add, modify, delete, and access data from the database. By inheriting it, all models can call these functions, so we do not need to define them separately each time we have a new model; all we need to do is inherit the AppModel class in all our models. We then defined a variable named $name in the Task model and assigned the model's name to it. This is not mandatory, as Cake can figure out the name of the model automatically, but it is good practice to name it manually.
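The table-to-model naming convention can be sketched as a small function (in Python rather than PHP, purely as an illustration; the function name is made up, and the singularization here is naive, whereas Cake's real inflector also handles irregular plurals such as people -> Person):

```python
def model_class_for_table(table):
    """Sketch of CakePHP's convention: a plural, underscored table name
    maps to a singular, CamelCased model class name."""
    # Naive singularization: strip a trailing 's'
    singular = table[:-1] if table.endswith("s") else table
    # CamelCase each underscore-separated part
    return "".join(part.capitalize() for part in singular.split("_"))

print(model_class_for_table("tasks"))           # Task
print(model_class_for_table("contact_fields"))  # ContactField
```

So the tasks table gets a Task model in task.php, exactly as described above.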

Apache Geronimo Logging

Packt
16 Nov 2009
8 min read
We will start by briefly looking at each of the logging frameworks mentioned above, and will then go into how the server logs events and errors and where it logs them to. After that, we will look into the different ways in which we can configure application logging.

Apache log4j: log4j is an open source logging framework developed by the Apache Software Foundation. It provides a set of loggers, appenders, and layouts to control which messages should be logged at runtime, where they should be logged to, and in what format. The loggers are organized in a tree hierarchy, starting with the root logger at the top. All loggers except the root logger are named entities and can be retrieved by their names. The root logger can be accessed by using the Logger.getRootLogger() API, while all other loggers can be accessed by using the Logger.getLogger() API. The names of the loggers follow the rule that the name of the parent logger, followed by a '.', is a prefix of the child logger's name. For example, if com.logger.test is the name of a logger, then its direct ancestor is com.logger, and the ancestor before that is com. Each logger may be assigned a level. The set of possible levels, in ascending order, is TRACE, DEBUG, INFO, WARN, ERROR, and FATAL. If a logger is not assigned a level, then it inherits its level from its closest ancestor. A log statement makes a logging request to the log4j subsystem; the request is enabled only if its level is higher than or equal to its logger's level. If it is lower, the message is not output through the configured appenders. Log4j allows logs to be output to multiple destinations via different appenders. Currently, there are appenders for the console, files, GUI components, JMS destinations, NT and Unix system event loggers, and remote sockets.
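As an aside, Python's standard logging module borrows this same design from log4j, so the naming hierarchy and level-inheritance rules just described can be demonstrated without a Java setup (the logger names below are the ones from the example above):

```python
import logging

# Dotted names form a tree: "com.logger" is the parent of "com.logger.test".
parent = logging.getLogger("com.logger")
parent.setLevel(logging.WARNING)

# The child has no level of its own, so it inherits its effective level
# from its closest configured ancestor, exactly as in log4j.
child = logging.getLogger("com.logger.test")

print(child.parent.name)                              # com.logger
print(child.getEffectiveLevel() == logging.WARNING)   # True
```

The same walk-up-the-tree rule is what log4j applies when a logger without an explicit level handles a logging request.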
Log4j is one of the most widely used logging frameworks for Java applications, especially ones running on application servers. It also provides more features than the other logging framework that we are about to see, the Java Logging API.

Java Logging API: the Java Logging API, also called JUL after the framework's java.util.logging package name, is another logging framework, distributed with J2SE from version 1.4 onwards. Like log4j, it provides a hierarchy of loggers and the inheritance of properties by child loggers from their parents. It provides handlers for handling output and formatters for configuring the way the output is displayed. It provides a subset of the functionality of log4j, but has the advantage that it is bundled with the JRE, so it does not require the application to include third-party JARs as log4j does.

SLF4J: the Simple Logging Facade for Java, or SLF4J, is an abstraction or facade over various logging systems. It allows a developer to plug in the desired logging framework at deployment time, and it supports bridging legacy API calls through the slf4j API to the underlying logging implementation. Versions of Apache Geronimo prior to 2.0 used Apache Commons Logging as the facade or wrapper. However, Commons Logging uses runtime binding and a dynamic discovery mechanism, which came to be the source of quite a few bugs. Hence, Apache Geronimo migrated to slf4j, which lets the developer plug in the logging framework at deployment time, thereby eliminating the need for runtime binding.

Configuring Apache Geronimo logging

Apache Geronimo uses slf4j and log4j for logging. The log4j configuration files can be found in the <GERONIMO_HOME>/var/log directory.
There are three configuration files that you will find in this directory, namely:

client-log4j.properties
deployer-log4j.properties
server-log4j.properties

Just as they are named, these files configure log4j logging for the client container (Java EE application client), the deployer system, and the server. You will also find the corresponding log files: client.log, deployer.log, and server.log. The properties files listed above contain the configuration of the various appenders, loggers, and layouts for the server, deployer, and client. As mentioned above, log4j provides a hierarchy of loggers, with a granularity ranging from the entire server down to each class on the server. Let us examine one of the configuration files, server-log4j.properties. It starts with the line:

log4j.rootLogger=INFO, CONSOLE, FILE

This means that the log4j root logger has a level of INFO and writes log statements to two appenders, namely the CONSOLE appender and the FILE appender, which write to the console and to files respectively. The console and file appenders are configured to write to System.out and to <GERONIMO_HOME>/var/log/geronimo.log. Below this section, there is a finer-grained configuration of loggers at the class or package level, for example:

log4j.logger.openjpa.Enhance=TRACE

This sets the logger for the class openjpa.Enhance to the TRACE level. Note that all classes that do not have a log level defined take on the log level of their parents; this applies recursively until we reach the root logger and inherit its log level (INFO in this case).

Configuring application logging

We will illustrate how applications can log messages in Geronimo by using two logging frameworks, namely log4j and JUL. We will also illustrate how you can use the slf4j wrapper to log messages with the above two underlying implementations.
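The layered configuration just described, a root logger level plus finer-grained per-logger overrides, can be mirrored in Python's logging module (which follows the log4j model; this is an illustrative sketch, not Geronimo configuration):

```python
import logging

# Equivalent of "log4j.rootLogger=INFO, ...": root at INFO
root = logging.getLogger()
root.setLevel(logging.INFO)

# Equivalent of "log4j.logger.openjpa.Enhance=TRACE": a per-logger
# override at a finer level (DEBUG is Python's closest match to TRACE)
tracer = logging.getLogger("openjpa.Enhance")
tracer.setLevel(logging.DEBUG)

# A logger with no explicit level inherits the root's INFO level
other = logging.getLogger("some.other.Class")
print(other.getEffectiveLevel() == logging.INFO)  # True: inherited from root
print(tracer.isEnabledFor(logging.DEBUG))         # True: override applies
print(root.isEnabledFor(logging.DEBUG))           # False: root stays at INFO
```

The recursive inheritance rule is the same in both frameworks: a logger without a level walks up its ancestors until it finds one that has a level set.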
We will use a sample application, the HelloWorld web application, to illustrate this.

Using log4j

We can use log4j to write the application log either to a separate logfile or to the geronimo.log file. We will also illustrate how the logs can be written to a separate file in the <GERONIMO_HOME>/var/log directory by using a GBean.

Logging to the geronimo.log file and the command console

Logging to the geronimo.log file and the command console is the simplest way to do application logging in Geronimo. To enable this in your application, you only need to add logging statements to your application code. The HelloWorld sample application has a servlet called HelloWorldServlet, which contains such logging statements. The servlet is shown below:

package com.packtpub.hello;

import java.io.*;
import javax.servlet.ServletException;
import javax.servlet.http.*;
import org.apache.log4j.Logger;

public class HelloWorldServlet extends HttpServlet {

    Logger logger = Logger.getLogger(HelloWorldServlet.class.getName());

    protected void service(HttpServletRequest req, HttpServletResponse res)
            throws ServletException, IOException {
        res.setContentType("text/html");
        PrintWriter out = res.getWriter();
        out.print("<html>");
        logger.info("Printing out <html>");
        out.print("<head><title>Hello World Application</title></head>");
        logger.info("Printing out <head><title>Hello World Application</title></head>");
        out.print("<body>");
        logger.info("Printing out <body>");
        out.print("<b>Hello World</b><br>");
        logger.info("Printing out <b>Hello World</b><br>");
        out.print("</body>");
        logger.info("Printing out </body>");
        out.print("</html>");
        logger.info("Printing out </html>");
        logger.warn("Sample Warning message");
        logger.error("Sample error message");
    }
}

Deploy the sample HelloWorld-1.0.war file, and then access http://localhost:8080/HelloWorld/.
This servlet will log the following messages to the command console, as shown in the image below.

The geronimo.log file will have the following entries:

2009-02-02 20:01:38,906 INFO [HelloWorldServlet] Printing out <html>
2009-02-02 20:01:38,906 INFO [HelloWorldServlet] Printing out <head><title>Hello World Application</title></head>
2009-02-02 20:01:38,906 INFO [HelloWorldServlet] Printing out <body>
2009-02-02 20:01:38,906 INFO [HelloWorldServlet] Printing out <b>Hello World</b><br>
2009-02-02 20:01:38,906 INFO [HelloWorldServlet] Printing out </body>
2009-02-02 20:01:38,906 INFO [HelloWorldServlet] Printing out </html>
2009-02-02 20:01:38,906 WARN [HelloWorldServlet] Sample Warning message
2009-02-02 20:01:38,906 ERROR [HelloWorldServlet] Sample error message

Notice that only the messages with a logging level greater than or equal to WARN are logged to the command console, while all of the INFO, WARN, and ERROR messages are logged to the geronimo.log file. This is because in server-log4j.properties the CONSOLE appender's threshold is set to the value of the system property org.apache.geronimo.log.ConsoleLogLevel, as shown below:

log4j.appender.CONSOLE.Threshold=${org.apache.geronimo.log.ConsoleLogLevel}

The value of this property is WARN by default. All of the INFO messages are logged to the logfile because the FILE appender has a lower threshold of TRACE, as shown below:

log4j.appender.FILE.Threshold=TRACE

With this setup, messages of different severity go to the console and to the logfile to which the server messages are also logged. This is done for operator convenience: only high-severity log messages, such as warnings and errors, that need the operator's attention are logged to the console. The other messages are logged only to a file.
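The console/file threshold split described above is not specific to log4j; the same effect can be reproduced with JUL in a self-contained sketch. Here the handler and logger names are illustrative, and a tiny in-memory handler stands in for the real console and file destinations:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

public class ThresholdDemo {
    // A tiny handler that just remembers the records it was asked to publish
    static class CollectingHandler extends Handler {
        final List<LogRecord> records = new ArrayList<>();
        CollectingHandler(Level threshold) { setLevel(threshold); }
        @Override public void publish(LogRecord r) { if (isLoggable(r)) records.add(r); }
        @Override public void flush() {}
        @Override public void close() {}
    }

    public static void main(String[] args) {
        Logger logger = Logger.getLogger("demo");
        logger.setUseParentHandlers(false);
        logger.setLevel(Level.ALL);

        // "Console" only sees WARNING and above; "file" sees everything,
        // mirroring the CONSOLE/FILE appender thresholds described above
        CollectingHandler console = new CollectingHandler(Level.WARNING);
        CollectingHandler file = new CollectingHandler(Level.ALL);
        logger.addHandler(console);
        logger.addHandler(file);

        logger.info("info message");
        logger.warning("warning message");
        logger.severe("error message");

        System.out.println("console got " + console.records.size()); // 2
        System.out.println("file got " + file.records.size());       // 3
    }
}
```

The logger itself accepts all three records, and each handler then applies its own threshold, just as each log4j appender applies its Threshold independently of the logger's level.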
Packt
13 Nov 2009
10 min read

Geronimo Architecture: Part 2

Class loader architecture

This section covers the class loader architecture of Apache Geronimo. The following image shows the class loader hierarchy for an application that is deployed in Apache Geronimo.

The BootStrap class loader of the JVM is followed by the Extensions class loader and then the System class loader. The j2ee-system class loader is the primary class loader of Apache Geronimo. After the j2ee-system class loader, there are multiple other layers of class loaders before reaching the application class loaders. Applications have an application class loader, which loads any required application-level libraries and EJB modules. However, a web application packaged in the EAR will have its own class loader. The Administration Console has a ClassLoader Viewer portlet that can be used to view the class loader hierarchy, as well as the classes loaded by each class loader.

Modifying default class loading behavior

In certain situations, we will need a class loading strategy that differs from the default one provided by Apache Geronimo. A common situation where we need this is when a parent configuration uses a library that is also used by the child, and the parent's version of the library is incompatible with the child's version. In this case, if we follow the default class loading behavior, then we will always get the classes loaded by the parent configuration and will never be able to reference the classes in the library present in the child configuration. Apache Geronimo provides you with the ability to modify the default class loading behavior at the configuration level to handle such scenarios. This is done by providing certain elements in the deployment plan which, if present, will change the class loading behavior.
These elements, and the changes in class loading behavior that they produce, are explained as follows:

hidden-classes: This element hides classes that are loaded in parent class loaders, so that the child class loader loads its own copy. Similarly, we can use this element to specify resources that should be loaded from the configuration class loader. For example, consider the case where you have a module that needs to load its own copy of log4j, while the server has its own copy, used for logging, loaded in a parent class loader. We can add the hidden-classes element to the deployment plan for that module so that it loads its own copy of log4j, and the server-loaded version of log4j is hidden from it.

non-overridable-classes: This element specifies the list of classes that can be loaded only from the parent configurations of this configuration. In other words, the classes specified in this element cannot be loaded by the current configuration's class loader. The non-overridable-classes element prevents applications from loading their own copies of classes that should always be loaded from the parent class loaders, such as the Java EE API classes.

private-classes: The classes defined by this element will not be visible to class loaders that are children of the current class loader. These classes will be loaded either from the current class loader or from its parents. The same class loading behavior can be achieved by using the hidden-classes element in all of the child class loaders.

inverse-classloading: If this element is specified, then the standard class loading strategy is reversed for this module. This means that a class is first looked up in the current class loader and then in its parent; the class loader hierarchy is effectively inverted.

suppress-default-environment: This element suppresses the environment that is created by the builder for this module or configuration.
This is a rarely-used element and can have nasty side effects if it is used carelessly.

Important modules

In this section, we will list the important configurations in Apache Geronimo, grouped according to the Apache or other open source projects that they wrap. Configurations that do not wrap any other open source project are listed under the Apache Geronimo section.

Apache ActiveMQ:
org.apache.geronimo.configs/activemq-broker/2.1.4/car

Apache Axis:
org.apache.geronimo.configs/axis/2.1.4/car
org.apache.geronimo.configs/axis-deployer/2.1.4/car

Apache Axis2:
org.apache.geronimo.configs/axis2-deployer/2.1.4/car
org.apache.geronimo.configs/axis2-ejb/2.1.4/car
org.apache.geronimo.configs/axis2-ejb-deployer/2.1.4/car

Apache CXF:
org.apache.geronimo.configs/cxf/2.1.4/car
org.apache.geronimo.configs/cxf-deployer/2.1.4/car
org.apache.geronimo.configs/cxf-ejb/2.1.4/car
org.apache.geronimo.configs/cxf-ejb-deployer/2.1.4/car

Apache Derby:
org.apache.geronimo.configs/derby/2.1.4/car

Apache Geronimo:
org.apache.geronimo.configs/client/2.1.4/car
org.apache.geronimo.configs/client-deployer/2.1.4/car
org.apache.geronimo.configs/client-security/2.1.4/car
org.apache.geronimo.configs/client-transaction/2.1.4/car
org.apache.geronimo.configs/clustering/2.1.4/car
org.apache.geronimo.configs/connector-deployer/2.1.4/car
org.apache.geronimo.configs/farming/2.1.4/car
org.apache.geronimo.configs/hot-deployer/2.1.4/car
org.apache.geronimo.configs/j2ee-deployer/2.1.4/car
org.apache.geronimo.configs/j2ee-server/2.1.4/car
org.apache.geronimo.configs/javamail/2.1.4/car
org.apache.geronimo.configs/persistence-jpa10-deployer/2.1.4/car
org.apache.geronimo.configs/sharedlib/2.1.4/car
org.apache.geronimo.configs/transaction/2.1.4/car
org.apache.geronimo.configs/webservices-common/2.1.4/car
org.apache.geronimo.framework/client-system/2.1.4/car
org.apache.geronimo.framework/geronimo-gbean-deployer/2.1.4/car
org.apache.geronimo.framework/j2ee-security/2.1.4/car
org.apache.geronimo.framework/j2ee-system/2.1.4/car
org.apache.geronimo.framework/jee-specs/2.1.4/car
org.apache.geronimo.framework/jmx-security/2.1.4/car
org.apache.geronimo.framework/jsr88-cli/2.1.4/car
org.apache.geronimo.framework/jsr88-deploymentfactory/2.1.4/car
org.apache.geronimo.framework/offline-deployer/2.1.4/car
org.apache.geronimo.framework/online-deployer/2.1.4/car
org.apache.geronimo.framework/plugin/2.1.4/car
org.apache.geronimo.framework/rmi-naming/2.1.4/car
org.apache.geronimo.framework/server-security-config/2.1.4/car
org.apache.geronimo.framework/shutdown/2.1.4/car
org.apache.geronimo.framework/transformer-agent/2.1.4/car
org.apache.geronimo.framework/upgrade-cli/2.1.4/car

Apache Yoko:
org.apache.geronimo.configs/j2ee-corba-yoko/2.1.4/car
org.apache.geronimo.configs/client-corba-yoko/2.1.4/car

Apache Jasper:
org.apache.geronimo.configs/jasper/2.1.4/car
org.apache.geronimo.configs/jasper-deployer/2.1.4/car

JaxWS:
org.apache.geronimo.configs/jaxws-deployer/2.1.4/car
org.apache.geronimo.configs/jaxws-ejb-deployer/2.1.4/car

JSR 88:
org.apache.geronimo.configs/jsr88-ear-configurer/2.1.4/car
org.apache.geronimo.configs/jsr88-jar-configurer/2.1.4/car
org.apache.geronimo.configs/jsr88-rar-configurer/2.1.4/car
org.apache.geronimo.configs/jsr88-war-configurer/2.1.4/car

Apache MyFaces:
org.apache.geronimo.configs/myfaces/2.1.4/car
org.apache.geronimo.configs/myfaces-deployer/2.1.4/car

Apache OpenEJB:
org.apache.geronimo.configs/openejb/2.1.4/car
org.apache.geronimo.configs/openejb-corba-deployer/2.1.4/car
org.apache.geronimo.configs/openejb-deployer/2.1.4/car

Apache OpenJPA:
org.apache.geronimo.configs/openjpa/2.1.4/car

Spring:
org.apache.geronimo.configs/spring/2.1.4/car

Apache Tomcat 6:
org.apache.geronimo.configs/tomcat6/2.1.4/car
org.apache.geronimo.configs/tomcat6-clustering-builder-wadi/2.1.4/car
org.apache.geronimo.configs/tomcat6-clustering-wadi/2.1.4/car
org.apache.geronimo.configs/tomcat6-deployer/2.1.4/car
org.apache.geronimo.configs/tomcat6-no-ha/2.1.4/car

Apache WADI:
org.apache.geronimo.configs/wadi-clustering/2.1.4/car

GShell:
org.apache.geronimo.framework/gshell-framework/2.1.4/car
org.apache.geronimo.framework/gshell-geronimo/2.1.4/car

Apache XmlBeans:
org.apache.geronimo.framework/xmlbeans/2.1.4/car

Apache Pluto:
org.apache.geronimo.plugins/pluto-support/2.1.4/car

If you check the configurations, you will see that most of the components that make up Geronimo have a deployer configuration and a main configuration. The deployer configuration contains the GBeans that deploy modules onto that component. For example, the openejb-deployer contains GBeans that implement the functionality to deploy an EJB module onto Apache Geronimo. To accomplish this, the EJB JAR file and its corresponding deployment plan are parsed by the deployer and converted into a format that can be understood by the OpenEJB subsystem. This is then deployed on the OpenEJB container. The main configuration usually contains the GBeans that configure the container and manage its lifecycle.

Server directory structure

It is important for a user or an administrator to understand the directory structure of a Geronimo server installation. The directory structure of a v2.1.4 server is shown in the following screenshot. Please note that the directory that we will be referring to as <GERONIMO_HOME> is the geronimo-tomcat6-javaee5-2.1.4 directory shown in the screenshot.

The following are some important directories that you should be familiar with:

The bin directory contains the command scripts and the JAR files required to start the server, stop the server, invoke the deployer, and start the GShell.
The etc directory contains the configuration files for GShell.
The schema directory contains Geronimo schemas.
The var/config directory contains Geronimo configuration files. A Geronimo administrator or user can find most of the configuration information about the server here.
The var/derby directory contains the database files for the embedded Derby database server.
The var/log directory contains the logging configuration and logfiles.
The var/security directory contains user credential and grouping files.
The var/security/keystores directory contains the cryptographic keystore files used for server SSL configuration.

The following are some important configuration files under the Geronimo directory structure:

config.xml: This file is located in the <GERONIMO_HOME>/var/config directory. It preserves information about GBean attributes and references that were overridden from the default values used at deployment time.

config-substitutions.properties: This file is located in the <GERONIMO_HOME>/var/config directory. The property values specified in this file are used in expressions in config.xml. The property values in this file can be overridden by using a system property or environment variable whose name is the property name prefixed with org.apache.geronimo.config.substitution.

artifact_aliases.properties: This file is located in the <GERONIMO_HOME>/var/config directory. It is used to substitute one module or configuration ID for another. The entries in this file are of the form oldArtifactId=newArtifactId, for example default/mylib//jar=default/mylib/2.0/jar. Note that the version number in the old artifact ID may be omitted, but the version number in the new artifact ID must be specified.

client_artifact_aliases.properties: This file is located in the <GERONIMO_HOME>/var/config directory. It is used for artifact aliasing with application clients.

server-log4j.properties: This file is located in the <GERONIMO_HOME>/var/log directory. It contains the logging configuration for the server.

deployer-log4j.properties: This file is located in the <GERONIMO_HOME>/var/log directory. It contains the logging configuration for the deployer.
client-log4j.properties: This file is located in the <GERONIMO_HOME>/var/log directory. It contains the logging configuration for application clients.

users.properties: This file is located in the <GERONIMO_HOME>/var/security directory. It contains the authentication credentials for the server.

groups.properties: This file is located in the <GERONIMO_HOME>/var/security directory. It contains the grouping information for the users defined in users.properties.

The directories that contain sensitive information, such as user passwords, include var/security, var/derby, and var/config. These directories should be protected using operating-system-provided directory and file security.
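To make the property-substitution mechanism described above concrete, here is a hedged sketch. The property names NamingPort and PortOffset are typical of a 2.1.x install, but verify them against your own config-substitutions.properties. The file supplies default values:

```
NamingPort=1099
PortOffset=0
```

Expressions in config.xml can then reference these names, and either value can be overridden at startup without editing any file, by prefixing the property name as described above, for example:

```
java -Dorg.apache.geronimo.config.substitution.PortOffset=10 -jar bin/server.jar
```

With this override, every port expression that adds PortOffset shifts by 10 for that server run only, which is a convenient way to start a second instance alongside an existing one.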
Packt
13 Nov 2009
8 min read

Geronimo Architecture: Part 1

Inversion of Control and dependency injection

Inversion of Control (IoC) is a design pattern used in software engineering that facilitates the creation of loosely-coupled systems. In an IoC system, the flow of control is inverted: the program is called by the framework, unlike in normal linear systems where the program calls the libraries. This allows us to circumvent the tight coupling that arises from the control being with the calling program. Dependency injection is a specific case of IoC where the framework provides an assembler or a configurator that supplies the user program with the objects that it needs through injection. The user program declares dependencies on other services (provided by the framework or other user programs), and the assembler injects the dependencies into the user program wherever they are needed. It is important that you clearly understand the concept of dependency injection before we proceed further into the Geronimo architecture, as it is the core concept behind the functioning of the Geronimo kernel and how services are loosely coupled in it. To help you understand the concept more clearly, we will provide a simple example.
Consider the following classes:

package packtsamples;

public class RentCalculator {
    private float rentRate;
    private TaxCalculator tCalc;

    public RentCalculator(float rate, float taxRate) {
        rentRate = rate;
        tCalc = new ServiceTaxCalculator(taxRate);
    }

    public void calculateRent(int noOfDays) {
        float totalRent = noOfDays * rentRate;
        float tax = tCalc.calculateTax(totalRent);
        totalRent = totalRent + tax;
        System.out.println("Rent is:" + totalRent);
    }
}

package packtsamples;

public class ServiceTaxCalculator implements TaxCalculator {
    private float taxRate;

    public ServiceTaxCalculator(float rate) {
        taxRate = rate;
    }

    public float calculateTax(float amount) {
        return (amount * taxRate / 100);
    }
}

package packtsamples;

public interface TaxCalculator {
    public float calculateTax(float amount);
}

package packtsamples;

public class Main {
    /**
     * @param args args[0] = taxRate, args[1] = rentRate, args[2] = noOfDays
     */
    public static void main(String[] args) {
        RentCalculator rc = new RentCalculator(Float.parseFloat(args[1]), Float.parseFloat(args[0]));
        rc.calculateRent(Integer.parseInt(args[2]));
    }
}

The RentCalculator class calculates the room rent, including tax, given the rent rate and the number of days. The ServiceTaxCalculator class calculates the tax on a particular amount, given the tax rate. As you can see from the code given above, the RentCalculator class depends on the TaxCalculator interface for calculating the tax. In this sample, the ServiceTaxCalculator class is instantiated inside the RentCalculator class. This makes the two classes tightly coupled, so we cannot use RentCalculator with another TaxCalculator implementation. This problem can be solved through dependency injection. If we apply this concept to the previous classes, then the architecture will be slightly different.
This is shown in the following code block:

package packtsamples.di;

public class RentCalculator {
    private float rentRate;
    private TaxCalculator tCalc;

    public RentCalculator(float rate, TaxCalculator tCalc) {
        rentRate = rate;
        this.tCalc = tCalc;
    }

    public void calculateRent(int noOfDays) {
        float totalRent = noOfDays * rentRate;
        float tax = tCalc.calculateTax(totalRent);
        totalRent = totalRent + tax;
        System.out.println("Rent is:" + totalRent);
    }
}

package packtsamples.di;

public class ServiceTaxCalculator implements TaxCalculator {
    private float taxRate;

    public ServiceTaxCalculator(float rate) {
        taxRate = rate;
    }

    public float calculateTax(float amount) {
        return (amount * taxRate / 100);
    }
}

package packtsamples.di;

public interface TaxCalculator {
    public float calculateTax(float amount);
}

Notice the difference here from the previous implementation. The RentCalculator class now takes a TaxCalculator argument in its constructor and uses that instance to calculate tax by calling its calculateTax method. You can pass in any implementation, and its calculateTax method will be called. In the following section, we will see how to write the class that will assemble this sample into a working program.
package packtsamples.di;

import java.lang.reflect.InvocationTargetException;

public class Assembler {

    private TaxCalculator createTaxCalculator(String className, float taxRate) {
        TaxCalculator tc = null;
        try {
            Class cls = Class.forName(className);
            tc = (TaxCalculator) cls.getConstructors()[0].newInstance(taxRate);
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
        } catch (IllegalArgumentException e) {
            e.printStackTrace();
        } catch (SecurityException e) {
            e.printStackTrace();
        } catch (InstantiationException e) {
            e.printStackTrace();
        } catch (IllegalAccessException e) {
            e.printStackTrace();
        } catch (InvocationTargetException e) {
            e.printStackTrace();
        }
        return tc;
    }

    private RentCalculator createRentCalculator(float rate, TaxCalculator tCalc) {
        return new RentCalculator(rate, tCalc);
    }

    private void assembleAndExecute(String className, float taxRate, float rentRate, int noOfDays) {
        TaxCalculator tc = createTaxCalculator(className, taxRate);
        createRentCalculator(rentRate, tc).calculateRent(noOfDays);
    }

    /**
     * @param args args[0] = className, args[1] = taxRate, args[2] = rentRate, args[3] = noOfDays
     */
    public static void main(String[] args) {
        new Assembler().assembleAndExecute(args[0], Float.parseFloat(args[1]),
                Float.parseFloat(args[2]), Integer.parseInt(args[3]));
    }
}

In the sample code given above, you can see that there is a new class called Assembler. In its main method, the Assembler instantiates the implementation class of TaxCalculator that we want RentCalculator to use. The Assembler then instantiates RentCalculator, injects the TaxCalculator instance of the type we specified into it, and calls the calculateRent method. Thus the two classes are not tightly coupled, and the program control lies with the assembler, unlike in the previous case. There is Inversion of Control happening here, as the framework (the Assembler, in this case) controls the execution of the program. This is a very trivial sample.
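Constructor injection, as used above, is only one style of wiring; the same dependency could instead be pushed in through a setter method after the object has been created. A minimal, self-contained sketch (the class names here are our own illustration, not from the book's sample):

```java
// Setter injection: the assembler creates the object first,
// then supplies the dependency through a setter method.
public class SetterInjectionDemo {

    interface TaxCalculator {
        float calculateTax(float amount);
    }

    static class RentCalculator {
        private float rentRate;
        private TaxCalculator tCalc;

        RentCalculator(float rate) { rentRate = rate; }

        // The injection point: the assembler calls this, not the class itself
        void setTaxCalculator(TaxCalculator tc) { tCalc = tc; }

        float calculateRent(int noOfDays) {
            float totalRent = noOfDays * rentRate;
            return totalRent + tCalc.calculateTax(totalRent);
        }
    }

    public static void main(String[] args) {
        RentCalculator rc = new RentCalculator(100f);
        rc.setTaxCalculator(amount -> amount * 10f / 100f); // inject a 10% tax rule
        System.out.println(rc.calculateRent(2)); // 220.0
    }
}
```

Setter injection is convenient when a dependency is optional or may be replaced during the object's lifetime, at the cost of allowing the object to exist briefly in a partially-wired state.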
We can write an assembler class that is more generic and is not even coupled to the interface, as it is in the previous case. This is an example of dependency injection. An injection of this type is called constructor injection, where the assembler injects values through the constructor. There are also other types of dependency injection, namely setter injection and field injection. In the former, the values are injected into the object by invoking setter methods provided by the class; in the latter, the values are injected into fields through reflection or some other mechanism. The Apache Geronimo kernel uses both setter injection and constructor injection for resolving dependencies between the different modules or configurations that are deployed in it.

The code for these examples is provided under di-sample in the samples. To build the sample, use the following command:

mvn clean install

To run the sample without dependency injection, use the following command:

java -cp di-sample-1.0.jar packtsamples.Main <taxRate> <rentRate> <noOfDays>

To run the sample with dependency injection, use the following command:

java -cp di-sample-1.0.jar packtsamples.Assembler packtsamples.di.ServiceTaxCalculator <taxRate> <rentRate> <noOfDays>

GBeans

A GBean is the basic unit in Apache Geronimo. It is a wrapper that is used to wrap or implement the different services that are deployed in the kernel. GBeans are similar to MBeans from JMX. A GBean has attributes that store its state and references to other GBeans, and it can also register dependencies on other GBeans. GBeans also have lifecycle callback methods and metadata. The Geronimo architects decided to invent the concept of GBeans, instead of using MBeans, in order to keep the Geronimo architecture independent of JMX. This ensured that they did not need to push all of the functionality required for the IoC container (that forms the Geronimo kernel) into the JMX implementation.
Even though GBeans are built on top of MBeans, they can be moved to some other framework as well. A user who is writing a GBean has to follow certain conventions. A sample GBean is shown below:

import org.apache.geronimo.gbean.GBeanInfo;
import org.apache.geronimo.gbean.GBeanInfoBuilder;
import org.apache.geronimo.gbean.GBeanLifecycle;

public class TestGBean implements GBeanLifecycle {

    private String name;

    public TestGBean(String name) {
        this.name = name;
    }

    public void doFail() {
        System.out.println("Failed.............");
    }

    public void doStart() throws Exception {
        System.out.println("Started............" + name);
    }

    public void doStop() throws Exception {
        System.out.println("Stopped............" + name);
    }

    public static final GBeanInfo GBEAN_INFO;

    static {
        GBeanInfoBuilder infoBuilder = GBeanInfoBuilder.createStatic(TestGBean.class, "TestGBean");
        infoBuilder.setPriority(2);
        infoBuilder.addAttribute("name", String.class, true);
        infoBuilder.setConstructor(new String[] {"name"});
        GBEAN_INFO = infoBuilder.getGBeanInfo();
    }

    public static GBeanInfo getGBeanInfo() {
        return GBEAN_INFO;
    }
}

You will notice certain characteristics in this GBean. These are as follows:

All GBeans should have a static getGBeanInfo method, which returns a GBeanInfo object that describes the attributes and references of the GBean, as well as the interfaces it implements.

All GBeans have a static block in which a GBeanInfoBuilder object is created and linked to the GBean. All of the metadata associated with the GBean is then added to the GBeanInfoBuilder object. The metadata includes descriptions of the attributes, references, interfaces, and constructors of the GBean.
We can add GBeans to configurations either programmatically, using methods exposed by the configuration manager and the kernel, or by making an entry for the GBean in the deployment plan, as follows:

<gbean name="TestGBean" class="TestGBean">
    <attribute name="name">Nitya</attribute>
</gbean>

We need to specify the attribute values in the plan, and the kernel will inject those values into the GBean at runtime. There are three attributes for which we need not specify values. These are called the magic attributes, and the kernel automatically injects their values when the GBean is being started. These attributes are abstractName, kernel, and classLoader. Because there is no way to specify the values of these attributes in the deployment plan (an XML file in which we provide Geronimo-specific information while deploying a configuration), we need not specify them there. However, we should declare these attributes in the GBeanInfo and in the constructor. If the abstractName attribute is declared, then the Geronimo kernel will inject the abstractName of the GBean into it. If the kernel attribute is declared, then a reference to the kernel that loaded this GBean is injected. If classLoader is declared, then the class loader for that configuration is injected.
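The attribute-injection behavior described above can be imitated in plain Java. The following toy sketch (our own illustration, not Geronimo kernel code) shows how a container can match declared constructor attribute names, like those registered via setConstructor, to plan-supplied values and instantiate the bean via reflection:

```java
import java.lang.reflect.Constructor;
import java.util.Map;

public class ToyKernel {
    // Instantiate a class by matching declared attribute names to constructor
    // argument order, the way a plan's <attribute> values are injected at runtime
    public static Object instantiate(Class<?> cls, String[] ctorAttrs,
                                     Map<String, ?> attrs) throws Exception {
        Object[] args = new Object[ctorAttrs.length];
        for (int i = 0; i < ctorAttrs.length; i++) {
            args[i] = attrs.get(ctorAttrs[i]);
        }
        // For simplicity we assume a single public constructor, as in the samples
        Constructor<?> ctor = cls.getConstructors()[0];
        return ctor.newInstance(args);
    }

    public static class EchoBean {
        final String name;
        public EchoBean(String name) { this.name = name; }
    }

    public static void main(String[] args) throws Exception {
        EchoBean bean = (EchoBean) instantiate(
                EchoBean.class, new String[] {"name"}, Map.of("name", "TestGBean"));
        System.out.println(bean.name); // TestGBean
    }
}
```

The real kernel does far more (type conversion, references, magic attributes), but the core idea is the same: the metadata declares which attribute feeds which constructor position, and the container does the wiring.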
Packt
12 Nov 2009
11 min read

Geronimo Plugins

Developing a plugin

In this section, we will develop our very own plugin, the World Clock plugin. This is a very simple plugin that provides the time in different locales. We will go through all of the steps required to develop it from scratch. These steps are as follows:

Creating the plugin project
Generating the plugin project using Maven 2
Writing the plugin interface and implementation
Creating a deployment plan
Installing the plugin

Creating a plugin project

There are many ways in which you can develop plugins. You could manually create all of the plugin artifacts and package them. We will use the easiest method, that is, using Maven's geronimo-plugin-archetype. This will generate the plugin project with all of the artifacts, with default values filled in. To generate the plugin project, run the following command:

mvn archetype:create -DarchetypeGroupId=org.apache.geronimo.buildsupport -DarchetypeArtifactId=geronimo-plugin-archetype -DarchetypeVersion=2.1.4 -DgroupId=com.packt.plugins -DartifactId=WorldClock

This will create a plugin project called WorldClock. A directory called WorldClock will be created, with the following artifacts in it:

pom.xml
pom.sample.xml
src/main/plan/plan.xml
src/main/resources

In the same directory in which the WorldClock directory was created, you will need to create a Java project that will contain the source code of the plugin. We can create this by using the following command:

mvn archetype:create -DgroupId=com.packt.plugins -DartifactId=WorldClockModule

This will create a Java project with the same groupId and the artifactId WorldClockModule, in a directory called WorldClockModule. This directory will contain the following artifacts:

pom.xml
src/main/java/com/packt/plugins/App.java
src/test/java/com/packt/plugins/AppTest.java

You can safely remove the second and third artifacts, as they are just sample stubs generated by the archetype.
In this project, we will need to modify the pom.xml to add a dependency on the Geronimo kernel, so that we can compile the GBean that we are going to create and include in this module. The modified pom.xml is shown below:

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.packt.plugins</groupId>
    <artifactId>WorldClockModule</artifactId>
    <packaging>jar</packaging>
    <version>1.0-SNAPSHOT</version>
    <name>WorldClockModule</name>
    <url>http://maven.apache.org</url>
    <dependencies>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>3.8.1</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.geronimo.framework</groupId>
            <artifactId>geronimo-kernel</artifactId>
            <version>2.1.4</version>
        </dependency>
    </dependencies>
</project>

For simplicity, we have only one GBean in our sample. In a real-world scenario, there may be many GBeans that you will need to create. Now we need to create the GBean that forms the core functionality of our plugin. We will create two types, namely, Clock and ClockGBean. These are shown below:

package com.packt.plugins;

public interface Clock {
    public void setTimeZone(String timeZone);
    public String getTime();
}

and

package com.packt.plugins;

import java.util.Calendar;
import java.util.GregorianCalendar;
import java.util.TimeZone;

import org.apache.geronimo.gbean.GBeanInfo;
import org.apache.geronimo.gbean.GBeanInfoBuilder;
import org.apache.geronimo.gbean.GBeanLifecycle;

public class ClockGBean implements GBeanLifecycle, Clock {

    public static final GBeanInfo GBEAN_INFO;

    private String name;
    private String timeZone;

    public String getTime() {
        GregorianCalendar cal = new GregorianCalendar(TimeZone.getTimeZone(timeZone));
        int hour12 = cal.get(Calendar.HOUR);    // 0..11
        int minutes = cal.get(Calendar.MINUTE); // 0..59
        int seconds = cal.get(Calendar.SECOND); // 0..59
        boolean am = cal.get(Calendar.AM_PM) == Calendar.AM;
        return (timeZone + ":" + hour12 + ":" + minutes + ":" + seconds + ":" + ((am) ? "AM" : "PM"));
    }

    public void setTimeZone(String timeZone) {
        this.timeZone = timeZone;
    }

    public ClockGBean(String name) {
        this.name = name;
        timeZone = TimeZone.getDefault().getID();
    }

    public void doFail() {
        System.out.println("Failed.............");
    }

    public void doStart() throws Exception {
        System.out.println("Started............" + name + " " + getTime());
    }

    public void doStop() throws Exception {
        System.out.println("Stopped............" + name);
    }

    static {
        GBeanInfoBuilder infoFactory = GBeanInfoBuilder.createStatic("ClockGBean", ClockGBean.class);
        infoFactory.addAttribute("name", String.class, true);
        infoFactory.addInterface(Clock.class);
        infoFactory.setConstructor(new String[] {"name"});
        GBEAN_INFO = infoFactory.getBeanInfo();
    }

    public static GBeanInfo getGBeanInfo() {
        return GBEAN_INFO;
    }
}

As you can see, Clock is an interface and ClockGBean is a GBean that implements it. The Clock interface exposes the functionality provided by ClockGBean. The doStart(), doStop(), and doFail() methods come from the GBeanLifecycle interface and provide lifecycle callbacks. The next step is to run Maven to build this module. Go to the command prompt, and change the directory to the WorldClockModule directory. To build the module, run the following command:

mvn clean install

Once the build completes, you will find WorldClockModule-1.0-SNAPSHOT.jar in the WorldClockModule/target directory. Now change the directory to WorldClock, and open the generated pom.xml file.
You will need to uncomment the deploymentConfigs for the gbeanDeployer, and add the following module that you want to include in the plugin: <module> <groupId>com.packt.plugins</groupId> <artifactId>WorldClockModule</artifactId> <version>1.0</version> <type>jar</type></module> You will notice that we are using the car-maven-plugin in the pom.xml file. The car-maven-plugin is used to build Apache Geronimo configuration archives without starting the server. The final step is to create the deployment plan in order to deploy the module that we just created into the Apache Geronimo server. This deployment plan will be used by the car-maven-plugin to create the artifacts that will be deployed to Apache Geronimo. The deployment plan is shown below: <module> <environment> <moduleId> <groupId>com.packt.plugins</groupId> <artifactId>WorldClock</artifactId> <version>1.0</version> <type>car</type> </moduleId> <dependencies/> <hidden-classes/> <non-overridable-classes/> <private-classes/> </environment> <gbean name="ClockGBean" class="com.packt.plugins.ClockGBean"> <attribute name="name">ClockGBean</attribute> </gbean></module> Note that the gbean element's class attribute must match the fully qualified name of the ClockGBean class we created earlier (com.packt.plugins.ClockGBean). Once the plan is ready, go to the command prompt and change the directory to the WorldClock directory. Run the following command to build the plugin: mvn clean install You will notice that the car-maven-plugin is invoked and a WorldClock-1.0-SNAPSHOT.car file is created in the WorldClock/target directory. We have now completed the steps required to create an Apache Geronimo plugin. In the next section, we will see how we can install the plugin in Apache Geronimo. Installing a plugin We can install a plugin in three different ways. One way is to use the deploy.bat or deploy.sh script, another way is to use the install-plugin command in GShell, and the third way is to use the Administration Console to install a plugin from a plugin repository.
We will discuss each of these methods: Using deploy.bat or deploy.sh file: The deploy.bat or deploy.sh script is found in the <GERONIMO_HOME>/bin directory. It has an option install-plugin, which can be used to install plugins onto the server. The command syntax is shown below: deploy install-plugin <path to the plugin car file> Running this command, and passing the path to the plugin .car archive on the disk, will result in the plugin being installed onto the Geronimo server. Once the installation has finished, an Installation Complete message will be displayed, and the command will exit. Using GShell: Invoke  the gsh command from the command prompt, after changing the current directory to <GERONIMO_HOME>/bin. This will bring up the GShell prompt. In the GShell prompt, type the following command to install the plugin: deploy/install-plugin <path to the plugin car file> Please note that, you should escape special characters in the path by using a leading "" (back slash) before the character. Another way to install plugins that are available in remote plugin repository is by using the list-plugins command. The syntax of this command is as given below: deploy/list-plugins <URI of the remote repository> If a remote repository is not specified, then the one configured in Geronimo will be used instead. Once this command has been invoked, the list of available plugins in the remote repository is shown, along with their serial numbers, and you will be prompted to enter a comma separated list of the serial numbers of the plugins that you want to install. Using the Administration Console: The Administration Console has a Plugins portlet that can be used to list the plugins available in a repository specified by the user. You can use the Administration Console to select and install the plugins that you want from this list. 
This portlet also has the capability to export applications or services in your server instance as Geronimo plugins, so that they can be installed on other server instances. See the Plugin portlet section for details of the usage of this portlet. Available plugins The  web  site http://geronimoplugins.com/ hosts Apache Geronimo plugins. It has many plugins listed for Apache Geronimo. There are plugins for Quartz, Apache Directory Server, and many other popular software packages. However, they are not always available for the latest versions of Apache Geronimo. A couple of fairly up-to-date plugins that are available for Apache Geronimo are the Windows Service Wrapper plugin and the Apache Tuscany plugin for Apache Geronimo. The Windows Service Wrapper provides the ability for Apache Geronimo to be registered as a windows service. The Tuscany plugin is an implementation of the SCA Java EE Integration specification by integrating Apache Tuscany as an Apache Geronimo plugin. Both of these plugins are available from the Apache Geronimo web site. Pluggable Administration Console Older versions of Apache Geronimo came with a monolithic Administration Console. However, the server was extendable through plugins. This introduced a problem: How to administer the new plugins that were added to the server? To resolve this problem, the Apache Geronimo developers rewrote the Administration Console to be extensible through console plugins called Administration Console Extensions. In this section, we will look into how to create an Administration Console portlet for the World Clock plugin that we developed in the previous section. Architecture The pluggable Administration Console functionality is based on the support provided by the Apache Pluto portlet container for dynamically adding and removing portlets and pages without requiring a restart. 
Apache Geronimo exposes this functionality through two GBeans, namely, the Administration Console Extension (ACE) GBean  (org.apache.geronimo.pluto.AdminConsoleExtensionGBean) and the Portal Container Services GBean (org.apache.geronimo.pluto. PortalContainerServicesGBean). The PortalContainerServicesGBean exposes the features of the Pluto container in order to add and remove portlets and pages at runtime. The ACE GBean invokes these APIs to add and remove the portlets or pages. The  ACE GBean should be specified in the Geronimo-specific deployment plan of your web application or plugin, that is, geronimo-web.xml. The architecture is shown in the following figure: Developing an Administration Console extension We will now go through the steps to develop an Administration Console Extension for the World Clock plugin that we created in the previous section. We will use Maven WAR archetype to create a web application project. To create the project, run the following command from the command-line console: mvn archetype:create -DgroupId=com.packt.plugins -DartifactId=ClockWebApp -DarchetypeArtifactId=maven-archetype-webapp This will result in the Maven web project being created, named ClockWebApp. A default pom.xml will be created. This will need to be edited to add dependencies to the two modules, as shown in the following code snippet: <dependency> <groupId>org.apache.geronimo.framework</groupId> <artifactId>geronimo-kernel</artifactId> <version>2.1.4</version></dependency><dependency> <groupId>com.packt.plugins</groupId> <artifactId>WorldClockModule</artifactId> <version>1.0</version></dependency> We add these dependencies because the portlet that we are going to write will use the classes mentioned in the above two modules.
Using Business Rules to Define Decision Points in Oracle SOA Suite: Part 1

Packt
28 Oct 2009
11 min read
The advantage of separating out decision points as external rules is that we not only ensure that each rule is used in a consistent fashion, but in addition make it simpler and quicker to modify; that is, we only have to modify a rule once and can do this with almost immediate effect, thus increasing the agility of our solution. Business Rule concepts Before we implement our first rule, let's briefly introduce the key components which make up a Business Rule. These are: Facts: Represent the data or business objects that rules are applied to. Rules: A rule consists of two parts: an IF part, which consists of one or more tests to be applied to fact(s), and a THEN part, which lists the actions to be carried out should the test evaluate to true. Rule Set: As the name implies, it is just a set of one or more related rules that are designed to work together. Dictionary: A dictionary is the container of all components that make up a business rule; it holds all the facts, rule sets, and rules for a business rule. In addition, a dictionary may also contain functions, variables, and constraints. We will introduce these in more detail later in this article. To execute a business rule, you submit one or more facts to the rules engine. It will apply the rules to the facts, that is, each fact will be tested against the IF part of the rule and, if it evaluates to true, the engine will perform the specified actions for that fact. This may result in the creation of new facts or the modification of existing facts (which may result in further rule evaluation). Leave approval rule To begin with, we will write a simple rule to automatically approve a leave request that is of type Vacation and only for 1 day's duration. A pretty trivial example, but once we've done this we will look at how to extend this rule to handle more complex examples. Using the Rule Author In SOA Suite 10.1.3 you use the Rule Author, which is a browser-based interface for defining your business rules.
To launch the Rule Author within your browser go to the following URL: http://<host name>:<port number>/ruleauthor/ This will bring up the Rule Author Log In screen. Here you need to log in as user that belongs to the rule-administrators role. You can either log in as the user oc4jadmin (default password Welcome1), which automatically belongs to this group, or define your own user. Creating a Rule Repository Within Oracle Business Rules, all of our definitions (that is facts, constraints, variables, and functions) and rule sets are defined within a dictionary. A dictionary is held within a Repository. A repository can contain multiple dictionaries and can also contain multiple versions of a dictionary. So, before we can write any rules, we need to either connect to an existing repository, or create a new one. Oracle Business Rules supports two types of repository—File based and WebDAV. For simplicity we will use a File based repository, though typically in production you want to use a WebDAV based repository as this makes it simpler to share rules between multiple BPEL Processes. WebDAV is short for Web-based Distributed Authoring and Versioning. It is an extension to HTTP that allows users to collaboratively edit and manage files (that is business rules in our case) over the Web. To create a File based repository click on the Repository tab within the Rule Author, this will display the Repository Connect screen as shown in the following screenshot: From here we can either connect to an existing repository (WebDAV or File based) or create and connect to a new file-based repository. For our purposes, select a Repository Type of File, and specify the full path name of where you want to create the repository and then click Create. To use a WebDAV repository, you will first need to create this externally from the Rule Author. 
Details on how to do this can be found in Appendix B of the Oracle Business Rules User Guide (http://download.oracle.com/docs/cd/B25221_04/web.1013/b15986/toc.htm). From a development perspective it can often be more convenient to develop your initial business rules in a file repository. Once complete, you can then export the rules from the file repository and import them into a WebDAV repository. Creating a dictionary Once we have connected to a repository, the next step is to create a dictionary. Click on the Create tab, circled in the following screenshot, and this will bring up the Create Dictionary screen. Enter a New Dictionary Name (for example LeaveApproval) and click Create. This will create and load the dictionary so it's ready to use. Once you have created a dictionary, then next time you connect to the repository you will select the Load tab (next to the Create tab) to load it. Defining facts Before we can define any rules, we first need to define the facts that the rules will be applied to. Click on the Definitions tab, this will bring up the page which summarizes all the facts defined within the current dictionary. You will see from this that the rule engine supports three types of facts: Java Facts, XML Facts, and RL Facts. The type of fact that you want to use really depends on the context in which you will be using the rules engine. For example, if you are calling the rule engine from Java, then you would work with Java Facts as this provides a more integrated way of combining the two components. As we are using the rule engine with BPEL then it makes sense to use XML Facts. Creating XML Facts The Rule Author uses XML Schemas to generate JAXB 1.0 classes, which are then imported to generate the corresponding XML Facts. 
For our example we will use the Leave Request schema, shown as follows for convenience: <?xml version="1.0" encoding="windows-1252"?> <xsd:schema targetNamespace="http://schemas.packtpub.com/LeaveRequest" elementFormDefault="qualified" > <xsd:element name="leaveRequest" type="tLeaveRequest"/> <xsd:complexType name="tLeaveRequest"> <xsd:sequence> <xsd:element name="employeeId" type="xsd:string"/> <xsd:element name="fullName" type="xsd:string" /> <xsd:element name="startDate" type="xsd:date" /> <xsd:element name="endDate" type="xsd:date" /> <xsd:element name="leaveType" type="xsd:string" /> <xsd:element name="leaveReason" type="xsd:string"/> <xsd:element name="requestStatus" type="xsd:string"/> </xsd:sequence> </xsd:complexType> </xsd:schema> Using JAXB, particularly when used in conjunction with BPEL, places a number of constraints on how we define our XML Schemas, including: When defining rules, the Rule Author can only work with globally defined types. This is because it's unable to introspect the properties (i.e. attributes and elements) of global elements. Within BPEL you can only define variables based on globally defined elements. The net result is that any facts we want to pass from BPEL to the rules engine (or vice versa) must be defined as global elements for BPEL and have a corresponding global type definition so that we can define rules against it. The simplest way to achieve this is to define a global type (for example tLeaveRequest in the above schema) and then define a corresponding global element based on that type (for example, leaveRequest in the above schema). Even though it is perfectly acceptable with XML Schemas to use the same name for both elements and types, it presents problems for JAXB, hence the approach taken above where we have prefixed every type definition with t as in tLeaveRequest. Fortunately this approach corresponds to best practice for XML Schema design. 
The final point you need to be aware of is that when creating XML facts the JAXB processor maps the type xsd:decimal to java.lang.BigDecimal and xsd:integer to java.lang.BigInteger. This means you can't use the standard operators (for example >, >=, <=, and <) within your rules to compare properties of these types. To simplify your rules, within your XML Schemas use xsd:double in place of xsd:decimal and xsd:int in place of xsd:integer. To generate XML facts, from the XML Fact Summary screen (shown previously), click Create, this will display the XML Schema Selector page as shown: Here we need to specify the location of the XML Schema, this can either be an absolute path to an xsd file containing the schema or can be a URL. Next we need to specify a temporary JAXB Class Directory in which the generated JAXB classes are to be created. Finally, for the Target Package Name we can optionally specify a unique name that will be used as the Java package name for the generated classes. If we leave this blank, the package name will be automatically generated based on the target namespace of the XML Schema using the JAXB XML-to-Java mapping rules. For example, our leave request schema has a target namespace of http://schemas.packtpub.com/LeaveRequest; this will result in a package name of com.packtpub.schemas.leaverequest. Next click on Add Schema; this will cause the Rule Author to generate the JAXB classes for our schema in the specified directory. This will update the XML Fact Summary screen to show details of the generated classes; expand the class navigation tree until you can see the list of all the generated classes, as shown in the following screenshot: Select the top level node (that is com) to specify that we want to import all the generated classes. We need to import the TLeaveRequest class as this is the one we will use to implement rules and the LeaveRequest class as we need this to pass this in as a fact from BPEL to the rules engine. 
The ObjectFactory class is optional, but we will need this if we need to generate new LeaveRequest facts within our rule sets. Although we don't need to do this at the moment it makes sense to import it now in case we do need it in the future. Once we have selected the classes to be imported, click Import (circled in previous screenshot) to load them into the dictionary. The Rule Author will display a message to confirm that the classes have been successfully imported. If you check the list of generated JAXB classes, you will see that the imported classes are shown in bold. In the process of importing your facts, the Rule Author will assign default aliases to each fact and a default alias to all properties that make up a fact, where a property corresponds to either an element or an attribute in the XML Schema. Using aliases Oracle Business Rules allows you to specify your own aliases for facts and properties in order to define more business friendly names which can then be used when writing rules. For XML facts if you have followed standard naming conventions when defining your XML Schemas, we typically find that the default aliases are clear enough and that if you start defining aliases it can actually cause more confusion unless applied consistently across all facts. Hiding facts and properties The Rule Author lets you hide facts and properties so that they don't appear in the drop downs within the Rule Author. For facts which have a large number of properties, hiding some of these can be worth while as it can simplify the creation of rules. Another obvious use of this might be to hide all the facts based on elements, since we won't be implementing any rules directly against these. However, any facts you hide will also be hidden from BPEL, so you won't be able to pass facts of these types from BPEL to the rules engine (or vice versa). 
In reality, the only fact you will typically want to hide will be the ObjectFactory (as you will have one of these per XML Schema that you import). Saving the rule dictionary As you define your business rules, it makes sense to save your work at regular intervals. To save the dictionary, click on the Save Dictionary link in the top right hand corner of the Rule Author page. This will bring up the Save Dictionary page. Here either click on the Save button to update the current version of the dictionary with your changes or, if you want to save the dictionary as a new version or under a new dictionary name, then click on the Save As link and amend the dictionary name and version as appropriate.
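To close, recall the rule we set out to build: approve a leave request of type Vacation lasting exactly one day. Stripped of the Rule Author machinery, its IF/THEN structure can be sketched in plain Java (not Oracle Rules RL). The LeaveRequest class below is a hypothetical stand-in for the generated TLeaveRequest JAXB fact, and measuring duration as an inclusive day count is an assumption made for the sketch:

```java
import java.time.LocalDate;
import java.time.temporal.ChronoUnit;

public class LeaveApprovalSketch {

    // Hypothetical stand-in for the TLeaveRequest fact defined by the schema.
    static class LeaveRequest {
        String leaveType;
        LocalDate startDate;
        LocalDate endDate;
        String requestStatus = "Pending";
    }

    // IF the request is a one-day Vacation, THEN set its status to Approved.
    static void applyRule(LeaveRequest req) {
        long days = ChronoUnit.DAYS.between(req.startDate, req.endDate) + 1; // inclusive
        if ("Vacation".equals(req.leaveType) && days == 1) {
            req.requestStatus = "Approved";
        }
    }

    public static void main(String[] args) {
        LeaveRequest req = new LeaveRequest();
        req.leaveType = "Vacation";
        req.startDate = LocalDate.of(2009, 10, 28);
        req.endDate = LocalDate.of(2009, 10, 28);
        applyRule(req);
        System.out.println(req.requestStatus); // the THEN part fired
    }
}
```

In the rules engine the same logic is declarative: the engine matches each asserted fact against the IF test and applies the THEN action, rather than our code iterating explicitly.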
Working with Complex Associations using CakePHP

Packt
28 Oct 2009
6 min read
Defining Many-To-Many Relationship in Models In the previous article in this series on Working with Simple Associations using CakePHP, we assumed that a book can have only one author. But in a real-life scenario, a book may also have more than one author. In that case, the relation between authors and books is many-to-many. We are now going to see how to define associations for a many-to-many relation. We will modify our existing code-base that we were working on in the previous article to set up the associations needed to represent a many-to-many relation. Time for Action: Defining Many-To-Many Relation Empty the database tables: TRUNCATE TABLE `authors`; TRUNCATE TABLE `books`; Remove the author_id field from the books table: ALTER TABLE `books` DROP `author_id`; Create a new table, authors_books: CREATE TABLE `authors_books` (`author_id` INT NOT NULL, `book_id` INT NOT NULL); Modify the Author (/app/models/author.php) model: <?phpclass Author extends AppModel{ var $name = 'Author'; var $hasAndBelongsToMany = 'Book';}?> Modify the Book (/app/models/book.php) model: <?phpclass Book extends AppModel{ var $name = 'Book'; var $hasAndBelongsToMany = 'Author';}?> Modify the AuthorsController (/app/controllers/authors_controller.php): <?phpclass AuthorsController extends AppController { var $name = 'Authors'; var $scaffold;}?> Modify the BooksController (/app/controllers/books_controller.php): <?phpclass BooksController extends AppController { var $name = 'Books'; var $scaffold;}?> Now, visit the following URLs and add some test data into the system: http://localhost/relationship/authors/ and http://localhost/relationship/books/ What Just Happened? We first emptied the database tables and then dropped the field author_id from the books table. Then we added a new join table, authors_books, that will be used to establish a many-to-many relation between authors and books.
The following diagram shows how a join table relates two tables in a many-to-many relation: In a many-to-many relation, one record of either table can be related to multiple records of the other table. To establish this link, a join table is used—a join table contains two fields to hold the primary keys of both of the records in relation. CakePHP has certain conventions for naming a join table—join tables should be named after the tables in relation, in alphabetical order, with underscores in between. The join table between the authors and books tables should be named authors_books, not books_authors. Also by Cake convention, the foreign keys used in the join table must be the underscored, singular name of the models in relation, suffixed with _id. After creating the join table, we defined associations in the models, so that our models also know about the new relationship that they have. We added hasAndBelongsToMany (HABTM) associations in both of the models. HABTM is a special type of association used to define a many-to-many relation in models. Both the models have HABTM associations to define the many-to-many relationship from both ends. After defining the associations in the models, we created two controllers for these two models and put scaffolding in them to see the association working. We could also use an array to set up the HABTM association in the models. The following code segment shows how to use an array for setting up an HABTM association between authors and books in the Author model: var $hasAndBelongsToMany = array( 'Book' => array( 'className' => 'Book', 'joinTable' => 'authors_books', 'foreignKey' => 'author_id', 'associationForeignKey' => 'book_id' ) ); Like simple relationships, we can also override default association characteristics by adding/modifying key/value pairs in the associative array.
The foreignKey key/value pair holds the name of the foreign-key found in the current model—default is underscored, singular name of the current model suffixed with _id. Whereas, associationForeignKey key/value pair holds the foreign-key name found in the corresponding table of the other model—default is underscored, singular name of the associated model suffixed with _id. We can also have conditions, fields, and order key/value pairs to customize the relationship in more detail. Retrieving Related Model Data in Many-To-Many Relation Like one-to-one and one-to-many relations, once the associations are defined, CakePHP will automatically fetch the related data in many-to-many relation. Time for Action: Retrieving Related Model Data Take out scaffolding from both of the controllers—AuthorsController (/app/controllers/authors_controller.php) and BooksController (/app/controllers/books_controller.php). Add an index() action inside the AuthorsController (/app/controllers/authors_controller.php), like the following: <?phpclass AuthorsController extends AppController { var $name = 'Authors'; function index() { $this->Author->recursive = 1; $authors = $this->Author->find('all'); $this->set('authors', $authors); }}?> Create a view file for the /authors/index action (/app/views/authors/index.ctp): <?php foreach($authors as $author): ?><h2><?php echo $author['Author']['name'] ?></h2><hr /><h3>Book(s):</h3><ul><?php foreach($author['Book'] as $book): ?><li><?php echo $book['title'] ?></li><?php endforeach; ?></ul><?php endforeach; ?> Write down the following code inside the BooksController (/app/controllers/books_controller.php): <?phpclass BooksController extends AppController { var $name = 'Books'; function index() { $this->Book->recursive = 1; $books = $this->Book->find('all'); $this->set('books', $books); }}?> Create a view file for the action /books/index (/app/views/books/index.ctp): <?php foreach($books as $book): ?><h2><?php echo $book['Book']['title'] ?></h2><hr 
/><h3>Author(s):</h3><ul><?php foreach($book['Author'] as $author): ?><li><?php echo $author['name'] ?></li><?php endforeach; ?></ul><?php endforeach; ?> Now, visit the following URLs: http://localhost/relationship/authors/ and http://localhost/relationship/books/ What Just Happened? In both of the models, we first set the value of the $recursive attribute to 1 and then called the respective models' find('all') functions. So, these find('all') operations return all associated model data that is related directly to the respective models. These returned results of the find('all') requests are then passed to the corresponding view files. In the view files, we looped through the returned results and printed out the models and their related data. In the BooksController, the returned data from find('all') is stored in a variable $books. This find('all') returns an array of books, and every element of that array contains information about one book and its related authors. Array ( [0] => Array ( [Book] => Array ( [id] => 1 [title] => Book Title ... ) [Author] => Array ( [0] => Array ( [id] => 1 [name] => Author Name ... ) [1] => Array ( [id] => 3 ... ) ) ) ... ) Similarly, for the Author model, the returned data is an array of authors. Every element of that array contains two arrays: one contains the author information and the other contains an array of books related to this author. These arrays are very much like what we got from a find('all') call in the case of the hasMany association.
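As a footnote to the conventions used in this article, the join table and foreign key names CakePHP expects are mechanical enough to derive in code. The helper below is purely illustrative (it is not part of CakePHP) and assumes single-word model names, since full CamelCase-to-underscore inflection is omitted:

```java
public class CakeNaming {

    // Join table: the two table names in alphabetical order, joined by "_".
    static String joinTable(String tableA, String tableB) {
        return tableA.compareTo(tableB) < 0
                ? tableA + "_" + tableB
                : tableB + "_" + tableA;
    }

    // Foreign key: underscored, singular model name suffixed with "_id".
    // Simplified assumption: single-word model names only.
    static String foreignKey(String singularModel) {
        return singularModel.toLowerCase() + "_id";
    }

    public static void main(String[] args) {
        System.out.println(joinTable("books", "authors")); // authors_books, never books_authors
        System.out.println(foreignKey("Author"));
    }
}
```

Following these conventions exactly is what lets CakePHP resolve the association without any of the explicit joinTable/foreignKey keys shown above.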
Using Business Rules to Define Decision Points in Oracle SOA Suite: Part 2

Packt
28 Oct 2009
6 min read
To invoke a rule we need to go through a number of steps. First we must create a session with the rules engine, then we can assert one or more facts, before executing the rule set and finally retrieving the results. We do this in BPEL via a Decision Service; this is essentially a web service wrapper around a rules dictionary, which takes care of managing the session with the rules engine as well as governing which rule set we wish to apply. The wrapper allows a BPEL process to assert one or more facts, execute a rule set against the asserted facts, retrieve the results, and then reset the session. This can be done within a single invocation of an operation, or over multiple operations. Creating a Rule Engine Connection Before you can create a Decision Service you need to create a connection to the repository in which the required rule set is stored. In the Connections panel within JDeveloper, right-click on the Rule Engines folder and select New Rule Engine Connection… as shown in the following screenshot: This will launch the Create Rule Engine Connection dialogue; first you need to specify whether the connection is for a file repository or a WebDAV repository. Using a file based repository If you are using a file repository, all we need to specify is the location of the actual file. Once the connection has been created, we can use this to create a decision service for any of the rule sets contained within that repository. However, it is important to realize that when you create a decision service based on this connection, JDeveloper will take a copy of the repository and copy this into the BPEL project. When you deploy the BPEL process, the copy of this repository will be deployed with it. This has a number of implications: first, if you want to modify the rule set used by the BPEL Process you need to modify the copy of the repository deployed with the BPEL Process.
To modify the rule set deployed with a BPEL Process, log onto the BPEL console, from here click on the BPEL Processes tab, and then select the process that uses the decision service. Next click on the Descriptor tab; this will list all the Partner Links for that process, including the Decision Service (for example LeaveApprovalDecisionServicePL) as shown in the following screenshot: This PartnerLink will have the property decisionServiceDetails, with the link Rule Service Details (circled in the previous screenshot); click on this and the console will display details of the decision service. From here click on the link Open Rule Author; this will open the Rule Author complete with a connection to the file based rule repository. The second implication is that if you use the same rule set within multiple BPEL Processes, each process will have its own copy of the rule set. You can work round this by either wrapping each rule set with a single BPEL process, which is then invoked by any other process wishing to use that rule set. Or once you have deployed the rule set for one process, then you can access it directly via the WSDL for the deployed rule set, for example LeaveApprovalDecisionService.wsdl in the above screenshot. Using a WebDAV repository For the reasons mentioned above, it often makes sense to use a WebDAV based repository to hold your rules. This makes it far simpler to share a rule set between multiple clients, such as BPEL and Java. Before you can create a Rule Engine Connection to a WebDAV repository, you must first define a WebDAV connection to JDeveloper, which is also created from the Connections palette. 
Creating a Decision Service To create a decision service within our BPEL process, select the Services page from the Component Palette and drag a Decision Service onto your process, as shown in the following screenshot: This will launch the Decision Service Wizard dialogue, as shown: Give the service a name, and then select Execute Ruleset as the invocation pattern. Next click on the flashlight next to Ruleset to launch the Rule Explorer. This allows us to browse any previously defined rule engine connection and select the rule set we wish to invoke via the decision service. For our purposes, select the LeaveApprovalRules as shown below, and click OK. This will bring us back to the Decision Service Wizard which will be updated to list the facts that we can exchange with the Rule Engine, as shown in the following screenshot: This dialogue will only list XML Facts that map to global elements in the XML Schema. Here we need to define which facts we want to assert, that is which facts we pass as inputs to the rule engine from BPEL, and which facts we want to watch, that is which facts we want to return in the output from the rules engine back to our BPEL process. For our example, we will pass in a single leave request. The rule engine will then apply the rule set we defined earlier and update the status of the request to Approved if appropriate. So we need to specify that Assert and Watch facts of type LeaveRequest. Finally, you will notice the checkbox Check here to assert all descendants from the top level element; this is important when an element contains nested elements (or facts) to ensure that nested facts are also evaluated by the rules engine. For example if we had a fact of type LeaveRequestList which contained a list of multiple LeaveRequests, if we wanted to ensure the rules engine evaluated these nested facts, then we would need to check this checkbox. 
Once you have specified the facts to Assert and Watch, click Next and complete the dialogue; this will then create a decision service partner link within your BPEL process. Adding a Decide activity We are now ready to invoke our rule set from within our BPEL process. From the Component Palette, drag a Decide activity onto our BPEL process (at the point before we execute the LeaveRequest Human Task). This will open up the Edit Decide window (shown in the following screenshot). Here we need to specify a Name for the activity, and select the Decision Service we want to invoke (that is the LeaveApprovalDecisionService that we just created). Once we've specified the service, we need to specify how we want to interact with it. For example, whether we want to incrementally assert a number of facts over a period of time, before executing the rule set and retrieving the result or whether we want to assert all the facts, execute the rule set and get the result within a single invocation. We specify this through the Operation attribute. For our purpose we just need to assert a single fact and run the rule set, so select the value of Assert facts, execute rule set, retrieve results. Once we have selected the operation to invoke on the decision service, the Decision Service Facts will be updated to allow you to assign input and output facts as appropriate.  
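The assert-execute-retrieve interaction pattern selected above can be pictured in plain Java: facts go in, the rule set updates them, and the watched facts come back. The rule condition below (auto-approving requests of two days or less) is purely hypothetical and invented for illustration — real rule conditions are authored in Rule Author, not hard-coded like this.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the decision-service interaction pattern:
// assert facts, execute the rule set, retrieve the watched facts.
// The rule condition (days <= 2 is auto-approved) is invented purely
// for illustration; real rules live in the rule repository.
public class DecisionSketch {
    static class LeaveRequest {
        int days;
        String status = "Pending";
        LeaveRequest(int days) { this.days = days; }
    }

    // Stands in for "Assert facts, execute rule set, retrieve results".
    static List<LeaveRequest> execute(List<LeaveRequest> asserted) {
        for (LeaveRequest r : asserted) {
            if (r.days <= 2) {          // hypothetical rule condition
                r.status = "Approved";  // rule action: update the fact
            }
        }
        return asserted;                // the "watched" facts
    }

    public static void main(String[] args) {
        List<LeaveRequest> facts = new ArrayList<>();
        facts.add(new LeaveRequest(1));
        facts.add(new LeaveRequest(5));
        List<LeaveRequest> out = execute(facts);
        System.out.println(out.get(0).status + "," + out.get(1).status);
    }
}
```

The point is the shape of the exchange, not the rule itself: the BPEL process never evaluates conditions, it only asserts facts and reads back the watched ones.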
Designing your very own ASP.NET MVC Application
Packt
28 Oct 2009
8 min read
When downloading and installing the ASP.NET MVC framework SDK, a new project template is installed in Visual Studio—the ASP.NET MVC project template. This article by Maarten Balliauw describes how to use this template. We will briefly touch all aspects of ASP.NET MVC by creating a new ASP.NET MVC web application based on this Visual Studio template. Besides view, controller, and model, this article also illustrates the new concepts of ViewData (a means of transferring data between controller and view), routing (the link between a web browser URL and a specific action method inside a controller), and unit testing of a controller. Creating a new ASP.NET MVC web application project Before we start creating an ASP.NET MVC web application, make sure that you have installed the ASP.NET MVC framework SDK from http://www.asp.net/mvc. After installation, open Visual Studio 2008 and select menu option File | New | Project. The following screenshot will be displayed. Make sure that you select the .NET framework 3.5 as the target framework. You will notice a new project template called ASP.NET MVC Web Application. This project template creates the default project structure for an ASP.NET MVC application. After clicking on OK, Visual Studio will ask you if you want to create a test project. This dialog offers the choice between several unit testing frameworks that can be used for testing your ASP.NET MVC application. You can decide for yourself if you want to create a unit testing project right now—you can also add a testing project later on. Letting the ASP.NET MVC project template create a test project now is convenient because it creates all of the project references and contains an example unit test, although this is not required. For this example, continue by adding the default unit test project. What's inside the box? After the ASP.NET MVC project has been created, you will notice a default folder structure.
There's a Controllers folder, a Models folder, a Views folder, as well as a Content folder and a Scripts folder. ASP.NET MVC comes with the convention that these folders (and namespaces) are used for locating the different blocks used for building the ASP.NET MVC framework. The Controllers folder obviously contains all of the controller classes; the Models folder contains the model classes; while the Views folder contains the view pages. Content will typically contain web site content such as images and stylesheet files, and Scripts will contain all of the JavaScript files used by the web application. By default, the Scripts folder contains some JavaScript files required for the use of Microsoft AJAX or jQuery. Locating the different building blocks is done in the request life cycle. One of the first steps in the ASP.NET MVC request life cycle is mapping the requested URL to the correct controller action method. This process is referred to as routing. A default route is initialized in the Global.asax file and describes to the ASP.NET MVC framework how to handle a request. Double-clicking on the Global.asax file in the MvcApplication1 project will display the following code: using System;using System.Collections.Generic;using System.Linq;using System.Web;using System.Web.Mvc;using System.Web.Routing;namespace MvcApplication1{ public class GlobalApplication : System.Web.HttpApplication { public static void RegisterRoutes(RouteCollection routes) { routes.IgnoreRoute("{resource}.axd/{*pathInfo}"); routes.MapRoute( "Default", // Route name "{controller}/{action}/{id}", // URL with parameters new { controller = "Home", action = "Index", id = "" } // Parameter defaults ); } protected void Application_Start() { RegisterRoutes(RouteTable.Routes); } }} In the Application_Start() event handler, which is fired whenever the application is compiled or the web server is restarted, a route table is registered. 
The default route is named Default, and responds to a URL in the form of http://www.example.com/{controller}/{action}/{id}. The variables between { and } are populated with actual values from the request URL, or with the default values if no override is present in the URL. This default route maps to the Home controller and the Index action method when the URL supplies no other values; by default, all possible URLs can be handled through this default route. It is also possible to create our own routes. For example, let's map the URL http://www.example.com/Employee/Maarten to the Employee controller, the Show action, and the firstname parameter. The following code snippet can be inserted in the Global.asax file we've just opened. Because the ASP.NET MVC framework uses the first matching route, this code snippet should be inserted above the default route; otherwise the route will never be used. routes.MapRoute( "EmployeeShow", // Route name "Employee/{firstname}", // URL with parameters new { // Parameter defaults controller = "Employee", action = "Show", firstname = "" } ); Now, let's add the necessary components for this route. First of all, create a class named EmployeeController in the Controllers folder. You can do this by adding a new item to the project and selecting the MVC Controller Class template located under the Web | MVC category. Remove the Index action method, and replace it with an action method named Show. This method accepts a firstname parameter and passes the data into the ViewData dictionary. This dictionary will be used by the view to display data. The EmployeeController class will pass an Employee object to the view. This Employee class should be added in the Models folder (right-click on this folder and then select Add | Class from the context menu).
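The idea behind a route template such as {controller}/{action}/{id} — match URL segments against placeholders, falling back to defaults for missing segments — can be sketched in a few lines. This is a language-neutral illustration of the concept (written here in Java), not ASP.NET MVC's actual routing implementation.

```java
import java.util.HashMap;
import java.util.Map;

// Conceptual sketch of template-based routing: each {placeholder} in the
// template captures the matching URL segment; literal segments must match
// exactly; missing trailing segments fall back to the supplied defaults.
// NOT ASP.NET MVC's real implementation -- just the idea behind it.
public class RouteSketch {
    static Map<String, String> match(String template, String url,
                                     Map<String, String> defaults) {
        Map<String, String> result = new HashMap<>(defaults);
        String[] tParts = template.split("/");
        String[] uParts = url.isEmpty() ? new String[0] : url.split("/");
        if (uParts.length > tParts.length) return null; // no match
        for (int i = 0; i < tParts.length; i++) {
            String t = tParts[i];
            String u = i < uParts.length ? uParts[i] : null;
            if (t.startsWith("{") && t.endsWith("}")) {
                if (u != null) result.put(t.substring(1, t.length() - 1), u);
            } else if (!t.equals(u)) {
                return null; // literal segment did not match
            }
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, String> defaults = new HashMap<>();
        defaults.put("controller", "Employee");
        defaults.put("action", "Show");
        defaults.put("firstname", "");
        Map<String, String> m = match("Employee/{firstname}",
                                      "Employee/Maarten", defaults);
        System.out.println(m.get("action") + " " + m.get("firstname"));
    }
}
```

This also makes the "first matching route wins" rule concrete: a router simply tries each registered template in order and stops at the first non-null match, which is why the EmployeeShow route must be registered before the catch-all default.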
Here's the code for the Employee class: namespace MvcApplication1.Models{ public class Employee { public string FirstName { get; set; } public string LastName { get; set; } public string Email { get; set; } }} After adding the EmployeeController and Employee classes, the ASP.NET MVC project now appears as shown in the following screenshot: The EmployeeController class now looks like this: using System.Web.Mvc;using MvcApplication1.Models;namespace MvcApplication1.Controllers{ public class EmployeeController : Controller { public ActionResult Show(string firstname) { if (string.IsNullOrEmpty(firstname)) { ViewData["ErrorMessage"] = "No firstname provided!"; } else { Employee employee = new Employee { FirstName = firstname, LastName = "Example", Email = firstname + "@example.com" }; ViewData["FirstName"] = employee.FirstName; ViewData["LastName"] = employee.LastName; ViewData["Email"] = employee.Email; } return View(); } }} The action method we've just created can be requested by a user via a URL—in this case, something similar to http://www.example.com/Employee/Maarten. This URL is mapped to the action method by the route we've created before. By default, any public action method (that is, a method in a controller class) can be requested using the default routing scheme. If you want to avoid a method from being requested, simply make it private or protected, or if it has to be public, add a [NonAction] attribute to the method. Note that we are returning an ActionResult (created by the View() method), which can be a view-rendering command, a page redirect, a JSON result, a string, or any other custom class implementation inheriting the ActionResult that you want to return. Returning an ActionResult is not necessary. The controller can write content directly to the response stream if required, but this would be breaking the MVC pattern—the controller should never be responsible for the actual content of the response that is being returned. 
Next, create a Show.aspx page in the Views | Employee folder. You can create a view by adding a new item to the project and selecting the MVC View Content Page template, located under the Web | MVC category, as we want this view to render in a master page (located in Views | Shared). There is an alternative way to create a view related to an action method, which will be covered later in this article. In the view, you can display employee information or display an error message if an employee is not found. Add the following code to the Show.aspx page: <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" AutoEventWireup="true" Inherits="System.Web.Mvc.ViewPage" %><asp:Content ID="Content1" ContentPlaceHolderID="MainContent" runat="server"> <% if (ViewData["ErrorMessage"] != null) { %> <h1><%=ViewData["ErrorMessage"]%></h1> <% } else { %> <h1><%=ViewData["FirstName"]%> <%=ViewData["LastName"]%></h1> <p> E-mail: <%=ViewData["Email"]%> </p> <% } %></asp:Content> If the controller has set an ErrorMessage in the ViewData, that error message is displayed on the resulting web page. Otherwise, the employee details are displayed. Press F5 to start the development web server. Alter the URL in your browser to something ending in /Employee/Your_Name_Here, and see the action method and the view we've just created in action.
Enabling Spring Faces support
Packt
28 Oct 2009
9 min read
The main focus of the Spring Web Flow framework is to deliver the infrastructure to describe the page flow of a web application. The flow itself is a very important element of a web application, because it describes its structure, particularly the structure of the implemented business use cases. But while the flow stays in the background, the user of your application interacts with the Graphical User Interface (GUI). Therefore, we need a way to provide a rich user interface to the users. One framework which offers such components is JavaServer Faces (JSF). With the release of Spring Web Flow 2, an integration module to connect these two technologies, called Spring Faces, has been introduced. This article is not an introduction to the JavaServer Faces technology; it only describes the integration of Spring Web Flow 2 with JSF. If you have never worked with JSF before, please refer to the JSF reference to gain knowledge of the essential concepts of JavaServer Faces. JavaServer Faces (JSF)—a brief introduction The JavaServer Faces (JSF) technology is a web application framework with the goal of making the development of user interfaces for a web application (based on Java EE) easier. JSF uses a component-based approach with its own lifecycle model, instead of the request-driven approach used by traditional MVC web frameworks. Version 1.0 of JSF is specified in JSR (Java Specification Request) 127 (http://jcp.org/en/jsr/detail?id=127). To use the Spring Faces module, you have to add some configuration to your application. The diagram below depicts the individual configuration blocks. These blocks are described in this article. The first step in the configuration is to configure the JSF framework itself. That is done in the deployment descriptor of the web application—web.xml. The servlet has to be loaded at the startup of the application. This is done with the <load-on-startup>1</load-on-startup> element.
<!-- Initialization of the JSF implementation. The Servlet is not used at runtime --> <servlet> <servlet-name>Faces Servlet</servlet-name> <servlet-class>javax.faces.webapp.FacesServlet</servlet-class> <load-on-startup>1</load-on-startup> </servlet> <servlet-mapping> <servlet-name>Faces Servlet</servlet-name> <url-pattern>*.faces</url-pattern> </servlet-mapping> For working with JavaServer Faces, there are two important classes: javax.faces.webapp.FacesServlet and javax.faces.context.FacesContext. You can think of FacesServlet as the core of each JSF application; sometimes this servlet is called an infrastructure servlet. It is important to mention that each JSF application in one web container has its own instance of the FacesServlet class. This means that an infrastructure servlet cannot be shared between many web applications on the same JEE web container. FacesContext is the data container which encapsulates all information that is necessary for the current request. For the usage of Spring Faces, it is important to know that FacesServlet is only used to initialize the framework; Spring Faces does not use it beyond that. To be able to use the components from the Spring Faces library, you are required to use Facelets instead of JSP, so we have to configure that mechanism. If you are interested in reading more about the Facelets technology, visit the Facelets homepage on java.net at the following URL: https://facelets.dev.java.net. The article at http://www.ibm.com/developerworks/java/library/j-facelets/ is also a good introduction to the Facelets technology. The configuration process is done inside the deployment descriptor of your web application—web.xml. The following sample shows the configuration inside the mentioned file.
<context-param> <param-name>javax.faces.DEFAULT_SUFFIX</param-name> <param-value>.xhtml</param-value></context-param> As you can see in the above code, the configuration is done with a context parameter. The name of the parameter is javax.faces.DEFAULT_SUFFIX, and its value is .xhtml. Inside the Facelets technology To present the separate views inside a JSF context, you need a specific view handler technology. One of those technologies is the well-known JavaServer Pages (JSP) technology. Facelets are an alternative to JSP in the JSF context. Instead of defining the views in JSP syntax, you use XML; the pages are created using XHTML. The Facelets technology offers the following features: A template mechanism, similar to the mechanism known from the Tiles framework The composition of components based on other components Custom logic tags Expression functions The possibility to use HTML for your pages, making it easy to create pages and view them directly in a browser, because you don't need an application server between the steps of designing a page The possibility to create libraries of your components The following sample shows a sample XHTML page which uses the component aliasing mechanism of the Facelets technology. <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"><html > <body> <form jsfc="h:form"> <span jsfc="h:outputText" value="Welcome to our page: #{user.name}" disabled="#{empty user}" /> <input type="text" jsfc="h:inputText" value="#{bean.theProperty}" /> <input type="submit" jsfc="h:commandButton" value="OK" action="#{bean.doIt}" /> </form> </body></html> The sample code snippet above uses the mentioned expression language of the JSF technology to access the data (for example, the #{user.name} expression accesses the name property of the user instance).
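The #{user.name} syntax walks a property path against named beans: the first segment names a bean, the remaining segments are nested property lookups. A toy resolver makes those mechanics concrete — maps stand in for real beans here, and this is emphatically not the actual JSF EL engine, just a sketch of the lookup idea.

```java
import java.util.HashMap;
import java.util.Map;

// Toy resolver for expressions like #{user.name}: the first segment names
// a bean, the remaining segments are looked up as nested properties.
// Maps stand in for real beans; this is NOT the actual JSF EL engine.
public class ElSketch {
    static Object resolve(String expr, Map<String, Object> beans) {
        // strip the "#{" prefix and "}" suffix
        String path = expr.substring(2, expr.length() - 1);
        String[] parts = path.split("\\.");
        Object current = beans.get(parts[0]);
        for (int i = 1; i < parts.length && current != null; i++) {
            current = ((Map<?, ?>) current).get(parts[i]);
        }
        return current;
    }

    public static void main(String[] args) {
        Map<String, Object> user = new HashMap<>();
        user.put("name", "Maarten");
        Map<String, Object> beans = new HashMap<>();
        beans.put("user", user);
        System.out.println(resolve("#{user.name}", beans)); // prints Maarten
    }
}
```

The real EL engine additionally resolves JavaBean getters, handles operators like `empty`, and consults scoped contexts (request, session, application), but the segment-by-segment walk is the same.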
What is component aliasing? One of the mentioned features of the Facelets technology is that it is possible to view a page directly in a browser without the page running inside a JEE container environment. This is possible through the component aliasing feature. With this feature, you can use normal HTML elements, for example an input element. Additionally, you can refer to the component which is used behind the scenes with the jsfc attribute. An example of that is <input type="text" jsfc="h:inputText" value="#{bean.theProperty}" />. If you open this inside a browser, the normal input element is used. If you use it inside your application, the h:inputText element of the component library is used. The ResourceServlet One main part of the JSF framework is its set of GUI components. These components often consist of many files besides the class files. If you use many of these components, the problem of handling all these files arises. To solve this problem, files such as JavaScript and CSS (Cascading Style Sheets) can be delivered inside the JAR archive of the component. If you deliver these files inside the JAR file, you can organize a component in one file, which makes the deployment and maintenance of your component library easier. Regardless of the framework you use, the result is HTML, and the resources inside the HTML pages are referenced as URLs. Therefore, we need a way to access these resources inside the archive via the HTTP protocol. To solve that problem, there is a servlet with the name ResourceServlet (package org.springframework.js.resource).
The servlet can deliver the following resources: Resources which are available inside the web application (for example, CSS files) Resources inside a JAR archive The configuration of the servlet inside web.xml is shown below: <servlet> <servlet-name>Resource Servlet</servlet-name> <servlet-class>org.springframework.js.resource.ResourceServlet</servlet-class> <load-on-startup>0</load-on-startup></servlet> <servlet-mapping> <servlet-name>Resource Servlet</servlet-name> <url-pattern>/resources/*</url-pattern></servlet-mapping> It is important that you use the correct url-pattern inside servlet-mapping. As you can see in the sample above, you have to use /resources/*. If one of the Spring Faces components does not work, first check that you have the correct mapping for the servlet. All resources in the context of Spring Faces should be retrieved through this servlet. The base URL is /resources. Internals of the ResourceServlet The ResourceServlet can only be accessed via a GET request; it implements only the GET method, so it is not possible to serve POST requests. Before we describe the separate steps, we want to show you the complete process, illustrated in the diagram below: For a better understanding, we will use an example to explain the mechanism shown in the previous diagram. Let us assume that we have registered the ResourceServlet as mentioned before, and we request a resource by the following sample URL: http://localhost:8080/flowtrac-web-jsf/resources/css/test1.css. How to request more than one resource with one request First, you can specify the appended parameter. The value of the parameter is the path to the resource you want to retrieve. An example of that is the following URL: http://localhost:8080/flowtrac-web-jsf/resources/css/test1.css?appended=/css/test2.css. If you want to specify more than one resource, you can use a comma as the delimiter inside the value of the appended parameter.
A simple example of that mechanism is the following URL: http://localhost:8080/flowtrac-web-jsf/resources/css/test1.css?appended=/css/test2.css,/css/test3.css. Additionally, it is possible to use the comma delimiter inside the PathInfo. For example: http://localhost:8080/flowtrac-web-jsf/resources/css/test1.css,/css/test2.css. It is important to mention that if one of the requested resources is not available, none of the requested resources is delivered. This mechanism can be used to deliver more than one CSS file in one request. From a development point of view, it can make sense to modularize your CSS files to make them more maintainable; with this concept, the client gets one CSS file instead of many. From the view of performance optimization, it is better to have as few requests as possible for rendering a page. Therefore, it makes sense to combine the CSS files of a page. Internally, the files are written to the response in the same sequence as they are requested. To understand how a resource is addressed, we separate the sample URL into its specific parts. The example URL is a URL on a local servlet container which has an HTTP connector at port 8080. See the following diagram for the mentioned separation: The table below describes the five sections of the URL that are shown in the previous diagram:
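The combining rule described above — the PathInfo may contain comma-separated paths, and the appended parameter contributes more, with files served back in request order — reduces to simple string handling. The following is an illustrative sketch only, not the actual Spring Faces ResourceServlet source:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of how a resource servlet could derive the ordered list of
// resources to serve from the request PathInfo plus the "appended"
// parameter. Illustrative only -- not the actual ResourceServlet source.
public class ResourcePathsSketch {
    static List<String> resolve(String pathInfo, String appended) {
        List<String> paths = new ArrayList<>();
        for (String p : pathInfo.split(",")) {
            paths.add(p.trim());
        }
        if (appended != null && !appended.isEmpty()) {
            for (String p : appended.split(",")) {
                paths.add(p.trim());
            }
        }
        return paths; // files are streamed back in exactly this order
    }

    public static void main(String[] args) {
        System.out.println(resolve("/css/test1.css,/css/test2.css",
                                   "/css/test3.css"));
    }
}
```

A real implementation would additionally locate each path in the web application or on the classpath (inside component JARs) and fail the whole request if any one of them is missing, matching the all-or-nothing behavior described above.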
Working with XML in Flex 3 and Java-part1
Packt
28 Oct 2009
10 min read
In today's world, many server-side applications make use of XML to structure data, because XML is a standard way of representing structured information. It is easy to work with, and people can easily read, write, and understand XML without any specialized skills. The XML standard is widely accepted and used in server communications such as Simple Object Access Protocol (SOAP) based web services. XML stands for eXtensible Markup Language. The XML standard specification is available at http://www.w3.org/XML/. Adobe Flex provides a standardized ECMAScript-based set of API classes and functionality for working with XML data. This collection of classes and functionality provided by Flex is known as E4X. You can use these classes to build sophisticated Rich Internet Applications using XML data. XML basics XML is a standard way to represent categorized data in a tree structure, similar to HTML documents. XML is written in plain-text format, and hence it is very easy to read, write, and manipulate its data. A typical XML document looks like this: <book>    <title>Flex 3 with Java</title>    <author>Satish Kore</author>    <publisher>Packt Publishing</publisher>    <pages>300</pages> </book> Generally, XML data is known as XML documents, and it is represented by tags wrapped in angle brackets (< >). These tags are also known as XML elements. Every XML document starts with a single top-level element known as the root element. Each element is delimited by a pair of tags known as the opening tag and the closing tag. In the previous XML document, <book> is the opening tag and </book> is the closing tag. If an element contains no content, it can be written as an empty element (also called a self-closing element). For example, <book/> is the same as writing <book></book>.
XML documents can also be more complex with nested tags and attributes, as shown in the following example: <book ISBN="978-1-847195-34-0">   <title>Flex 3 with Java</title>   <author country="India" numberOfBooks="1">    <firstName>Satish</firstName>    <lastName>Kore</lastName> </author>   <publisher country="United Kingdom">Packt Publishing</publisher>   <pages>300</pages> </book> Notice that the above XML document contains nested tags such as <firstName> and <lastName> under the <author> tag. ISBN, country, and numberOfBooks, which you can see inside the tags, are called XML attributes. To learn more about XML, visit the W3Schools' XML Tutorial at http://w3schools.com/xml/. Understanding E4X Flex provides a set of API classes and functionality based on the ECMAScript for XML (E4X) standards in order to work with XML data. The E4X approach provides a simple and straightforward way to work with XML structured data, and it also reduces the complexity of parsing XML documents. Earlier versions of Flex did not have a direct way of working with XML data. The E4X provides an alternative to DOM (Document Object Model) interface that uses a simpler syntax for reading and querying XML documents. More information about other E4X implementations can be found at http://en.wikipedia.org/wiki/E4X. The key features of E4X include: It is based on standard scripting language specifications known as ECMAScript for XML. Flex implements these specifications in the form of API classes and functionality for simplifying the XML data processing. It provides easy and well-known operators, such as the dot (.) and @, to work with XML objects. The @ and dot (.) operators can be used not only to read data, but also to assign data to XML nodes, attributes, and so on. The E4X functionality is much easier and more intuitive than working with the DOM documents to access XML data. ActionScript 3.0 includes the following E4X classes: XML, XMLList, QName, and Namespace. 
These classes are designed to simplify XML data processing in Flex applications. Let's see one quick example: Define a variable of type XML and create a sample XML document. In this example, we will assign it as a literal. However, in the real world, your application might load XML data from external sources, such as a web service or an RSS feed. private var myBooks:XML =   <books publisher="Packt Pub">    <book title="Book1" price="99.99">    <author>Author1</author>    </book>    <book title="Book2" price="59.99">    <author>Author2</author>    </book>    <book title="Book3" price="49.99">    <author>Author3</author>    </book> </books>; Now, we will see some of the E4X approaches to read and parse the above XML in our application. E4X uses many operators to simplify accessing XML nodes and attributes, such as the dot (.) and attribute identifier (@) operators, for accessing properties and attributes. private function traceXML():void {    trace(myBooks.book.(@price < 50.99).@title); //Output: Book3    trace(myBooks.book[1].author); //Output: Author2    trace(myBooks.@publisher); //Output: Packt Pub    //Following for loop outputs prices of all books    for each(var price in myBooks..@price) {    trace(price);    } } In the code above, the first trace statement uses a conditional expression to extract the title of the book(s) priced below 50.99$. If we had to do this manually, imagine how much code would have been needed to parse the XML. In the second trace, we access a book node by index and print its author node's value. In the third trace, we simply print the root node's publisher attribute value, and finally we use a for loop to traverse the prices of all the books and print each price. The following is a list of XML operators:
@ (attribute identifier): Identifies attributes of an XML or XMLList object.
{ } (braces): Evaluates an expression that is used in an XML or XMLList initializer.
[ ] (brackets): Accesses a property or attribute of an XML or XMLList object, for example myBooks.book["@title"].
+ (concatenation): Concatenates (combines) XML or XMLList values into an XMLList object.
+= (concatenation assignment): Assigns the lefthand XMLList object the concatenation of both operands.
The XML object An XML class represents an XML element, attribute, comment, processing instruction, or a text element. We have used the XML class in our example above to initialize the myBooks variable with an XML literal. The XML class is included in the ActionScript 3.0 core classes, so you don't need to import a package to use it. The XML class provides many properties and methods to simplify XML processing, such as the ignoreWhitespace and ignoreComments properties, used for ignoring whitespace and comments in XML documents respectively. You can use the prependChild() and appendChild() methods to prepend and append XML nodes to existing XML documents. Methods such as toString() and toXMLString() allow you to convert XML to a string. An example of an XML object: private var myBooks:XML = <books publisher="Packt Pub"> <book title="Book1" price="99.99"> <author>Author1</author> </book> <book title="Book2" price="120.00"> <author>Author2</author> </book> </books>; In the above example, we have created an XML object by assigning an XML literal to it. You can also create an XML object from a string that contains XML data, as shown in the following example (note the single quotes around attribute values, so that they do not terminate the surrounding string literal): private var str:String = "<books publisher='Packt Pub'> <book title='Book1' price='99.99'> <author>Author1</author> </book> <book title='Book2' price='59.99'> <author>Author2</author> </book> </books>"; private var myBooks:XML = new XML(str); trace(myBooks.toXMLString()); //outputs formatted xml as string If the XML data in the string is not well-formed (for example, a closing tag is missing), you will see a runtime error.
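The same well-formedness rule applies on the Java side of a Flex-and-Java application: XML built from a string must parse cleanly or an exception is raised. A sketch using only the JDK's built-in DOM parser (illustrative, not tied to any particular server framework):

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;
import org.xml.sax.SAXException;

// Server-side counterpart: parsing an XML string in Java with the JDK's
// DOM parser. A missing closing tag raises SAXException, just as
// malformed XML raises a runtime error in ActionScript.
public class DomParseSketch {
    static Document parse(String xml) throws Exception {
        DocumentBuilder builder =
            DocumentBuilderFactory.newInstance().newDocumentBuilder();
        return builder.parse(new InputSource(new StringReader(xml)));
    }

    public static void main(String[] args) throws Exception {
        String xml = "<books publisher=\"Packt Pub\">"
                   + "<book title=\"Book1\" price=\"99.99\"/></books>";
        Document doc = parse(xml);
        System.out.println(doc.getDocumentElement().getAttribute("publisher"));
        try {
            parse("<books><book></books>"); // missing </book>
        } catch (SAXException e) {
            System.out.println("not well-formed");
        }
    }
}
```

Catching the parse failure explicitly, as above, is the Java analogue of wrapping `new XML(str)` in a try/catch on the Flex side.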
You can also use binding expressions in XML literals to pull content from variables. For example, you can set a node's title attribute from a variable value by using braces (without quotes) in the literal: private var title:String = "Book1"; private var aBook:XML = <book title={title}/>; To read more about XML class methods and properties, go through Flex 3 LiveDocs at http://livedocs.adobe.com/flex/3/langref/XML.html. The XMLList object As the class name indicates, XMLList contains one or more XML objects. It can contain full XML documents, XML fragments, or the results of an XML query. You can typically use all of the XML class's methods and properties on the objects from XMLList. To access these objects from the XMLList collection, iterate over it using a for each… statement. The XMLList provides you with the following methods to work with its objects: child(): Returns a specified child of every XML object children(): Returns specified children of every XML object descendants(): Returns all descendants of an XML object elements(): Calls the elements() method of each XML object in the XMLList. Returns all elements of the XML object parent(): Returns the parent of the XMLList object if all items in the XMLList object have the same parent attribute(attributeName): Calls the attribute() method of each XML object and returns an XMLList object of the results. The results match the given attributeName parameter attributes(): Calls the attributes() method of each XML object and returns an XMLList object of attributes for each XML object contains(): Checks if the specified XML object is present in the XMLList copy(): Returns a copy of the given XMLList object length(): Returns the number of properties in the XMLList object valueOf(): Returns the XMLList object For details on these methods, see the ActionScript 3.0 Language Reference.
Let's return to the example of the XMLList: var xmlList:XMLList = myBooks.book.(@price == 99.99); var item:XML; for each(item in xmlList) { trace("item:"+item.toXMLString()); } Output: item:<book title="Book1" price="99.99"> <author>Author1</author> </book> In the example above, we have used an XMLList to store the result of the myBooks.book.(@price == 99.99) statement. This statement returns an XMLList containing the XML node(s) whose price is 99.99$. Working with XML objects The XML class provides many useful methods to work with XML objects, such as the appendChild() and prependChild() methods to add an XML element to the end or beginning of an XML object, as shown in the following example: var node1:XML = <middleInitial>B</middleInitial> var node2:XML = <lastName>Kore</lastName> var root:XML = <personalInfo></personalInfo> root = root.appendChild(node1); root = root.appendChild(node2); root = root.prependChild(<firstName>Satish</firstName>); The output is as follows: <personalInfo> <firstName>Satish</firstName> <middleInitial>B</middleInitial> <lastName>Kore</lastName> </personalInfo> You can use the insertChildBefore() or insertChildAfter() method to add a property before or after a specified property, as shown in the following example (note that the new children are XML literals, not strings): var x:XML = <count> <one>1</one> <three>3</three> <four>4</four> </count>; x = x.insertChildBefore(x.three, <two>2</two>); x = x.insertChildAfter(x.four, <five>5</five>); trace(x.toXMLString()); The output of the above code is as follows: <count> <one>1</one> <two>2</two> <three>3</three> <four>4</four> <five>5</five> </count>
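The same node-by-node build-up is available on the Java side of a Flex-and-Java application through the DOM API. A JDK-only sketch mirroring the personalInfo example above — DOM's insertBefore() with the current first child plays the role of E4X's prependChild():

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.Node;

// Java/DOM counterpart of appendChild()/prependChild(): building the
// <personalInfo> document node by node. insertBefore() with the current
// first child is the DOM equivalent of E4X's prependChild().
public class DomBuildSketch {
    static Document buildPersonalInfo() throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                           .newDocumentBuilder().newDocument();
        Element root = doc.createElement("personalInfo");
        doc.appendChild(root);

        Element middle = doc.createElement("middleInitial");
        middle.setTextContent("B");
        root.appendChild(middle);

        Element last = doc.createElement("lastName");
        last.setTextContent("Kore");
        root.appendChild(last);

        Element first = doc.createElement("firstName");
        first.setTextContent("Satish");
        root.insertBefore(first, root.getFirstChild()); // "prepend"
        return doc;
    }

    public static void main(String[] args) throws Exception {
        Document doc = buildPersonalInfo();
        StringBuilder order = new StringBuilder();
        for (Node n = doc.getDocumentElement().getFirstChild();
             n != null; n = n.getNextSibling()) {
            order.append(n.getNodeName()).append(" ");
        }
        System.out.println(order.toString().trim());
    }
}
```

The resulting child order is firstName, middleInitial, lastName — the same structure the E4X example produces, which is convenient when the server assembles XML that a Flex client will consume.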
Packt
28 Oct 2009
7 min read

Working with XML in Flex 3 and Java-part2

Loading external XML documents

You can use the URLLoader class to load external data from a URL. The URLLoader class downloads data from a URL as text or binary data. In this section, we will see how to use the URLLoader class to load external XML data into your application. You create a URLLoader instance, register for its complete event to handle the loaded data, and call the load() method, passing a URLRequest as a parameter. The following code snippet shows how this works:

    private var xmlUrl:String = "http://www.foo.com/rssdata.xml";
    private var request:URLRequest = new URLRequest(xmlUrl);
    private var loader:URLLoader = new URLLoader();
    private var rssData:XML;

    loader.addEventListener(Event.COMPLETE, completeHandler);
    loader.load(request);

    private function completeHandler(event:Event):void {
        rssData = XML(loader.data);
        trace(rssData);
    }

Let's see one quick, complete sample of loading RSS data from the Internet:

    <?xml version="1.0" encoding="utf-8"?>
    <mx:Application xmlns:mx="http://www.adobe.com/2006/mxml"
        creationComplete="loadData();">
      <mx:Script>
        <![CDATA[
          import mx.collections.XMLListCollection;

          private var xmlUrl:String =
              "http://sessions.adobe.com/360FlexSJ2008/feed.xml";
          private var request:URLRequest = new URLRequest(xmlUrl);
          private var loader:URLLoader = new URLLoader();

          [Bindable]
          private var rssData:XML;

          private function loadData():void {
              loader.addEventListener(Event.COMPLETE, completeHandler);
              loader.load(request);
          }

          private function completeHandler(event:Event):void {
              rssData = new XML(loader.data);
          }
        ]]>
      </mx:Script>
      <mx:Panel title="RSS Feed Reader" width="100%" height="100%">
        <mx:DataGrid id="dgGrid" dataProvider="{rssData.channel.item}"
            height="100%" width="100%">
          <mx:columns>
            <mx:DataGridColumn headerText="Title" dataField="title"/>
            <mx:DataGridColumn headerText="Link" dataField="link"/>
            <mx:DataGridColumn headerText="pubDate" dataField="pubDate"/>
            <mx:DataGridColumn headerText="Description" dataField="description"/>
          </mx:columns>
        </mx:DataGrid>
        <mx:TextArea width="100%" height="80"
            text="{dgGrid.selectedItem.description}"/>
      </mx:Panel>
    </mx:Application>

In the code above, we load an RSS feed from an external URL and display it in a DataGrid by using data binding.

Output:

An example: Building a book explorer

In this section, we will build something more complicated and interesting by using many features, including custom components, events, data binding, E4X, loading external XML data, and so on. We will build a sample book explorer, which will load a book catalog from an external XML file and allow the user to explore and view details of books. We will also build a simple shopping cart component, which lists the books that the user adds to the cart by clicking on the Add to cart button.

Create a new Flex project using Flex Builder. Once the project is created, create an assets/images folder under its src folder. This folder will be used to store the images used in this application. Now start creating the following source files in the src folder.

Let's start by creating a simple book catalog XML file as follows:

bookscatalog.xml:

    <books>
      <book ISBN="184719530X">
        <title>Building Websites with Joomla! 1.5</title>
        <author>
          <lastName>Hagen</lastName>
          <firstName>Graf</firstName>
        </author>
        <image>../assets/images/184719530X.png</image>
        <pageCount>363</pageCount>
        <price>Rs.1,247.40</price>
        <description>The best-selling Joomla! tutorial guide updated for the
          latest 1.5 release</description>
      </book>
      <book ISBN="1847196160">
        <title>Drupal 6 JavaScript and jQuery</title>
        <author>
          <lastName>Matt</lastName>
          <firstName>Butcher</firstName>
        </author>
        <image>../assets/images/1847196160.png</image>
        <pageCount>250</pageCount>
        <price>Rs.1,108.80</price>
        <description>Putting jQuery, AJAX, and JavaScript effects into your
          Drupal 6 modules and themes</description>
      </book>
      <book ISBN="184719494X">
        <title>Expert Python Programming</title>
        <author>
          <lastName>Tarek</lastName>
          <firstName>Ziadé</firstName>
        </author>
        <image>../assets/images/184719494X.png</image>
        <pageCount>350</pageCount>
        <price>Rs.1,247.4</price>
        <description>Best practices for designing, coding, and distributing
          your Python software</description>
      </book>
      <book ISBN="1847194885">
        <title>Joomla! Web Security</title>
        <author>
          <lastName>Tom</lastName>
          <firstName>Canavan</firstName>
        </author>
        <image>../assets/images/1847194885.png</image>
        <pageCount>248</pageCount>
        <price>Rs.1,108.80</price>
        <description>Secure your Joomla! website from common security threats
          with this easy-to-use guide</description>
      </book>
    </books>

The above XML file contains the details of the individual books. You can also deploy this file on your web server and specify its URL in the URLRequest while loading it.

Next, we will create a custom event which we will dispatch from our custom component. Make sure you create a package called events under your src folder in Flex Builder, and place this file in it.

AddToCartEvent.as:

    package events {
        import flash.events.Event;

        public class AddToCartEvent extends Event {
            public static const ADD_TO_CART:String = "addToCart";
            public var book:Object;

            public function AddToCartEvent(type:String, bubbles:Boolean = false,
                                           cancelable:Boolean = false) {
                super(type, bubbles, cancelable);
            }
        }
    }

This is a simple custom event created by inheriting the flash.events.Event class.
This class defines the ADD_TO_CART string constant, which will be used as the name of the event in the addEventListener() method; you will see this in the BooksExplorer.mxml code. We have also defined an object to hold a reference to the book which the user adds to the shopping cart. In short, this object will hold the XML node of the selected book.

Next, we will create the MXML custom component called BookDetailItemRenderer.mxml. Make sure that you create a package called components under your src folder in Flex Builder, place this file in it, and copy the following code into it:

    <?xml version="1.0" encoding="utf-8"?>
    <mx:HBox xmlns:mx="http://www.adobe.com/2006/mxml" cornerRadius="8"
        paddingBottom="2" paddingLeft="2" paddingRight="2" paddingTop="2">
      <mx:Metadata>
        [Event(name="addToCart", type="flash.events.Event")]
      </mx:Metadata>
      <mx:Script>
        <![CDATA[
          import events.AddToCartEvent;
          import mx.controls.Alert;

          [Bindable]
          [Embed(source="../assets/images/cart.gif")]
          public var cartImage:Class;

          private function addToCartEventDispatcher():void {
              var addToCartEvent:AddToCartEvent =
                  new AddToCartEvent("addToCart", true, true);
              addToCartEvent.book = data;
              dispatchEvent(addToCartEvent);
          }
        ]]>
      </mx:Script>
      <mx:HBox width="100%" verticalAlign="middle" paddingBottom="2"
          paddingLeft="2" paddingRight="2" paddingTop="2" height="100%"
          borderStyle="solid" borderThickness="2" borderColor="#6E6B6B"
          cornerRadius="4">
        <mx:Image id="bookImage" source="{data.image}" height="109"
            width="78" maintainAspectRatio="false"/>
        <mx:VBox height="100%" width="100%" verticalGap="2"
            paddingBottom="0" paddingLeft="0" paddingRight="0"
            paddingTop="0" verticalAlign="middle">
          <mx:Label id="bookTitle" text="{data.title}"
              fontSize="12" fontWeight="bold"/>
          <mx:Label id="bookAuthor" text="By: {data.author.lastName},
              {data.author.firstName}" fontWeight="bold"/>
          <mx:Label id="coverPrice" text="Price: {data.price}"
              fontWeight="bold"/>
          <mx:Label id="pageCount" text="Pages: {data.pageCount}"
              fontWeight="bold"/>
          <mx:HBox width="100%" backgroundColor="#3A478D"
              horizontalAlign="right" paddingBottom="0" paddingLeft="0"
              paddingRight="5" paddingTop="0" height="22"
              verticalAlign="middle">
            <mx:Label text="Add to cart " color="#FFFFFF"
                fontWeight="bold"/>
            <mx:Button icon="{cartImage}" height="20" width="20"
                click="addToCartEventDispatcher();"/>
          </mx:HBox>
        </mx:VBox>
      </mx:HBox>
    </mx:HBox>
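One detail worth noting: the component dispatches AddToCartEvent with bubbles set to true, and Flash Player calls clone() on an event whenever it is redispatched while bubbling. The article's event class does not override clone(), so a redispatched copy would lose the book payload. A possible addition to AddToCartEvent.as (a hypothetical sketch, not part of the original code) is:

```actionscript
// Hypothetical override for events/AddToCartEvent.as: preserve the
// book payload when Flash Player clones the event during bubbling
// or redispatch.
override public function clone():Event {
    var event:AddToCartEvent =
        new AddToCartEvent(type, bubbles, cancelable);
    event.book = book;
    return event;
}
```

Without this override, the base Event.clone() would return a plain copy and event.book would be null in any listener that receives the cloned event.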
Packt
28 Oct 2009
13 min read

Using Spring Faces

Using the Spring Faces module

The following section shows you how to use the Spring Faces module.

Overview of all tags of the Spring Faces tag library

The Spring Faces module comes with a set of components, which are provided through a tag library. If you want more detailed information about the tag library, look at the following files inside the Spring Faces source distribution:

    spring-faces/src/main/resources/META-INF/spring-faces.tld
    spring-faces/src/main/resources/META-INF/springfaces.taglib.xml
    spring-faces/src/main/resources/META-INF/faces-config.xml

If you want to see the source code of a specific tag, refer to faces-config.xml and springfaces.taglib.xml to get the name of the class of the component. The spring-faces.tld file can be used for documentation purposes. The following overview gives a short description of the available tags from the Spring Faces component library:

includeStyles: Renders the CSS stylesheets which are essential for the Spring Faces components. Using this tag in the head section is recommended for performance optimization; if the tag isn't included, the necessary stylesheets are rendered on the first usage of a component. If you are using a template for your pages, it's a good pattern to include the tag in the header of that template. For more information about performance optimization, refer to the Yahoo performance guidelines at http://developer.yahoo.com/performance. Some tags of the Spring Faces tag library (includeStyles, resource, and resourceGroup) implement patterns to optimize performance on the client side.

resource: Loads and renders a resource with the ResourceServlet. You should prefer this tag to directly including a CSS stylesheet or a JavaScript file, because the ResourceServlet sets the proper response headers for caching the resource file.

resourceGroup: Combines all resources which are nested inside the tag; all resources must be of the same type. The tag uses the ResourceServlet with an appended parameter to create one resource file, which is sent to the client.

clientTextValidator: Validates a child inputText element on the client side. For the validation, you have an attribute called regExp where you can provide a regular expression.

clientNumberValidator: Validates a child inputText element on the client side. With the provided validation methods, you can check whether the text is a number and check some properties of the number, for example its range.

clientCurrencyValidator: Validates a child inputText element on the client side. This tag should be used if you want to validate a currency.

clientDateValidator: Validates a child inputText element on the client side. This tag should be used to validate a date; the field displays a pop-up calendar.

validateAllOnClick: Executes all client-side validations on the click of a specific element. That can be useful for a submit button.

commandButton: Makes it possible to execute an arbitrary method on an instance. The method itself has to be a public method with no parameters and a java.lang.Object instance as the return value.

commandLink: Renders an Ajax link. With the processIds attribute, you can provide the ids of the components which should be processed during the request.

ajaxEvent: Creates a JavaScript event listener. This tag should only be used if you can ensure that the client has JavaScript enabled.
A complete example

After we have shown the main configuration elements and described the Spring Faces components, the following section presents a complete example in order to give a good understanding of how to work with the Spring Faces module in your own web application.

The following diagram shows the screen of the sample application. With the shown screen, it is possible to create a new issue and save it to the bug database. It is not part of this example to describe the bug database or to describe how to work with databases in general. The sample uses the model classes.

The screen has three required fields:

    Name: The name of the issue
    Description: A short description of the issue
    Fix until: The fixing date for the issue

Additionally, there are the following two buttons:

    store: With a click on the store button, the system tries to create a new issue from the provided information.
    cancel: With a click on the cancel button, the system ignores the data which was entered and navigates to the overview page.

Now, the first step is to create the implementation of that input page. That implementation and its description are shown in the section below.

Creating the input page

As described above, we use Facelets as the view handler technology. Therefore, the pages are defined in XHTML, with .xhtml as the file extension. The name of the input page will be add.xhtml. For the description, we separate the page into the following five parts:

    Header
    Name
    Description
    Fix until
    The Buttons

This separation is shown in the diagram below:

The Header part

The first step in the header is to declare that we have an XHTML page. This is done through the definition of the correct doctype:

    <!DOCTYPE composition PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
      "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">

An XHTML page is described as an XML page. If you want to use special tags inside the XML page, you have to define a namespace for them.
For a page with Facelets and Spring Faces, we have to define more than one namespace. The following table describes those namespaces:

    http://www.w3.org/1999/xhtml: The namespace for XHTML itself.
    http://java.sun.com/jsf/facelets: Facelets defines some components (tags); these components are available under this namespace.
    http://java.sun.com/jsf/html: The user interface components of JSF are available under this namespace.
    http://java.sun.com/jsf/core: The core tags of JSF, for example converters, can be accessed under this namespace.
    http://www.springframework.org/tags/faces: The namespace for the Spring Faces component library.

A description and overview of the standard JSF tags is available at http://developers.sun.com/jscreator/archive/learning/bookshelf/pearson/corejsf/standard_jsf_tags.pdf.

For the header definition, we use the composition component of Facelets. With that component, it is possible to define a template for the layout; this is very similar to the previously mentioned Tiles framework. The following code snippet shows you the second part (after the doctype) of the header definition:

    <ui:composition template="/WEB-INF/layouts/standard.xhtml">

With the help of the template attribute, we refer to the layout template used; in our example, /WEB-INF/layouts/standard.xhtml. The following code shows the complete layout file standard.xhtml. This layout file is also described with the Facelets technology, so it is possible to use Facelets components inside that page, too. Additionally, we use Spring Faces components inside that layout page.
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
      "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml"
          xmlns:ui="http://java.sun.com/jsf/facelets"
          xmlns:f="http://java.sun.com/jsf/core"
          xmlns:sf="http://www.springframework.org/tags/faces">
      <f:view contentType="text/html">
        <head>
          <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
          <title>flow.tracR</title>
          <sf:includeStyles/>
          <sf:resourceGroup>
            <sf:resource path="/css-framework/css/tools.css"/>
            <sf:resource path="/css-framework/css/typo.css"/>
            <sf:resource path="/css-framework/css/forms.css"/>
            <sf:resource path="/css-framework/css/layout-navtop-localleft.css"/>
            <sf:resource path="/css-framework/css/layout.css"/>
          </sf:resourceGroup>
          <sf:resource path="/css/issue.css"/>
          <ui:insert name="headIncludes"/>
        </head>
        <body class="tundra spring">
          <div id="page">
            <div id="content">
              <div id="main">
                <ui:insert name="content"/>
              </div>
            </div>
          </div>
        </body>
      </f:view>
    </html>

The Name part

The first element in the input page is the section for the input of the name. For the description of that section, we use elements from the JSF component library; we access this library with the h prefix, which we defined in the header section. For the general layout, we use standard HTML elements, such as the div element. The definition is shown below:

    <div class="field">
      <div class="label">
        <h:outputLabel for="name">Name:</h:outputLabel>
      </div>
      <div class="input">
        <h:inputText id="name" value="#{issue.name}" />
      </div>
    </div>

The Description part

The next element in the page is the Description element. The definition is very similar to the Name part; in addition, we use the required attribute with true as its value on the h:inputText element. This attribute tells the JSF system that the issue.description value is mandatory. If the user does not enter a value, the validation fails.
    <div class="field">
      <div class="label">
        <h:outputLabel for="description">Description:</h:outputLabel>
      </div>
      <div class="input">
        <h:inputText id="description" value="#{issue.description}"
            required="true"/>
      </div>
    </div>

The Fix until part

The last input section is the Fix until part. This kind of field is very common in web applications, because there is often the need to input a date. Internally, a date is often represented by an instance of the java.util.Date class, so the text entered by the user has to be validated and converted in order to get a valid instance. To help the user with the input, a pop-up calendar is often used. The Spring Faces library offers the clientDateValidator component, which shows a calendar and adds client-side validation. The component is used with the sf prefix, which is defined in the namespace declarations in the header of the add.xhtml page shown earlier. The complete definition of the Fix until part is shown below:

    <div class="field">
      <div class="label">
        <h:outputLabel for="checkinDate">Fix until:</h:outputLabel>
      </div>
      <div class="input">
        <sf:clientDateValidator required="true"
            invalidMessage="please insert a correct fixing date. format: dd.MM.yyyy"
            promptMessage="Format: dd.MM.yyyy, example: 01.01.2020">
          <h:inputText id="checkinDate" value="#{issue.fixingDate}"
              required="true">
            <f:convertDateTime pattern="dd.MM.yyyy" timeZone="GMT"/>
          </h:inputText>
        </sf:clientDateValidator>
      </div>
    </div>

In the example above, we use the promptMessage attribute to help the user with the format; the message is shown when the user sets the cursor on the input element. If the validation fails, the message of the invalidMessage attribute is used to show the user that wrongly formatted input has been entered.

The Buttons part

The last elements in the page are the buttons. For these buttons, the commandButton component from Spring Faces is used.
The definition is shown below:

    <div class="buttonGroup">
      <sf:commandButton id="store" action="store" processIds="*"
          value="store"/>
      <sf:commandButton id="cancel" action="cancel" processIds="*"
          value="cancel"/>
    </div>

It is worth mentioning that JavaServer Faces binds an action to the action method of a backing bean, whereas Spring Web Flow binds the action to events.

Handling of errors

It is possible to have validation on the client side or on the server side. For the Fix until element, we use the previously mentioned clientDateValidator component of the Spring Faces library. The following figure shows how this component presents the error message to the user:

Reflecting the actions of the buttons in the flow definition file

Clicking a button executes an action that has a transition as its result. The name of the action is expressed in the action attribute of the button component, which is implemented as commandButton from the Spring Faces component library. If you click on the store button, validation is executed first. If you want to prevent that validation, you have to use the bind attribute and set it to false. This feature is used for the cancel button, because in this state it is necessary to ignore the inputs.

    <view-state id="add" model="issue">
      <transition on="store" to="issueStore">
        <evaluate expression="persistenceContext.persist(issue)"/>
      </transition>
      <transition on="cancel" to="cancelInput" bind="false">
      </transition>
    </view-state>

Showing the results

To test the implemented feature, we implement an overview page. We have the choice of implementing the page as a flow with one view state or as a simple JSF view. Independent of that choice, we will use Facelets to implement the overview page, because Facelets does not depend on the Spring Web Flow framework, as it is a feature of JSF. The example uses a table to show the entered issues. If no issue has been entered, a message is shown to the user.
The figure below shows this table with one row of data. The Id is rendered as a URL; if you click on this link, the input page is shown with the data of that issue, and saving then executes an update instead of an insert. The indicator for that is the valid ID of the issue. If no data is available, the "No Issues in the database" message is shown to the user. This is done with a condition on the rendered attribute of the outputText component. See the code snippet below:

    <h:outputText id="noIssuesText" value="No Issues in the database"
        rendered="#{empty issueList}"/>

For the table, we use the dataTable component:

    <h:dataTable id="issues" value="#{issueList}" var="issue"
        rendered="#{not empty issueList}" border="1">
      <h:column>
        <f:facet name="header">Id</f:facet>
        <a href="add?id=#{issue.id}">#{issue.id}</a>
      </h:column>
      <h:column>
        <f:facet name="header">Name</f:facet>
        #{issue.name}
      </h:column>
      <h:column>
        <f:facet name="header">fix until</f:facet>
        #{issue.fixingDate}
      </h:column>
      <h:column>
        <f:facet name="header">creation date</f:facet>
        #{issue.creationDate}
      </h:column>
      <h:column>
        <f:facet name="header">last modified</f:facet>
        #{issue.lastModified}
      </h:column>
    </h:dataTable>