How-To Tutorials - Programming

1081 Articles

Getting Your Hands Dirty with jPDL: Part 2

Packt
06 Jan 2010
4 min read
Describing how the job position is requested

In the first part, we found answers to most of our questions; however, a few remain unanswered:

- How is the first part of the process represented?
- How can we track when the new job position is discovered, when the request for that job position is created, and when the job position is fulfilled?
- Why can't we add more activities to the currently defined process?
- What happens when we add the create request, find a candidate, and job position fulfilled activities inside the interview process?

The answers to these questions are simple. We cannot add these proposed nodes to the same process definition, because the interview process needs to be carried out (needs to be instantiated) once for each candidate that the recruiting team finds. Basically, we need to decouple all of these activities into two processes. As the MyIT Inc. manager said, the relationship between these activities is that one job request will be associated with N interview processes.

The other important thing to understand here is that the two processes can be decoupled without using a parent/child relationship. In this case, we need to create a new interview process instance each time a new candidate is found. In other words, we don't know how many interview process instances will be created when the request is created; therefore, we need to be able to make these creations dynamically.

We will introduce a new process that will define these new activities. We need a separate concept that will create, on demand, a new candidate interview process, based on the number of candidates found by the human resources team. This new process will be called "Request Job Position" and will include the following activities:

- Create job request: Different project leaders can create different job requests based on their needs. Each time a project leader needs to hire a new employee, a new instance of this process will be created, and the first activity of this process is the creation of the request.
- Find a candidate: This activity covers the phase in which the search starts. Each time the human resources team finds a new candidate inside this activity, they will create a new instance of the candidate interview process. When an instance of the candidate interview process finds a candidate who fulfills all the requirements for that job position, all the remaining interviews need to be aborted.

We can see the relationship between the two processes in the following figure. If we express the Request Job Position process in jPDL, we will obtain something like this:

In the following section, we will see two different environments in which we can run our process. We need to understand the differences between them in order to know how the process will behave at the runtime stage.

Environment possibilities

The configuration we need depends on the way we choose to embed the framework in our application. We have three main possibilities:

- Standalone applications
- Web applications
- Enterprise applications

Standalone application with jBPM embedded

In Java Standard Edition (J2SE) applications, we can embed jBPM and connect it directly to a database in order to store our processes. This scenario will look like the following image: In this case, we need to include the jBPM JARs in our application classpath in order for this to work. This is because our application will use the jBPM API directly in our classes.
In this scenario, the end users will interact with a desktop application that includes the jbpm-jpdl.jar file. This also means that, during the development process, the developers will need to know the jBPM APIs in order to interact with different business processes. It's important for you to know that the configuration files, such as hibernate.cfg.xml and jbpm.cfg.xml, will be configured to access the database with a direct JDBC connection.

Web application with jBPM dependency

This option varies depending on whether your application will run on an application server or just inside a servlet container. This scenario will look like: In this case, we can choose whether our application will include the jBPM JARs inside it, or whether the container will provide these libraries. But once again, our application will use the jBPM APIs directly. In this scenario, the end user will interact with the process using a web page that will be configured to access a database by using a JDBC driver directly or through a DataSource configuration.
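In both of these embedded scenarios, our classes drive the engine through the same jBPM API. Purely as an illustration, the following minimal sketch (assuming jBPM 3.x jPDL, a jbpm.cfg.xml on the classpath, and a hypothetical resource path for the process archive) deploys the Request Job Position definition and starts a new instance of it:

import org.jbpm.JbpmConfiguration;
import org.jbpm.JbpmContext;
import org.jbpm.graph.def.ProcessDefinition;
import org.jbpm.graph.exe.ProcessInstance;

public class RequestJobPositionStarter {
    public static void main(String[] args) {
        // Reads jbpm.cfg.xml (and, through it, hibernate.cfg.xml) from the classpath.
        JbpmConfiguration configuration = JbpmConfiguration.getInstance();
        JbpmContext jbpmContext = configuration.createJbpmContext();
        try {
            // Hypothetical resource name for the jPDL definition.
            ProcessDefinition definition =
                ProcessDefinition.parseXmlResource("requestJobPosition/processdefinition.xml");
            jbpmContext.deployProcessDefinition(definition);

            // One new instance per job request created by a project leader.
            ProcessInstance instance = jbpmContext.newProcessInstance("Request Job Position");
            instance.signal(); // leave the start state
        } finally {
            jbpmContext.close(); // flushes state to the database
        }
    }
}

The same API calls would be issued from a servlet in the web scenario; only the surrounding configuration (direct JDBC connection versus DataSource) changes.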


Getting Your Hands Dirty with jPDL: Part 1

Packt
06 Jan 2010
9 min read
This example will introduce us to all the basic jPDL nodes used in common situations for modeling real-world scenarios. That's why this article will cover the following topics:

- Introduction to the recruiting example
- Analyzing the example requirements
- Modeling a formal description
- Adding technical details to our formal description
- Running our processes

The idea of this article is to show you a real process implementation. We will try to cover every technical aspect involved in the development in order to clarify your doubts not only about modeling, but also about the framework's behavior.

How is this example structured?

In this article, we will see a real case where a company has some requirements to improve an existing, but not automated, process. The current process is being handled without a software solution, so we practically need to see how the process works every day in order to find out the requirements for our implementation. The textual/oral description of the process will be our first input, and we will use it to discover and formalize our business process definition. Once we have a clear view of the situation that we are modeling, we will draw the process using GPD, and analyze the most important points of the modeling phase. Once we have a valid jPDL process artifact, we will need to analyze what steps are required for the process to be able to run in an execution environment. So, we will add all the technical details needed to allow our process to run. At last, we will see how the process behaves at runtime, how we can improve the described process, how we can adapt the current process to future changes, and so on.

Key points that you need to remember

In these kinds of examples, you need to focus on the translation that occurs from the business domain to the technical domain. You need to carefully analyze how the business requirements are transformed into a formal model description that can be optimized. Another key point here is how this formal description of our business scenario needs to be configured (by adding technical details) in order to run and guide the organization through its processes. I also want you to focus on the semantics of each node used to model our process. If you don't know the exact meaning of the provided nodes, you will probably end up describing your scenario with the wrong words. You also need to be able to distinguish between a business analyst's model, whose author doesn't know about the jPDL language semantics, and a formal jPDL process definition. At the same time, you have to be able to do the translations needed between these two worlds. If you have business analysts trained in jPDL, you will not have to do these kinds of translations and your life will be easier. Understanding the nodes' semantics will help you teach the business analysts the correct meaning of jPDL processes.

Analyzing business requirements

Here we will describe the requirements that need to be covered by the recruiting team inside an IT company. These requirements will be the first input to be analyzed in order to discover the business process behind them. They are expressed in a natural language, just plain English. We will get these requirements by talking to our clients; in this case, we will talk to the manager of an IT company called MyIT Inc. in order to find out what is going on in the company's recruiting process.
In most cases, this will be a business analyst's job, but as a developer you need to be aware of the different situations that the business scenario can present. This is very important, because if you don't understand how the real situation is subdivided into different behavioral patterns, you will not be able to find the best way to model it. You will also start to see how iterative this approach is. This means that you will first get a big picture of what is going on in the company, and then, in order to formalize this business knowledge, you will start adding details to represent the real situation in an accurate way.

Business requirements

In this section, we will see a transcription of our talk with the MyIT Inc. manager. However, we first need to know the company's background and, specifically, how it is currently working. Just a few details to understand the context of our talk with the company manager will be sufficient. The recruiting department of MyIT Inc. is currently managed without any information system. They just use some simple forms that the candidates have to fill in at different stages during the interviews. They don't have the recruiting process formalized in any way, just an abstract description in their heads of how and what tasks they need to complete in order to hire a new employee when needed. In this case, the MyIT Inc. manager tells us the following functional requirements about the recruiting process that is currently used in the company:

We have a lot of demanding projects; that's why we need to hire new employees on a regular basis. We already have a common way to handle the requests raised by project leaders who need to incorporate new members into their teams. When a project leader notices that he/she needs a new team member, he/she will generate a request to the human resources department of the company. In this request, he/she will specify the main characteristics needed in the new team member and the job position description. When someone in the human resources team sees the request, they will start looking for candidates to fulfill it. This team has two ways of looking for new candidates:

- By publishing the job position request in IT magazines
- By searching the resume database that is available to the company

When a possible candidate is found through these methods, a set of interviews will begin. The interviews are divided into four stages that the candidate needs to go through in order to be hired. These stages will contain the following activities, which need to be performed in the prescribed order:

- Initial interview: The human resources team coordinates an initial interview with each possible candidate found. In this interview, a basic questionnaire about the candidate's previous jobs is filled in and some personal data is collected.
- Technical interview: During the technical interview stage, each candidate is evaluated only on the technical aspects required for this particular project. That is why a project member will conduct this interview.
- Medical checkups: Some physical and psychological examinations need to be done in order to know that the candidate is healthy and capable of doing the required job. This stage will include the multiple checkups that the company needs in order to determine whether the candidate is fit for the required task.
- Final acceptance: In this last phase, the candidate will meet the project manager. The project manager is in charge of the final resolution. He will decide whether the candidate is the right one for the job position.
If the outcome of this interview is successful, the candidate is hired and all the information needed for that candidate to start working is created. If a candidate reaches the last phase and is successfully accepted, we need to inform the recruiting team that all the other candidates' interviews need to be aborted, because the job position is already fulfilled.

At this point, we need to analyze and evaluate the manager's requirements and find a graphical way to express the stages involved in hiring a new employee. Our first approach needs to be simple, and we need to validate it with the MyIT Inc. manager. Let's see the first draft of our process: With this image, we were able to describe the recruiting process. This is our first approach, which obviously must be validated with the MyIT Inc. manager. This first draft tells us how our process will appear, and it's the first step in defining which activities will be included in our model and which will not. In real implementations, these graphs can be made with Microsoft Visio, DIA (an open source project), or just by hand. The main idea of the first approach is to have a description that can be validated and understood by every MyIT Inc. employee. This image is only a translation of the requirements that we heard from the manager, using common sense and trying to represent how the situation looks in real life. In this case, the manager of MyIT Inc. can be considered the stakeholder and the Subject Matter Expert (SME), who knows how things happen inside the company.

Once the graph is validated and understood by the stakeholder, we can use our formal language, jPDL, to create a formal model of this discovered process. The idea at this point is to create a jPDL process definition and discard the old graph. From now on, we will continue with the jPDL graphical representation of the process. Here you can explain to the manager that all new changes that affect the process will go directly into the jPDL-defined process. Until now, our artifact has undergone the following transformations: The final artifact (the jPDL process definition) will let us begin the implementation of all the technical details needed by the process in order to run in an execution environment. So, let's analyze how the jPDL representation of this first approach will look in the following figure:

At this point we don't add any technical details; we just draw the process. One key point to bear in mind in this phase is that we need to understand which node we will use to represent each activity in our process definition. Remember that each node provided by jPDL has its own semantics and meaning. You also need to remember that this graph has to be understood by the manager, so you will use business language in the activity names. For this first approach, we use state nodes to represent the fact that each activity happens outside the process execution. In other words, we need to inform the process when each activity ends; this will mean that the next activity in the chain can be executed. From the process perspective, the process only needs to wait until the human beings in the company do their tasks.
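Because a state node simply waits, some application code has to tell the engine when the external activity has finished. Purely as an illustration (a minimal sketch assuming the jBPM 3.x API and a hypothetical process instance ID), signaling the end of an activity could look like this:

import org.jbpm.JbpmConfiguration;
import org.jbpm.JbpmContext;
import org.jbpm.graph.exe.ProcessInstance;

public class ActivityCompleter {
    // Called when, for example, the initial interview has been carried out.
    public void completeCurrentActivity(long processInstanceId) {
        JbpmContext jbpmContext = JbpmConfiguration.getInstance().createJbpmContext();
        try {
            ProcessInstance instance = jbpmContext.loadProcessInstance(processInstanceId);
            instance.signal();          // leave the current state node
            jbpmContext.save(instance); // persist the new position of the execution
        } finally {
            jbpmContext.close();
        }
    }
}

Each call to signal() moves the execution to the next node in the chain, which matches the wait-and-continue behavior described above.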


Configuring JBoss Application Server 5

Packt
05 Jan 2010
7 min read
JBoss Web Server currently uses the Apache Tomcat 6.0 release, and it ships as a service archive (SAR) application in the deploy folder. The location of the embedded web server has changed at almost every new release of JBoss. The following table could be a useful reference if you are using different versions of JBoss:

JBoss release: Location of Tomcat
5.0.0 GA: deploy/jbossweb.sar
4.2.2 GA: deploy/jboss-web.deployer
4.0.5 GA: deploy/jbossweb-tomcat55.sar
3.2.X: deploy/jbossweb-tomcat50.sar

The main configuration file is server.xml which, by default, has the following minimal configuration:

<Server>
  <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
  <Listener className="org.apache.catalina.core.JasperListener" />
  <Service name="jboss.web">
    <Connector protocol="HTTP/1.1" port="8080" address="${jboss.bind.address}"
               connectionTimeout="20000" redirectPort="8443" />
    <Connector protocol="AJP/1.3" port="8009" address="${jboss.bind.address}"
               redirectPort="8443" />
    <Engine name="jboss.web" defaultHost="localhost">
      <Realm className="org.jboss.web.tomcat.security.JBossWebRealm"
             certificatePrincipal="org.jboss.security.auth.certs.SubjectDNMapping"
             allRolesMode="authOnly" />
      <Host name="localhost">
        <Valve className="org.jboss.web.tomcat.service.jca.CachedConnectionValve"
               cachedConnectionManagerObjectName="jboss.jca:service=CachedConnectionManager"
               transactionManagerObjectName="jboss:service=TransactionManager" />
      </Host>
    </Engine>
  </Service>
</Server>

Following is a short description of the key elements of the configuration:

- Server: The Server is Tomcat itself, that is, an instance of the web application server; it is a top-level component.
- Service: An intermediate component that groups one or more Connectors with the single Engine that they feed requests to.
- Connector: The gateway to the Tomcat Engine. It ensures that requests are received from clients and are assigned to the Engine.
- Engine: A request-processing component that represents the Catalina servlet engine. It handles all requests and examines the HTTP headers to determine the virtual host or context to which requests should be passed.
- Host: One virtual host. Each virtual host is differentiated by a fully qualified hostname.
- Valve: A component that is inserted into the request-processing pipeline of the associated Catalina container. Each Valve has distinct processing capabilities.
- Realm: Contains a set of users and roles.

As you can see, all the elements are organized in a hierarchical structure where the Server element acts as the top-level container. The lowest elements in the configuration are Valve and Realm, which can be nested into Engine or Host elements to provide unique processing capabilities and role management.

Customizing connectors

Most of the time, when you want to customize your web container, you will have to change some properties of the connector:

<Connector protocol="HTTP/1.1" port="8080" address="${jboss.bind.address}"
           connectionTimeout="20000" redirectPort="8443" />

A complete list of the connector properties can be found on the Apache Tomcat site (http://tomcat.apache.org/). Here, we'll discuss the most useful connector properties:

- port: The TCP port number on which this connector will create a server socket and await incoming connections. Your operating system will allow only one server application to listen to a particular port number on a particular IP address.
- acceptCount: The maximum queue length for incoming connection requests when all possible request-processing threads are in use. Any requests received when the queue is full will be refused. The default value is 10.
- connectionTimeout: The number of milliseconds the connector will wait, after accepting a connection, for the request URI line to be presented. The default value is 60000 (that is, 60 seconds).
- address: For servers with more than one IP address, this attribute specifies which address will be used for listening on the specified port. By default, this port will be used on all IP addresses associated with the server.
- enableLookups: Set to true if you want to perform DNS lookups in order to return the actual hostname of the remote client, and to false in order to skip the DNS lookup and return the IP address in string form instead (thereby improving performance). By default, DNS lookups are enabled.
- maxHttpHeaderSize: The maximum size of the request and response HTTP headers, specified in bytes. If not specified, this attribute is set to 4096 (4 KB).
- maxPostSize: The maximum size, in bytes, of a POST that will be handled by the container's FORM URL parameter parsing. The limit can be disabled by setting this attribute to a value less than or equal to zero. If not specified, this attribute is set to 2097152 (2 megabytes).
- maxThreads: The maximum number of request-processing threads to be created by this connector, which therefore determines the maximum number of simultaneous requests that can be handled. If not specified, this attribute is set to 200.

The new Apache Portable Runtime connector

Apache Portable Runtime (APR) is a core Apache 2.x library designed to provide superior scalability, performance, and better integration with native server technologies. The mission of the APR project is to create and maintain software libraries that provide a predictable and consistent interface to underlying platform-specific implementations. The primary goal is to provide an API to which software developers may code and be assured of predictable, if not identical, behaviour regardless of the platform on which their software is built, relieving them of the need to code special-case conditions to work around or take advantage of platform-specific deficiencies or features.

The high performance of the new APR connector is made possible by the introduction of socket pollers for persistent connections (keepalive). This increases the scalability of the server, and by using sendfile system calls, static content is delivered faster and with lower CPU utilization. Once you have set up the APR connector, you are allowed to use the following additional properties in your connector:

- keepAliveTimeout: The number of milliseconds the APR connector will wait for another HTTP request before closing the connection. If not set, this attribute will use the default value set for the connectionTimeout attribute.
- pollTime: The duration of a poll call, in microseconds; by default it is 2000 (2 ms). If you decrease this value, the connector will issue more poll calls, thus reducing the latency of the connections. Be aware that this will put slightly more load on the CPU as well.
- pollerSize: The number of keepalive connections that the poller can hold at a given time. The default value is 768, corresponding to 768 keepalive connections.
- useSendfile: Enables use of the kernel's sendfile for sending certain static files. The default value is true.
- sendfileSize: The number of sockets that the poller thread dispatches for sending static files asynchronously. The default value is 1024.

If you want to consult the full documentation of APR, you can visit http://apr.apache.org/.

Installing the APR connector

In order to install the APR connector, you need to add some native libraries to your JBoss server. The native libraries can be found at http://www.jboss.org/jbossweb/downloads/jboss-native/. Download the version that is appropriate for your OS. Once you are ready, you simply need to unpack the content of the archive into your JBOSS_HOME directory. As an example, Unix users (such as HP users) would need to perform the following steps:

cd jboss-5.0.0.GA
tar xvfz jboss-native-2.0.6-hpux-parisc2-ssl.tar.gz

Now, restart JBoss and, from the console, verify that the connector is bound to Http11AprProtocol.

A word of caution! At the time of writing, the APR library still has some open issues that prevent it from loading correctly on some platforms, particularly on 32-bit Windows. Please consult the JBoss Issue Tracker (https://jira.jboss.org/jira/secure/IssueNavigator.jspa?) to verify that there are no open issues for your platform.
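As a side note, server.xml is not the only place where these connector attributes can be applied; when Tomcat is embedded programmatically, the same settings can be passed through the Connector class. The following is only a rough sketch (assuming the Tomcat 6 catalina.jar on the classpath; the property names simply mirror the attributes discussed above):

import org.apache.catalina.connector.Connector;

public class ConnectorTuning {
    public static Connector buildTunedConnector() throws Exception {
        Connector connector = new Connector("HTTP/1.1");
        connector.setPort(8080);
        // Extra attributes are passed through to the protocol handler as properties.
        connector.setProperty("maxThreads", "400");      // allow more simultaneous requests
        connector.setProperty("acceptCount", "100");     // longer backlog queue
        connector.setProperty("enableLookups", "false"); // skip DNS lookups for performance
        connector.setProperty("maxPostSize", "1048576"); // 1 MB POST limit
        return connector; // to be registered with a Service before starting the server
    }
}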


Getting Started With Spring MVC - Developing the MVC components

Packt
31 Dec 2009
5 min read
In the world of networked applications, thin clients (also known as web applications) are more in demand than thick clients. Due to this demand, every language provides frameworks that try to make web-application development simpler. The simplicity is not provided just through setting up the basic application structure or generating boilerplate code. These frameworks try to provide simplicity through pluggability, that is, the components of different frameworks can be brought together without much difficulty. Among such frameworks, the Spring Framework is one of the most used. Its support for multiple data-access frameworks/libraries and its lightweight IoC container make it suitable for scenarios where one would like to mix and match multiple frameworks, a different one for each layer. This aspect of the Spring Framework makes it especially suitable for the development of web applications, where the UI does not need to know which framework it is dealing with for business processing or data access. The component of the Spring Framework stack that caters to the web UI is Spring MVC. In this discussion, we will focus on the basics of Spring MVC. The first section will deal with the terms and terminology related to Spring MVC and MVC. The second section will detail the steps for developing the components of a web application using Spring MVC. That is the agenda for this discussion.

Spring MVC

Spring MVC, as the name suggests, is a framework based on the Model (M), View (V), Controller (C) pattern. Currently there are more than seven well-known web-application frameworks that implement the MVC pattern. So what are the features of Spring MVC that set it apart from other frameworks? The two main features are:

- Pluggable View technology
- Injection of services into controllers

The former provides a way to use different UI frameworks instead of Spring MVC's UI library, and the latter removes the need to develop a new way to access the functionality of the business layer.

Pluggable View technology

Various View technologies are available in the market (including Tiles, Velocity, and so on) with which Spring MVC can quite easily be integrated. In other words, JSP is not the only template engine supported. The pluggable feature is not limited to templating technologies. By using common configuration functionality, other frameworks such as JSF can be integrated with Spring MVC applications. Thus, it is possible to mix and match different View technologies by using Spring MVC.

Injection of services into controllers

This feature comes into the picture when the Spring Framework is used to implement the business layer. Using the IoC capabilities of the Spring Framework, the business layer services and/or objects can be injected into the Controller without explicitly setting up the call to the service or mirroring the business layer objects in the controller. This helps reduce code duplication between the web UI/process layer and the business process layer.

The next important aspect of Spring MVC is its components. They are:

- Model (M)
- View (V)
- Controller (C)

The Model deals with the data that the application has to present, the View contains the logic to present the data, and the Controller takes care of the flow of navigation and application logic. Following are the details.

Model

The Model is an object that holds the data to be displayed. It can be any Java object, from a simple POJO to any type of Collection object.
It can also be a combination of both: an instance of a POJO to hold the detailed data and a collection object to hold all the instances of the POJO, which, in reality, is the most commonly used Model in Spring MVC. The framework also has its own way to hold the data: it holds the data using a Model object that is an instance of org.springframework.ui.ModelMap. Internally, whichever collection class object is used, the framework maps it to the ModelMap class.

View

In MVC, it is the View that presents the data to the user. Spring MVC, just like many other JEE frameworks, uses a combination of JSP and tag libraries to implement the View. Apart from JSP, many kinds of View technologies such as Tiles, Velocity, and Jasper Reports can be plugged into the framework. The main type behind this pluggability is org.springframework.web.servlet.View. The View achieves the plug-in functionality by being presented as a Logical View instead of an actual/physical View. The physical view corresponds to the page developed using any of the templating technologies. The Logical View corresponds to the name of the View to be used. The name is then mapped to the actual View in the configuration file. One important point to remember about how Spring MVC uses Logical Views is that the Logical View and the Model are treated as one entity, named Model And View, represented by the org.springframework.web.servlet.ModelAndView class.

Controller

The flow of the application and its navigation is directed by the controller. It also processes the user input and transforms it into the Model. In Spring MVC, controllers are developed either by extending the out-of-the-box Controller classes or by implementing the Controller interface. The following come under the former category:

- SimpleFormController
- AbstractController
- AbstractCommandController
- CancellableFormController
- MultiActionController
- ParameterizableViewController
- ServletForwardingController
- ServletWrappingController
- UrlFilenameViewController

Of these, the most commonly used are AbstractController, AbstractCommandController, SimpleFormController, and CancellableFormController. That wraps up this section. Let us move on to the next section: steps for developing an application using Spring MVC.
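Before moving on, here is a small, hypothetical example of the first approach: a controller that extends AbstractController and hands a Model plus a Logical View name back to the framework (a sketch assuming Spring MVC 2.x; the view name and model data are made up for illustration):

import java.util.Arrays;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.springframework.web.servlet.ModelAndView;
import org.springframework.web.servlet.mvc.AbstractController;

public class StudentListController extends AbstractController {
    @Override
    protected ModelAndView handleRequestInternal(HttpServletRequest request,
            HttpServletResponse response) throws Exception {
        // "studentList" is the Logical View name; the configured view
        // resolver maps it to a physical JSP (or other template).
        ModelAndView mav = new ModelAndView("studentList");
        // The model data would normally come from an injected business service.
        mav.addObject("students", Arrays.asList("First Student", "Second Student"));
        return mav;
    }
}

A service implementing the business logic would be injected into this controller through its bean definition, which is exactly the second distinguishing feature described above.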


Starting Up Tomcat 6: Part 1

Packt
31 Dec 2009
9 min read
Using scripts

The Tomcat startup scripts are found within your project under the bin folder. Each script is available either as a Windows batch file (.bat) or as a Unix shell script (.sh). The behavior of either variant is very similar, so I'll focus on their general structure and responsibilities rather than on any operating system differences. However, it is to be noted that the Unix scripts are often more readable than the Windows batch files. In the following text, the specific extension has been omitted, and either .bat or .sh should be substituted as appropriate. Furthermore, while the Windows file separator '\' has been used, it can be substituted with a '/' as appropriate.

The overall structure of the scripts is as shown; you will most often invoke the startup script. Note that the shutdown script has a similar call structure. However, given its simplicity, it lends itself to fairly easy investigation, and so I leave it as an exercise for the reader. Both startup and shutdown are simple convenience wrappers for invoking the catalina script. For example, invoking startup.bat with no command line arguments calls catalina.bat with an argument of start. On the other hand, running shutdown.bat calls catalina.bat with a command line argument of stop. Any additional command line arguments that you pass to either of these scripts are passed right along to catalina.bat.

The startup script has the following three main goals:

1. If the CATALINA_HOME environment variable has not been set, it is set to the Tomcat installation's root folder. The Unix variant defers this action to the catalina script.
2. It looks for the catalina script within the CATALINA_HOME\bin folder. The Unix variant looks for it in the same folder as the startup script. If the catalina script cannot be located, we cannot proceed, and the script aborts.
3. It invokes the catalina script with the command line argument start, followed by any other arguments supplied to the startup script.

The catalina script is the actual workhorse in this process. Its tasks can be broadly grouped into two categories: first, it must ensure that all the environment variables needed for it to function have been set up correctly, and second, it must execute the main class file with the appropriate options.

Setting up the environment

In this step, the catalina script sets the CATALINA_HOME, CATALINA_BASE, and CATALINA_TMPDIR environment variables, sets variables to point to various Java executables, and updates the CLASSPATH variable to limit the repositories scanned by the System class loader.

It ensures that the CATALINA_HOME environment variable is set appropriately. This is necessary because catalina can be called independently of the startup script. Next, it calls the setenv script to give you a chance to set any installation-specific environment variables that affect the processing of this script. This includes variables that set the path of your JDK or JRE installation, any Java runtime options that need to be used, and so on. If CATALINA_BASE is set, then the CATALINA_BASE\bin\setenv script is called; else, the version under CATALINA_HOME is used.

If the CATALINA_HOME\bin\setclasspath script does not exist, processing aborts. Else, the BASEDIR environment variable is set to CATALINA_HOME and the setclasspath script is invoked. This script performs the following activities:

It verifies that either a JDK or a JRE is available. If both the JAVA_HOME and JRE_HOME environment variables are not set, it aborts processing after warning the user.
If we are running Tomcat in debug mode, that is, if '-debug' has been specified as a command line argument, it verifies that a JDK (and not just a JRE) is available. If the JAVA_ENDORSED_DIRS environment variable is not set, it is defaulted to BASEDIR\endorsed. This variable is fed to the JVM as the value of the -Djava.endorsed.dirs system property. The CLASSPATH is then truncated to point at just JAVA_HOME\lib\tools.jar. This is a key aspect of the startup process, as it ensures that any CLASSPATH set in your environment is now overridden.

Note that tools.jar contains the classes needed to compile and run Java programs, and to support tools such as Javadoc and native2ascii. For instance, the class com.sun.tools.javac.main.Main that is found in tools.jar represents the javac compiler. A Java program could dynamically create a Java class file and then compile it using an instance of this compiler class.

Finally, variables are set to point to various Java executables, such as java, javaw (identical to java, but without an associated console window), jdb (the Java debugger), and javac (the Java compiler). These are referred to using the _RUNJAVA, _RUNJAVAW, _RUNJDB, and _RUNJAVAC environment variables respectively.

The CLASSPATH is updated to also include CATALINA_HOME\bin\bootstrap.jar, which contains the classes that are needed by Tomcat during the startup process. In particular, this includes the org.apache.catalina.startup.Bootstrap class. Note that including bootstrap.jar on the CLASSPATH also automatically includes commons-daemon.jar, tomcat-juli.jar, and tomcat-coyote.jar, because the manifest file of bootstrap.jar lists these dependencies in its Class-Path attribute. If the JSSE_HOME environment variable is set, additional Java Secure Socket Extension (JSSE) JARs are also appended to the CLASSPATH. Secure Sockets Layer (SSL) is a technology that allows clients and servers to communicate over a secured connection where all data transmissions are encrypted by the sender. SSL also allows clients and servers to determine whether the other party is indeed who they say they are, using certificates. The JSSE API allows Java programs to create and use SSL connections. Though this API began life as a standalone extension, the JSSE classes have been integrated into the JDK since Java 1.4.

If the CATALINA_BASE variable is not set, it is defaulted to CATALINA_HOME. Similarly, if the Tomcat work directory location, CATALINA_TMPDIR, is not specified, then it is set to CATALINA_BASE\temp. Finally, if the file CATALINA_BASE\conf\logging.properties exists, then additional logging-related system properties are appended to the JAVA_OPTS environment variable.

All the CLASSPATH machinations described above have effectively limited the repository locations monitored by the System class loader. This is the class loader responsible for finding classes located on the CLASSPATH. At this point, our execution environment has largely been validated and configured. The script notifies the user of the current execution configuration by writing out the paths for CATALINA_BASE, CATALINA_HOME, and CATALINA_TMPDIR to the console. If we are starting up Tomcat in debug mode, then the JAVA_HOME variable is also written; else, JRE_HOME is emitted instead. These are the lines that we've grown accustomed to seeing when starting up Tomcat.
C:\tomcat\TOMCAT_6_0_20\output\build\bin>startup
Using CATALINA_BASE:   C:\tomcat\TOMCAT_6_0_20\output\build
Using CATALINA_HOME:   C:\tomcat\TOMCAT_6_0_20\output\build
Using CATALINA_TMPDIR: C:\tomcat\TOMCAT_6_0_20\output\build\temp
Using JRE_HOME:        C:\java\jdk1.6.0_14

With all this housekeeping done, the script is now ready to actually start the Tomcat instance.

Executing the requested command

This is where the actual action begins. The catalina script can be invoked with the following commands:

- debug [-security], which is used to start Catalina in a debugger
- jpda start, which is used to start Catalina under a JPDA debugger
- run [-security], which is used to start Catalina in the current window
- start [-security], which starts Catalina in a separate window
- stop, which is used to stop Catalina
- version, which prints the version of Tomcat

The use of a security manager, as determined by the optional -security argument, is out of scope for this article. The easiest way to understand this part of catalina.bat is to deconstruct the command line that is executed to start up the Tomcat instance. This command takes this general form (all in one line):

_EXECJAVA JAVA_OPTS CATALINA_OPTS JPDA_OPTS DEBUG_OPTS
    -Djava.endorsed.dirs="JAVA_ENDORSED_DIRS" -classpath "CLASSPATH"
    -Djava.security.manager -Djava.security.policy=="SECURITY_POLICY_FILE"
    -Dcatalina.base="CATALINA_BASE" -Dcatalina.home="CATALINA_HOME"
    -Djava.io.tmpdir="CATALINA_TMPDIR"
    MAINCLASS CMD_LINE_ARGS ACTION

Where:

- _EXECJAVA is the executable that should be used to execute our main class. This defaults to the Java application launcher, _RUNJAVA. However, if debug was supplied as a command-line argument to the script, this is set to _RUNJDB instead.
- MAINCLASS is set to org.apache.catalina.startup.Bootstrap.
- ACTION defaults to start, but is set to stop if the Tomcat instance is being stopped.
- CMD_LINE_ARGS are any arguments specified on the command line that follow the arguments that are consumed by catalina.
- SECURITY_POLICY_FILE defaults to CATALINA_BASE\conf\catalina.policy.
- JAVA_OPTS and CATALINA_OPTS are used to carry arguments, such as maximum heap memory settings or system properties, that are intended for the Java launcher. The difference between the two is that CATALINA_OPTS is cleared out when catalina is invoked with the stop command. In addition, as indicated by its name, the latter is targeted primarily at options for running a Tomcat instance.
- JPDA_OPTS sets the Java Platform Debugger Architecture (JPDA) options to support remote debugging of this Tomcat instance. The default options are set in the script. It chooses TCP/IP as the protocol used to connect to the debugger (transport=dt_socket), marks this JVM as a server application (server=y), sets the host and port number on which the server should listen for remote debugging requests (address=8000), and requires the application to run until it encounters a breakpoint (suspend=n).
- DEBUG_OPTS sets the -sourcepath flag when the Java Debugger is used to launch the Tomcat instance.

The other variables are set as seen in the previous section. At this point, control passes to the main() method in Bootstrap.java. This is where the steps that are unique to script-based startup end. The rest of this article follows along with the logic coded into Bootstrap.java and Catalina.java.
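In other words, after all the environment preparation, the script simply launches a JVM on Bootstrap's main() method with the requested action. Purely to make that hand-off concrete, this hypothetical launcher does roughly what the assembled command line above does (assuming bootstrap.jar and its manifest-referenced JARs are on the classpath; the paths are illustrative):

public class ManualStart {
    public static void main(String[] args) throws Exception {
        // System properties that the catalina script passes with -D flags.
        String home = "C:\\tomcat\\TOMCAT_6_0_20\\output\\build";
        System.setProperty("catalina.home", home);
        System.setProperty("catalina.base", home);
        System.setProperty("java.io.tmpdir", home + "\\temp");

        // The ACTION argument; "stop" would shut the instance down instead.
        org.apache.catalina.startup.Bootstrap.main(new String[] { "start" });
    }
}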


Editing DataGrids with Popup Windows in Flex

Packt
31 Dec 2009
4 min read
To start with, we'll create a DataGrid that will contain a first name and a last name. We will add a Panel container to place our DataGrid into. Here is what our initial app looks like:

<?xml version="1.0" encoding="utf-8"?>
<mx:Application layout="absolute">
    <mx:Panel title="Pop Up Window">
        <mx:DataGrid width="100%" height="100%">
            <mx:columns>
                <mx:DataGridColumn headerText="First Name"/>
                <mx:DataGridColumn headerText="Last Name"/>
            </mx:columns>
        </mx:DataGrid>
    </mx:Panel>
</mx:Application>

You'll notice the common Application tag along with the Panel. Immediately inside the Panel is our all-important DataGrid. Note, if you are not familiar with the basics of Flex, then stop here and head over to the Adobe site for an introduction. Although this tutorial is not complex, I won't be able to focus on the fundamentals. I've set the DataGrid width and height attributes to 100%. This will force it to expand to the width and height of the panel. You can make this application full screen by setting the same attributes on the Panel tag. Inside our DataGrid tag is the columns tag. Here we can describe what each column of our DataGrid will contain; in this case, one column for the first name and one column for the last name. Here is a first look at our app:

Adding Data

The ease of Flex has allowed us to create this simple user interface with little effort. Now we come to a point where we need to add data to the DataGrid. The easiest way to do this is to use the dataProvider attribute. We will add an ArrayCollection object to the script portion of our application to hold all the names that will appear in our DataGrid. The DataGrid and the accompanying ArrayCollection will look something like this:

<mx:DataGrid id="names" width="100%" height="100%" dataProvider="{namesDP}">
    <mx:columns>
        <mx:DataGridColumn headerText="First Name" dataField="firstName"/>
        <mx:DataGridColumn headerText="Last Name" dataField="lastName"/>
    </mx:columns>
</mx:DataGrid>

[Bindable]
public var namesDP:ArrayCollection = new ArrayCollection([
    {firstName:'Keith', lastName:'Lee'},
    {firstName:'Ira', lastName:'Glass'},
    {firstName:'Christopher', lastName:'Rossin'},
    {firstName:'Mary', lastName:'Little'},
    {firstName:'Charlie', lastName:'Wagner'},
    {firstName:'Cali', lastName:'Gonia'},
    {firstName:'Molly', lastName:'Ivans'},
    {firstName:'Amber', lastName:'Johnson'}]);

The dataProvider attribute binds the DataGrid to the namesDP ArrayCollection. The ArrayCollection constructor's argument is an array of objects, each of which has a firstName and a lastName property. If you wanted to expand on this application, you could add more data to this array. You should also notice that the DataGridColumn tags have an addition: we've set the dataField equal to the corresponding property in the dataProvider. The dataField is the real key to populating the DataGrid. Here is what our populated DataGrid looks like: We've now created the foundation of our application.

Activating the DoubleClick Event

By default, the DataGrid's doubleClick event is turned off. This means that when a user double-clicks an entry, nothing will happen. In order to tell the DataGrid to trigger the doubleClick event, we need to set the doubleClick and doubleClickEnabled attributes of the DataGrid tag. It looks like this:

<mx:DataGrid width="100%" height="100%" doubleClick="showPopUp(event)" doubleClickEnabled="true">

I've set the doubleClick attribute to a method called showPopUp. We've not written this method yet, but it will be responsible for displaying the popup window, which will allow editing of the DataGrid data.
For now, let's add the Script tag and an empty showPopUp method:

<mx:Script>
    <![CDATA[
        import mx.events.ItemClickEvent;

        public function showPopUp(event:MouseEvent):void {
            // show the popup
        }
    ]]>
</mx:Script>

The Script tag is a child of the Application tag.

Integrating Spring Framework with Hibernate ORM Framework: Part 1

Packt
31 Dec 2009
6 min read
Spring is a general-purpose framework that plays different roles in many areas of application architecture. One of these areas is persistence. Spring does not provide its own persistence framework. Instead, it provides an abstraction layer over JDBC and a variety of O/R mapping frameworks, such as iBATIS SQL Maps, Hibernate, JDO, Apache OJB, and Oracle TopLink. This abstraction allows a consistent, manageable data-access implementation.

Spring's abstraction layer abstracts the application from the connection factory, the transaction API, and the exception hierarchies used by the underlying persistence technology. Application code always uses the Spring API to work with connection factories, utilizes Spring strategies for transaction management, and relies on Spring's generic exception hierarchy to handle underlying exceptions. Spring sits between the application classes and the O/R mapping tool, undertakes transactions, and manages connection objects. It translates the underlying persistence exceptions thrown by Hibernate to meaningful, unchecked exceptions of type DataAccessException. Moreover, Spring provides IoC and AOP, which can be used in the persistence layer. Spring undertakes Hibernate's transactions and provides a more powerful, comprehensive approach to transaction management.

The Data Access Object pattern

Although you can obtain a Session object and connect to Hibernate anywhere in the application, it's recommended that all interactions with Hibernate be done only through distinct classes. Regarding this, there is a JEE design pattern called the DAO pattern. According to the DAO pattern, all persistence operations should be performed via specific classes, technically called DAO classes. These classes are used exclusively for communicating with the data tier. The purpose of this pattern is to separate persistence-related code from the application's business logic, which makes for more manageable and maintainable code, letting you change the persistence strategy flexibly without changing the business rules or workflow logic. The DAO pattern states that we should define a DAO interface corresponding to each DAO class. This DAO interface outlines the structure of a DAO class, defines all of the persistence operations that the business layer needs, and (in Spring-based applications) allows us to apply IoC to decouple the business layer from the DAO class.

The Service Facade pattern

In the implementation of the data access tier, the Service Facade pattern is always used in addition to the DAO pattern. This pattern prescribes an intermediate object, called a service object, between all business tier objects and DAO objects. The service object assembles the DAO methods to be managed as a unit of work. Note that only one service class is created for all the DAOs that are implemented in each use case. The service class uses instances of the DAO interfaces to interact with them. These instances are instantiated from the concrete DAO classes by the IoC container at runtime. Therefore, the service object is unaware of the actual DAO implementation details. Regardless of the persistence strategy your application uses (even if it uses direct JDBC), applying the DAO and Service Facade patterns to decouple application tiers is highly recommended.

Data tier implementation with Hibernate

Let's now see how the discussed patterns are applied to an application that uses Hibernate directly.
The following code shows a sample DAO interface:

package com.packtpub.springhibernate.ch13;

import java.util.Collection;

public interface StudentDao {
    public Student getStudent(long id);
    public Collection getAllStudents();
    public Collection getGraduatedStudents();
    public Collection findStudents(String lastName);
    public void saveStudent(Student std);
    public void removeStudent(Student std);
}

The following code shows a DAO class that implements this DAO interface:

package com.packtpub.springhibernate.ch13;

import java.util.Collection;

import org.hibernate.HibernateException;
import org.hibernate.Query;
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;

public class HibernateStudentDao implements StudentDao {

    SessionFactory sessionFactory;

    public Student getStudent(long id) {
        Student student = null;
        Session session = HibernateHelper.getSession();
        Transaction tx = null;
        try {
            tx = session.beginTransaction();
            student = (Student) session.get(Student.class, new Long(id));
            tx.commit();
            tx = null;
        } catch (HibernateException e) {
            if (tx != null) tx.rollback();
            throw e;
        } finally {
            session.close();
        }
        return student;
    }

    public Collection getAllStudents() {
        Collection allStudents = null;
        Session session = HibernateHelper.getSession();
        Transaction tx = null;
        try {
            tx = session.beginTransaction();
            Query query = session.createQuery(
                "from Student std order by std.lastName, std.firstName");
            allStudents = query.list();
            tx.commit();
            tx = null;
        } catch (HibernateException e) {
            if (tx != null) tx.rollback();
            throw e;
        } finally {
            session.close();
        }
        return allStudents;
    }

    public Collection getGraduatedStudents() {
        Collection graduatedStudents = null;
        Session session = HibernateHelper.getSession();
        Transaction tx = null;
        try {
            tx = session.beginTransaction();
            Query query = session.createQuery(
                "from Student std where std.status=1");
            graduatedStudents = query.list();
            tx.commit();
            tx = null;
        } catch (HibernateException e) {
            if (tx != null) tx.rollback();
            throw e;
        } finally {
            session.close();
        }
        return graduatedStudents;
    }

    public Collection findStudents(String lastName) {
        Collection students = null;
        Session session = HibernateHelper.getSession();
        Transaction tx = null;
        try {
            tx = session.beginTransaction();
            Query query = session.createQuery(
                "from Student std where std.lastName like ?");
            query.setString(0, lastName + "%"); // positional parameters are zero-based
            students = query.list();
            tx.commit();
            tx = null;
        } catch (HibernateException e) {
            if (tx != null) tx.rollback();
            throw e;
        } finally {
            session.close();
        }
        return students;
    }

    public void saveStudent(Student std) {
        Session session = HibernateHelper.getSession();
        Transaction tx = null;
        try {
            tx = session.beginTransaction();
            session.saveOrUpdate(std);
            tx.commit();
            tx = null;
        } catch (HibernateException e) {
            if (tx != null) tx.rollback();
            throw e;
        } finally {
            session.close();
        }
    }

    public void removeStudent(Student std) {
        Session session = HibernateHelper.getSession();
        Transaction tx = null;
        try {
            tx = session.beginTransaction();
            session.delete(std);
            tx.commit();
            tx = null;
        } catch (HibernateException e) {
            if (tx != null) tx.rollback();
            throw e;
        } finally {
            session.close();
        }
    }

    public void setSessionFactory(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }
}

As you can see, all the implemented methods follow the same routine: each obtains a Session object, begins a Transaction, performs its persistence operation, commits the transaction (rolling it back if an exception occurs), and finally closes the Session object.
Each method contains a good deal of boilerplate code that is very similar to that of the other methods. Although applying the DAO pattern to the persistence code leads to more manageable and maintainable code, the DAO classes still include much boilerplate. Each DAO method must obtain a Session instance, start a transaction, perform the persistence operation, and commit the transaction. Additionally, each DAO method includes its own duplicated exception-handling implementation. These are exactly the problems that motivate us to use Spring with Hibernate.

Template Pattern: To clean up the code and make it more manageable, Spring utilizes the Template pattern. In this pattern, a template object wraps all of the boilerplate, repetitive code, and the persistence calls are delegated to the template as part of its functionality. In the Hibernate case, HibernateTemplate extracts all of the boilerplate code, such as obtaining a Session, performing transactions, and handling exceptions.
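To see what this buys us, here is a rough sketch of how the StudentDao implementation above might collapse when rewritten on top of HibernateTemplate (assuming Spring 2.x with the org.springframework.orm.hibernate3 package; transaction demarcation is left to Spring's transaction management rather than coded by hand):

package com.packtpub.springhibernate.ch13;

import java.util.Collection;

import org.hibernate.SessionFactory;
import org.springframework.orm.hibernate3.HibernateTemplate;

public class SpringHibernateStudentDao implements StudentDao {

    private HibernateTemplate hibernateTemplate;

    // The SessionFactory is injected by the IoC container.
    public void setSessionFactory(SessionFactory sessionFactory) {
        this.hibernateTemplate = new HibernateTemplate(sessionFactory);
    }

    public Student getStudent(long id) {
        return (Student) hibernateTemplate.get(Student.class, new Long(id));
    }

    public Collection getAllStudents() {
        return hibernateTemplate.find(
            "from Student std order by std.lastName, std.firstName");
    }

    public Collection getGraduatedStudents() {
        return hibernateTemplate.find("from Student std where std.status=1");
    }

    public Collection findStudents(String lastName) {
        return hibernateTemplate.find(
            "from Student std where std.lastName like ?", lastName + "%");
    }

    public void saveStudent(Student std) {
        hibernateTemplate.saveOrUpdate(std);
    }

    public void removeStudent(Student std) {
        hibernateTemplate.delete(std);
    }
}

Session handling, exception translation to DataAccessException, and the try/catch/finally scaffolding all disappear into the template.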


Starting Up Tomcat 6: Part 2

Packt
31 Dec 2009
8 min read
Bootstrapping the embedded container

As we saw earlier, Bootstrap is simply a convenience class that is used to run the Embedded class, or rather to run Catalina, which subclasses Embedded. The Catalina class is intended to add the ability to process a server.xml file to its parent class. It even exposes a main() method, so you can invoke it directly with appropriate command-line arguments. Bootstrap uses its newly constructed serverLoader to load the Catalina class, which is then instantiated. It delegates the loading process to this Catalina instance's load() method. This method updates the catalina.base and catalina.home system properties to absolute references, verifies that the working directory is set appropriately, and initializes the naming system, which is Tomcat's implementation of the JNDI API.

For now, all we need to note is that it indicates that JNDI is enabled by setting the catalina.useNaming system property to true, and by prefixing the Context.URL_PKG_PREFIXES system property with the package org.apache.naming using a colon delimiter. The Context.URL_PKG_PREFIXES property indicates a list of fully qualified package prefixes for URL context factories. Setting org.apache.naming as the first entry makes it the first URL context factory implementation that will be located. For the java:comp/env Environment Naming Context (ENC), the actual class name for the URL context factory implementation is generated as org.apache.naming.java.javaURLContextFactory. If Context.INITIAL_CONTEXT_FACTORY is currently not set for this environment, then this is set as the default INITIAL_CONTEXT_FACTORY to be used.

Bootstrapping the Tomcat component hierarchy

The configuration for a Tomcat instance is found in the conf\server.xml file. This file is now processed, converting each element found into a Java object. The net result at the end of this processing is a Java object tree that mirrors the configuration file. This conversion process is facilitated by the Apache Commons Digester project (http://commons.apache.org/digester/), an open source Commons project that allows you to harness the power of a SAX parser while at the same time avoiding the complexity that comes with event-driven parsing.

Commons Digester

The Digester project was originally devised as a way of unmarshalling the struts-config.xml configuration file for Struts, but was moved out to a Commons project due to its general-purpose usefulness. The basic principle behind the Digester is very simple. It takes an XML document and a RuleSet document as inputs, and generates a graph of Java objects that represents the structure defined in the XML instance document. There are three key concepts that come into play when using the Digester: a pattern, a rule, and an object stack.

The pattern

As the Digester parses the input XML instance document, it keeps track of the elements it visits. Each element is identified by its parent's name followed by a forward slash ('/') and then by its name. For instance, in the example document below, the root element is represented by the pattern rolodex. The two <contact> elements are represented by the pattern rolodex/contact, the <company> elements are represented by the pattern rolodex/contact/company, and so on.
<rolodex type="paperSales">
    <contact id="1">
        <firstname>Damodar</firstname>
        <lastname>Chetty</lastname>
        <company>Software Engineering Solutions, Inc.</company>
    </contact>
    <contact id="2">
        <firstname>John</firstname>
        <lastname>Smith</lastname>
        <company>Ingenuitix, Inc.</company>
    </contact>
</rolodex>

The rule

A rule specifies the action(s) that the Digester should take when a particular pattern is encountered. The common rules you will encounter are:

- Creational actions (create an instance of a given class to represent this XML element)
- Property setting actions (call setters on the Java object representing this XML element, passing in the value of either a child element or an attribute)
- Method invocation actions (call the specified method on the Java object representing this element, passing in the specified parameters)
- Object linking actions (set an object reference by calling a setter on one object while passing in the other as an argument)

The object stack

As objects are created using the creational actions discussed above, the Digester pushes them onto the top of its internal stack. All actions typically affect the object at the top of the stack. A creational action will automatically pop the element on the top of the stack when the end tag for the pattern is detected.

Using the Digester

The typical sequence of actions is to create an object using a creational action, set its properties using a property setting action, and, once the object is fully formed, to pop it off the top of the stack by linking it to its parent, which is usually just below it on the stack. Once the child has been popped off, the parent is once again at the top of the stack. This repeats as additional child objects are created, initialized, linked, and popped. Once all the children are processed and the parent object is fully initialized, the parent itself is popped off the stack, and we are done.

You instantiate an org.apache.commons.digester.Digester by invoking the createDigester() method of org.apache.commons.digester.xmlrules.DigesterLoader and passing it the URL of the file containing the patterns and rules. Patterns and rules can also be specified programmatically by calling methods directly on the Digester instance. However, defining them in a separate XML RuleSet instance document is much more modular, as it extracts rule configuration out of program code, making the code more readable and maintainable. Then, you invoke the parse() method of a Digester instance and pass it the actual XML instance document. The Digester uses its configured rules to convert elements in the instance document into Java objects.

The server.xml Digester

The Catalina instance creates a Digester to process the server.xml file. Every element in this file is converted into an instance of the appropriate class, its properties are set based on configuration information in this file, and connections between the objects are set up, until what you are left with is a functioning framework of classes. This ability to configure the structure of cooperating classes using a declarative approach makes it easy to customize a Tomcat installation with very little effort. The createStartDigester() method in Catalina does the work of instantiating a new Digester and registering patterns and rules with it. The Catalina instance is then pushed to the top of the Digester stack, making it the root ancestor for all the elements parsed from the server.xml document.
The server.xml Digester

The Catalina instance creates a Digester to process the server.xml file. Every element in this file is converted into an instance of the appropriate class, its properties are set based on configuration information in the file, and connections between the objects are set up, until what you are left with is a functioning framework of classes. This ability to configure the structure of cooperating classes using a declarative approach makes it easy to customize a Tomcat installation with very little effort. The createStartDigester() method in Catalina does the work of instantiating a new Digester and registering patterns and rules with it. The Catalina instance is then pushed to the top of the Digester stack, making it the root ancestor for all the elements parsed from the server.xml document. The rules can be described as follows:

Pattern: Server
  Creational action: Instantiates an org.apache.catalina.core.StandardServer.
  Set properties action: Copies attribute values over to the topmost object on the stack, using mutator methods named after the attributes.
  Object linking action: Invokes setServer() to set this newly minted Server instance on the Catalina instance found on the stack.

Pattern: Server/GlobalNamingResources
  Creational action: Instantiates an org.apache.catalina.deploy.NamingResources.
  Set properties action: Copies attribute values from this element over to the topmost object on the stack.
  Object linking action: Sets this newly instantiated object on the StandardServer instance at the top of the stack, by invoking its setGlobalNamingResources() method.

Pattern: Server/Listener
  Creational action: Instantiates the class specified by the fully qualified class name provided as the className attribute.
  Set properties action: Copies attributes from this element.
  Object linking action: Invokes addLifecycleListener() on the StandardServer instance at the top of the stack, passing in this new instance.

Pattern: Server/Service
  Creational action: Instantiates an org.apache.catalina.core.StandardService.
  Set properties action: Copies attributes from this element.
  Object linking action: Invokes addService() on the StandardServer instance at the top of the stack, passing in this newly minted instance.

Pattern: Server/Service/Listener
  Creational action: Instantiates the class specified by the fully qualified class name provided as the className attribute.
  Set properties action: Copies attributes from this element.
  Object linking action: Invokes addLifecycleListener() on the StandardService instance at the top of the stack, passing in this listener instance.

Pattern: Server/Service/Executor
  Creational action: Instantiates the class org.apache.catalina.core.StandardThreadExecutor.
  Set properties action: Copies attributes from this element.
  Object linking action: Invokes addExecutor() with this instance, on the StandardService instance at the top of the stack.

Pattern: Server/Service/Connector
  Creational action: Instantiates a Connector, using the custom rule class org.apache.catalina.startup.ConnectorCreateRule.
  Set properties action: Copies all attributes from this element except for the executor property.
  Object linking action: Invokes addConnector(), passing in this instance, on the StandardService instance at the top of the stack.

Pattern: Server/Service/Connector/Listener
  Creational action: Instantiates the class specified by the fully qualified class name provided as the className attribute.
  Set properties action: Copies attributes from this element.
  Object linking action: Invokes addLifecycleListener(), passing in this instance, on the Connector instance at the top of the stack.

Pattern: Server/Service/Engine
  Sets the Engine instance's parent class loader to the serverLoader.
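For comparison with the RuleSet approach, the first two rows of this table roughly correspond to programmatic registrations along the following lines. This is an illustrative sketch, not Tomcat's exact source:

import org.apache.commons.digester.Digester;

// Sketch only: how the "Server" and "Server/Service" rules above might be
// registered programmatically. Tomcat's actual createStartDigester() differs
// in detail (for example, it uses ConnectorCreateRule for Connector elements).
public class ServerRuleSketch {
    public static Digester createStartDigester() {
        Digester digester = new Digester();

        // Pattern "Server": create a StandardServer (a className attribute,
        // if present, overrides the default class), copy its attributes, and
        // link it to the Catalina object beneath it on the stack.
        digester.addObjectCreate("Server",
                "org.apache.catalina.core.StandardServer", "className");
        digester.addSetProperties("Server");
        digester.addSetNext("Server", "setServer", "org.apache.catalina.Server");

        // Pattern "Server/Service": create a StandardService and register it
        // with the Server object that is now below it on the stack.
        digester.addObjectCreate("Server/Service",
                "org.apache.catalina.core.StandardService", "className");
        digester.addSetProperties("Server/Service");
        digester.addSetNext("Server/Service", "addService",
                "org.apache.catalina.Service");

        return digester;
    }
}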

Creating a Web Application on JBoss AS 5

Packt
31 Dec 2009
7 min read
Ever wonder what the first message sent over the Internet was? At 22:30 hours on October 29, 1969, a message was transmitted using ARPANET (the predecessor of the global Internet) on a host-to-host connection. It was meant to transmit "login". However, it transmitted just "lo" and crashed.

Developing web layout

The basic component of any Java web application is the servlet. Born in the middle of the 90s, servlets quickly gained success against their competitors, the CGI scripts, because of some innovative features, especially the ability to execute requests concurrently, without the overhead of creating a new process for each request. However, a few things were missing. For example, the servlet API did not provide any API specifically for creating the client GUI. This resulted in multiple ways of creating the presentation tier, generally with tag libraries that differed from project to project and from developer to developer.

The second thing missing in the servlet specification was a clear distinction between the presentation tier and the backend. A plethora of web frameworks tried to fill this gap; in particular, the Struts framework effectively realized a clean separation of the model (application logic that interacts with a database) from the view (HTML pages presented to the client) and the controller (instance that passes information between view and model). The limitation of these frameworks was that, even though they realized a complete modular abstraction, they still exposed the HttpServletRequest and HttpSession objects to their action(s), and their actions in turn needed to accept interface contracts such as ActionForm, ActionMapping, and so on.

JavaServer Faces, which emerged on the stage a few years later, pursued a different approach. Unlike request-driven Model-View-Controller (MVC) web frameworks, JSF chose a component-based approach that ties the user interface component to a well-defined request processing lifecycle. This greatly simplifies the development of web applications. The JSF specification allows presentation components to be POJOs. This creates a cleaner separation from the servlet layer and makes testing easier by not requiring the POJOs to depend on the servlet classes. In the following sections, we will describe how to create a web layout for our application store using the JSF technology. For an exhaustive explanation of the JSF framework, we suggest you visit the JSF homepage at http://java.sun.com/javaee/javaserverfaces/.

Installing JSF on JBoss AS

JBoss AS already ships with the JSF libraries, so the good news is that you don't need to download or install them in the application server. There are different implementations of the JSF libraries. Earlier JBoss releases adopted the Apache MyFaces library. JBoss AS 4.2 and 5.x ship with the Common Development and Distribution License (CDDL) implementation (now called "Project Mojarra") of the JSF 1.2 specification that is available from the java.net open source community. Switching to another JSF implementation is possible, though. All you have to do is package your JSF libraries with your web application and configure your web.xml to ignore the JBoss built-in implementation:

<context-param>
  <param-name>org.jboss.jbossfaces.WAR_BUNDLES_JSF_IMPL</param-name>
  <param-value>true</param-value>
</context-param>

We will start by creating a new JSF project. From the File menu, select New | Other | JBoss Tools Web | JSF | JSF Web project.
The JSF project wizard will display, requesting the Project Name, the JSF Environment, and the default starting Template. Choose AppStoreWeb as the project name, and check that the JSF Environment used is JSF 1.2. You can leave all other options at the defaults and click Finish. Eclipse will now suggest that you switch to the Web Projects view that logically assembles all JSF components. (It seems that the current release of the plugin doesn't register your choice, so you have to manually click on the Web Projects tab.)

The key configuration file of a JSF application is faces-config.xml, contained in the Configuration folder. Here you declare all navigation rules of the application and the JSF managed beans. Managed beans are simple POJOs that provide the logic for initializing and controlling JSF components, and for managing data across page requests, user sessions, or the application as a whole. Adding JSF functionality also requires adding some information to your web.xml file so that all requests ending with a certain suffix are intercepted by the Faces Servlet. Let's have a look at the web.xml configuration file:

<?xml version="1.0"?>
<web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
                             http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd">
  <display-name>AppStoreWeb</display-name>
  <context-param>
    <param-name>javax.faces.STATE_SAVING_METHOD</param-name>
    <param-value>server</param-value>
  </context-param>
  <context-param> [1]
    <param-name>com.sun.faces.enableRestoreView11Compatibility</param-name>
    <param-value>true</param-value>
  </context-param>
  <listener>
    <listener-class>com.sun.faces.config.ConfigureListener</listener-class>
  </listener>
  <!-- Faces Servlet -->
  <servlet>
    <servlet-name>Faces Servlet</servlet-name>
    <servlet-class>javax.faces.webapp.FacesServlet</servlet-class>
    <load-on-startup>1</load-on-startup>
  </servlet>
  <!-- Faces Servlet Mapping -->
  <servlet-mapping>
    <servlet-name>Faces Servlet</servlet-name>
    <url-pattern>*.jsf</url-pattern>
  </servlet-mapping>
  <login-config>
    <auth-method>BASIC</auth-method>
  </login-config>
</web-app>

The context-param pointed out here [1] is not added by default when you create a JSF application. However, it needs to be added, or else you'll stumble into an annoying ViewExpiredException when your session expires (JSF 1.2).

Setting up navigation rules

In the first step, we will define the navigation rules for our AppStore. A minimalist approach would require a homepage that displays the orders, along with two additional pages for inserting new customers and new orders respectively. Let's add the following navigation rules to the faces-config.xml:

<faces-config>
  <navigation-rule>
    <from-view-id>/home.jsp</from-view-id> [1]
    <navigation-case>
      <from-outcome>newCustomer</from-outcome> [2]
      <to-view-id>/newCustomer.jsp</to-view-id>
    </navigation-case>
    <navigation-case>
      <from-outcome>newOrder</from-outcome> [3]
      <to-view-id>/newOrder.jsp</to-view-id>
    </navigation-case>
  </navigation-rule>
  <navigation-rule>
    <from-view-id></from-view-id> [4]
    <navigation-case>
      <from-outcome>home</from-outcome>
      <to-view-id>/home.jsp</to-view-id>
    </navigation-case>
  </navigation-rule>
</faces-config>

In a navigation rule, you can have one from-view-id, which is the (optional) starting page, and one or more landing pages that are tagged as to-view-id. The from-outcome determines the navigation flow. Think of this parameter as a Struts forward: instead of embedding the landing page in the JSP/servlet, you simply declare a virtual path in your JSF beans.
Therefore, our starting page will be home.jsp [1], which has two possible links: the newCustomer.jsp form [2] and the newOrder.jsp form [3]. At the bottom, there is a navigation rule that is valid across all pages [4]. Every page requesting the home outcome will be redirected to the homepage of the application. These JSP pages will be created in a minute, so don't worry if the Eclipse validator complains about the missing pages. This configuration can also be examined from the Diagram tab of your faces-config.xml.

The next piece of code that we will add to the configuration is the JSF managed bean declaration. You need to declare each bean here that will be referenced by JSF pages. Add the following code snippet at the top of your faces-config.xml (just before the navigation rules):

<managed-bean>
  <managed-bean-name>manager</managed-bean-name> [1]
  <managed-bean-class>com.packpub.web.StoreManagerJSFBean</managed-bean-class> [2]
  <managed-bean-scope>request</managed-bean-scope> [3]
</managed-bean>

The <managed-bean-name> [1] element will be used by your JSF pages to reference your beans. The <managed-bean-class> [2] is obviously the corresponding class. The managed beans can then be stored within the request, session, or application scope, depending on the value of the <managed-bean-scope> element [3].
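The StoreManagerJSFBean class itself is not listed in this excerpt. As a rough sketch (our assumption, not the book's code), a request-scoped bean matching the outcomes declared in the navigation rules above might look like this:

package com.packpub.web;

// Hypothetical sketch of the managed bean declared above; the real class
// would also expose the store data consumed by the JSF pages.
public class StoreManagerJSFBean {

    // Action methods return the logical outcomes declared in faces-config.xml;
    // the navigation rules map each outcome to the page rendered next.
    public String newCustomer() {
        return "newCustomer"; // navigates to /newCustomer.jsp
    }

    public String newOrder() {
        return "newOrder";    // navigates to /newOrder.jsp
    }

    public String home() {
        return "home";        // navigates back to /home.jsp
    }
}

A command button in a page would then reference one of these methods with a method expression, for example action="#{manager.newOrder}", using the bean name declared in <managed-bean-name>.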

An Introduction to JSF: Part 1

Packt
30 Dec 2009
6 min read
While the main focus of this article is learning how to use JSF UI components, and not to cover the JSF framework in complete detail, a basic understanding of fundamental JSF concepts is required before we can proceed. Therefore, by way of introduction, let's look at a few of the building blocks of JSF applications: the Model-View-Controller architecture, the JSF request processing lifecycle, managed beans, EL expressions, UI components, converters, validators, and internationalization (I18N).

The Model-View-Controller architecture

Like many other web frameworks, JSF is based on the Model-View-Controller (MVC) architecture. The MVC pattern promotes the idea of "separation of concerns", the decoupling of the presentation, business, and data access tiers of an application.

The Model in MVC represents "state" in the application. This includes the state of user interface components (for example: the selection state of a radio button group, the enabled state of a button, and so on) as well as the application's data (the customers, products, invoices, orders, and so on). In a JSF application, the Model is typically implemented using Plain Old Java Objects (POJOs) based on the JavaBeans API. These classes are also described as the "domain model" of the application, and act as Data Transfer Objects (DTOs) to transport data between the various tiers of the application. JSF enables direct data binding between user interface components and domain model objects using the Expression Language (EL), greatly simplifying data transfer between the View and the Model in a Java web application.

The View in MVC represents the user interface of the application. The View is responsible for rendering data to the user, and for providing user interface components such as labels, text fields, buttons, radios, and checkboxes that support user interaction. As users interact with components in the user interface, events are fired by these components and delivered to Controller objects by the MVC framework. In this respect, JSF has much in common with a desktop GUI toolkit such as Swing or AWT: we can think of JSF as a GUI toolkit for building web applications. JSF components are organized in the user interface declaratively using UI component tags in a JSF view (typically a JSP or Facelets page).

The Controller in MVC represents an object that responds to user interface events and queries or modifies the Model. When a JSF page is displayed in the browser, the UI components declared in the markup are rendered as HTML controls. The JSF markup supports the JSF Expression Language (EL), a scripting language that enables UI components to bind to managed beans for data transfer and event handling. We use value expressions such as #{backingBean.name} to connect UI components to managed bean properties for data binding, and we use method expressions such as #{backingBean.sayHello} to register an event handler (a managed bean method with a specific signature) on a UI component.

In a JSF application, the entity classes in our domain model act as the Model in MVC terms, a JSF page provides the View, and managed beans act as Controller objects. The JSF EL provides the scripting language necessary to tie the Model, View, and Controller concepts together. There is an important variation of the Controller concept that we should discuss before moving forward.
Like the Struts framework, JSF implements what is known as the "Front Controller" pattern, where a single class behaves like the primary request handler or event dispatcher for the entire system. In the Struts framework, the ActionServlet performs the role of the Front Controller, handling all incoming requests and delegating request processing to application-defined Action classes. In JSF, the FacesServlet implements the Front Controller pattern, receiving all incoming HTTP requests and processing them in a sophisticated chain of events known as the JSF request processing lifecycle.

The JSF Request Processing Lifecycle

In order to understand the interplay between JSF components, converters, validators, and managed beans, let's take a moment to discuss the JSF request processing lifecycle. The JSF lifecycle includes six phases:

1. Restore/create view – The UI component tree for the current view is restored from a previous request, or constructed for the first time.
2. Apply request values – The incoming form parameter values are stored in server-side UI component objects.
3. Conversion/Validation – The form data is converted from text to the expected Java data types and validated accordingly (for example: required fields, length and range checks, valid dates, and so on).
4. Update model values – If conversion and validation were successful, the data is now stored in our application's domain model.
5. Invoke application – Any event handler methods in our managed beans that were registered with UI components in the view are executed.
6. Render response – The current view is re-rendered in the browser, or another view is displayed instead (depending on the navigation rules for our application).

To summarize the JSF request handling process: the FacesServlet (the Front Controller) first handles an incoming request sent by the browser for a particular JSF page by attempting to restore the server-side UI component tree representing the logical structure of the current View, or by creating it for the first time (Phase 1). Incoming form data sent by the browser is stored in the components, such as text fields, radio buttons, and checkboxes, in the UI component tree (Phase 2). The data is then converted from Strings to other Java types and is validated using both standard and custom converters and validators (Phase 3). Once the data is converted and validated successfully, it is stored in the application's Model by calling the setter methods of any managed beans associated with the View (Phase 4). After the data is stored in the Model, the action method (if any) associated with the UI component that submitted the form is called, along with any other event listener methods that were registered with components in the form (Phase 5). At this point, the application's logic is invoked and the request may be handled in an application-defined way.

Once the Invoke Application phase is complete, the JSF application sends a response back to the web browser, possibly displaying the same view or perhaps another view entirely (Phase 6). The renderers associated with the UI components in the view are invoked and the logical structure of the view is transformed into a particular presentation format or markup language. Most commonly, JSF views are rendered as HTML using the framework's default RenderKit, but JSF does not require pages to be rendered only in HTML.
In fact, JSF was designed to be a presentation technology neutral framework, meaning that views can be rendered according to the capabilities of different client devices. For example, we can render our pages in HTML for web browsers and in WML for PDAs and wireless devices.
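To tie these MVC concepts together, here is a minimal sketch (ours, not the article's) of a managed bean that would back the value expression #{backingBean.name} and the method expression #{backingBean.sayHello} used above; we assume the bean is registered under the name backingBean:

// Hypothetical managed bean behind #{backingBean.name} and
// #{backingBean.sayHello}; it would be declared in faces-config.xml
// (or, in later JSF versions, via annotations).
public class BackingBean {

    private String name; // Model state bound to a UI component via #{backingBean.name}

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    // Target of the method expression #{backingBean.sayHello}. The returned
    // outcome feeds the navigation rules; returning null redisplays the view.
    public String sayHello() {
        System.out.println("Hello, " + name);
        return null;
    }
}

During the lifecycle described above, Phase 4 calls setName() with the converted, validated form value, and Phase 5 invokes sayHello() if it was registered as the action of the submitting component.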

An Introduction to JSF: Part 2

Packt
30 Dec 2009
7 min read
Standard JSF Validators

The JSF Core tag library also includes a number of built-in validators. These validator tags can be registered with UI components to verify that required fields are completed by the user, that numeric values are within an acceptable range, and that text values are of a certain length. For more specific validation scenarios, we can also write our own custom validators. User input validation happens immediately after data conversion during the JSF request lifecycle.

Validating the Length of a Text Value

JSF includes a built-in validator that can be used to ensure that a text value entered by the user is between an expected minimum and maximum length. The following example demonstrates using the <f:validateLength> tag's minimum and maximum attributes to check that the password entered by the user in the password field is exactly 8 characters long. It also demonstrates how to use the label attribute of certain JSF input components (introduced in JSF 1.2) to render a localizable validation message.

JSF Validation Messages
The JSF framework includes predefined validation messages for different input components and validation scenarios. These messages are defined in a message bundle (properties file) included in the JSF implementation JAR file. Many of these messages are parameterized: since JSF 1.2, a UI component's label can be inserted into these messages to provide more detailed information to the user. The default JSF validation messages can be overridden by specifying the same message bundle keys in the application's message bundle. We will see an example of customizing JSF validation messages below.

Notice that we also set the maxlength attribute of the <h:inputSecret> tag to limit the input to 8 characters. This does not, however, ensure that the user enters a minimum of 8 characters. Therefore, the <f:validateLength> validator tag is required.

<f:view>
  <h:form>
    <h:outputLabel value="Please enter a password (must be 8 characters): " />
    <h:inputSecret maxlength="8" id="password"
                   value="#{backingBean.password}" label="Password">
      <f:validateLength minimum="8" maximum="8" />
    </h:inputSecret>
    <h:commandButton value="Submit" /><br />
    <h:message for="password" errorStyle="color:red" />
  </h:form>
</f:view>

Validating a Required Field

The following example demonstrates how to use the built-in JSF validators to ensure that a text field is filled out before the form is processed:

<f:view>
  <h:form>
    <h:outputLabel value="Please enter a number: " />
    <h:inputText id="number" label="Number"
                 value="#{backingBean.price}" required="#{true}" />
    <h:commandButton value="Submit" /><br />
    <h:message for="number" errorClass="error" />
  </h:form>
</f:view>

When a required field is submitted empty, JSF renders a validation error message. We render this message using an <h:message> tag with a for attribute set to the ID of the text field component. We have also overridden the default JSF validation message for required fields by specifying the following message keys in our message bundle. We will discuss message bundles and internationalization (I18N) shortly.

javax.faces.component.UIInput.REQUIRED=Required field.
javax.faces.component.UIInput.REQUIRED_detail=Please fill in this field.

Validating a numeric range

The JSF Core <f:validateLongRange> and <f:validateDoubleRange> tags can be used to validate numeric user input.
The following example demonstrates how to use the <f:validateLongRange> tag to ensure that an integer value entered by the user is between 1 and 10.

<f:view>
  <h:form>
    <h:outputLabel value="Please enter a number between 1 and 10: " />
    <h:inputText id="number" value="#{backingBean.number}" label="Number">
      <f:validateLongRange minimum="1" maximum="10" />
    </h:inputText>
    <h:commandButton value="Submit" /><br />
    <h:message for="number" errorStyle="color:red" />
    <h:outputText value="You entered: #{backingBean.number}"
                  rendered="#{backingBean.number ne null}" />
  </h:form>
</f:view>

When an invalid value is entered, the value of the text field's label attribute is interpolated into the standard JSF validation message. Validating a floating point number is similar to validating an integer. The following example demonstrates how to use the <f:validateDoubleRange> tag to ensure that a floating point number is between 0.0 and 1.0.

<f:view>
  <h:form>
    <h:outputLabel value="Please enter a floating point number between 0 and 1: " />
    <h:inputText id="number" value="#{backingBean.percentage}" label="Percent">
      <f:validateDoubleRange minimum="0.0" maximum="1.0" />
    </h:inputText>
    <h:commandButton value="Submit" /><br />
    <h:message for="number" errorStyle="color:red" />
    <h:outputText value="You entered: "
                  rendered="#{backingBean.percentage ne null}" />
    <h:outputText value="#{backingBean.percentage}"
                  rendered="#{backingBean.percentage ne null}">
      <f:convertNumber type="percent" maxFractionDigits="2" />
    </h:outputText>
  </h:form>
</f:view>

Registering a custom validator

JSF also supports defining custom validation classes to provide more specialized user input validation. To create a custom validator, we first need to implement the javax.faces.validator.Validator interface. Implementing a custom validator in JSF is straightforward. In this example, we check whether a date supplied by the user represents a valid birthdate. As most humans do not live more than 120 years, we reject any date that is more than 120 years ago. The important thing to note from this code example is not the validation logic itself, but what to do when the validation fails. Note that we construct a FacesMessage object with an error message and then throw a ValidatorException.
package chapter1.validator;

import java.util.Calendar;
import java.util.Date;

import javax.faces.application.FacesMessage;
import javax.faces.component.UIComponent;
import javax.faces.context.FacesContext;
import javax.faces.validator.Validator;
import javax.faces.validator.ValidatorException;

public class CustomDateValidator implements Validator {

    public void validate(FacesContext context, UIComponent component,
            Object object) throws ValidatorException {
        if (object instanceof Date) {
            Date date = (Date) object;
            Calendar calendar = Calendar.getInstance();
            calendar.roll(Calendar.YEAR, -120); // roll the year back 120 years
            if (date.before(calendar.getTime())) {
                FacesMessage msg = new FacesMessage();
                msg.setSummary("Invalid birthdate: " + date);
                msg.setDetail("The date entered is more than 120 years ago.");
                msg.setSeverity(FacesMessage.SEVERITY_ERROR);
                throw new ValidatorException(msg);
            }
        }
    }
}

We have to declare our custom validator in faces-config.xml as follows, giving the validator an ID of customDateValidator:

<validator>
  <description>This birthdate validator checks a date to make sure it is
    within the last 120 years.</description>
  <display-name>Custom Date Validator</display-name>
  <validator-id>customDateValidator</validator-id>
  <validator-class>
    chapter1.validator.CustomDateValidator
  </validator-class>
</validator>

Next, we register our custom validator on a JSF UI component using the <f:validator> tag. This tag has a validatorId attribute that expects the ID of a custom validator declared in faces-config.xml. Notice in the following example that we are also registering the standard JSF <f:convertDateTime> converter on the same component. This ensures that the value entered by the user is first converted to a java.util.Date object before it is passed to our custom validator.

<h:inputText id="name" value="#{backingBean.date}">
  <f:convertDateTime type="date" pattern="M/d/yyyy" />
  <f:validator validatorId="customDateValidator" />
</h:inputText>

Many JSF UI component tags have both a converter and a validator attribute that accept EL method expressions. These attributes provide another way to register custom converters and validators implemented in managed beans on UI components.
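As a sketch of that managed-bean style (our example, with an assumed bean named backingBean), a validation method only needs the standard three-argument signature; it is then referenced from the component with validator="#{backingBean.validateBirthdate}":

package chapter1.validator;

import java.util.Calendar;
import java.util.Date;

import javax.faces.application.FacesMessage;
import javax.faces.component.UIComponent;
import javax.faces.context.FacesContext;
import javax.faces.validator.ValidatorException;

// Hypothetical managed bean showing the method-expression style of validation.
// No faces-config <validator> entry is needed; the component's validator
// attribute points straight at this method.
public class BackingBean {

    public void validateBirthdate(FacesContext context, UIComponent component,
            Object value) throws ValidatorException {
        if (value instanceof Date) {
            Date date = (Date) value;
            Calendar cutoff = Calendar.getInstance();
            cutoff.add(Calendar.YEAR, -120); // same 120-year rule as above
            if (date.before(cutoff.getTime())) {
                FacesMessage msg = new FacesMessage(FacesMessage.SEVERITY_ERROR,
                        "Invalid birthdate: " + date,
                        "The date entered is more than 120 years ago.");
                throw new ValidatorException(msg);
            }
        }
    }
}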

An Introduction to Hibernate and Spring: Part 1

Packt
29 Dec 2009
4 min read
This article by Ahmad Seddighi introduces Spring and Hibernate, explaining what persistence is, why it is important, and how it is implemented in Java applications. It provides a theoretical discussion of Hibernate and how Hibernate solves problems related to persistence. Finally, we take a look at Spring and the role of Spring in persistence.

Hibernate and Spring are open-source Java frameworks that simplify developing Java/JEE applications, from simple stand-alone applications running on a single JVM to complex enterprise applications running on full-blown application servers. Hibernate and Spring allow developers to produce scalable, reliable, and effective code. Both frameworks support declarative configuration and work with a POJO (Plain Old Java Object) programming model (discussed later in this article), minimizing the dependence of application code on the frameworks, and making development more productive and portable.

Although the aims of these frameworks partially overlap, for the most part each is used for a different purpose. The Hibernate framework aims to solve the problems of managing data in Java: those problems which are not fully solved by the Java persistence API, JDBC (Java Database Connectivity), persistence providers, DBMS (Database Management Systems), and their mediator language, SQL (Structured Query Language). In contrast, Spring is a multitier framework that is not dedicated to a particular area of application architecture. However, Spring does not provide its own solution for concerns such as persistence, for which good solutions already exist. Rather, Spring unifies preexisting solutions under its consistent API and makes them easier to use. As mentioned, one of these areas is persistence. Spring can be integrated with a persistence solution, such as Hibernate, to provide an abstraction layer over the persistence technology, and produce more portable, manageable, and effective code. Furthermore, Spring provides other services spread over the application architecture, such as inversion of control and aspect-oriented programming (explained later in this article), decoupling the application's components and modularizing common behaviors.

This article looks at the motivation and goals for Hibernate and Spring. It begins with an explanation of why Hibernate is needed, where it can be used, and what it can do. We'll take a quick look at Hibernate's alternatives, exploring their advantages and disadvantages. I'll outline the valuable features that Hibernate offers and explain how it can solve the problems of the traditional approach to Java persistence. The discussion continues with Spring. I'll explain what Spring is, what services it offers, and how it can help to develop a high-quality data access layer with Hibernate.

Persistence management in Java

Persistence has long been a challenge in the enterprise community. Many persistence solutions, from primitive file-based approaches to modern object-oriented databases, have been presented. For any of these approaches, the goal is to provide reliable, efficient, flexible, and scalable persistence. Among these competing solutions, relational databases (because of certain advantages) have been most widely accepted in the IT world. Today, almost all enterprise applications use relational databases. A relational database is an application that provides the persistence service.
It provides many persistence features, such as indexing data to provide speedy searches; solves the relevant problems, such as protecting data from unauthorized access; and handles many complications, such as preserving relationships among data. Creating, modifying, and accessing relational databases is fairly simple. All such databases present data in two-dimensional tables and support SQL, which is relatively easy to learn and understand. Moreover, they provide other services, such as transactions and replication. These advantages are enough to ensure the popularity of relational databases.

To provide support for relational databases in Java, the JDBC API was developed. JDBC allows Java applications to connect to relational databases, express their persistence purpose as SQL expressions, and transmit data to and from databases. Using this API, SQL statements can be passed to the database, and the results can be returned to the application, all through a driver.

The mismatch problem

JDBC handles many persistence issues and problems in communicating with relational databases. It also provides the needed functionality for this purpose. However, there remains an unsolved problem in Java applications: Java applications are essentially object-oriented programs, whereas relational databases store data in a relational form. While applications use object-oriented forms of data, databases represent data in two-dimensional table form. This situation leads to the so-called object-relational paradigm mismatch, which (as we will see later) causes many problems in communication between object-oriented and relational environments. For many reasons, including ease of understanding, simplicity of use, efficiency, robustness, and even popularity, we may not discard relational databases. However, the mismatch cannot be eliminated in an effortless and straightforward manner.
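To make the mismatch concrete, here is a small illustrative JDBC sketch (ours, with a hypothetical STUDENT table and Student class): every query forces the developer to copy relational columns into object fields by hand, which is precisely the work ORM tools automate.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.ArrayList;
import java.util.List;

// Hypothetical example: manually mapping rows of a STUDENT table to Student
// objects through plain JDBC.
public class PlainJdbcExample {

    // Minimal domain class on the object-oriented side of the mismatch.
    public static class Student {
        private long id;
        private String firstName;
        private String lastName;
        public void setId(long id) { this.id = id; }
        public void setFirstName(String firstName) { this.firstName = firstName; }
        public void setLastName(String lastName) { this.lastName = lastName; }
    }

    public static List<Student> loadStudents(String jdbcUrl) throws Exception {
        List<Student> students = new ArrayList<Student>();
        try (Connection con = DriverManager.getConnection(jdbcUrl);
             PreparedStatement ps = con.prepareStatement(
                     "SELECT id, first_name, last_name FROM student");
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                Student s = new Student();              // object-oriented side
                s.setId(rs.getLong("id"));              // relational side,
                s.setFirstName(rs.getString("first_name")); // copied field
                s.setLastName(rs.getString("last_name"));   // by field
                students.add(s);
            }
        }
        return students;
    }
}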

Integrating Spring Framework with Hibernate ORM Framework: Part 2

Packt
29 Dec 2009
5 min read
Configuring Hibernate in a Spring context

Spring provides the LocalSessionFactoryBean class as a factory for a SessionFactory object. The LocalSessionFactoryBean object is configured as a bean inside the IoC container, with either a local JDBC DataSource or a shared DataSource from JNDI. The local JDBC DataSource can be configured in turn as an object of org.apache.commons.dbcp.BasicDataSource in the Spring context:

<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource">
  <property name="driverClassName">
    <value>org.hsqldb.jdbcDriver</value>
  </property>
  <property name="url">
    <value>jdbc:hsqldb:hsql://localhost/hiberdb</value>
  </property>
  <property name="username">
    <value>sa</value>
  </property>
  <property name="password">
    <value></value>
  </property>
</bean>

In this case, the org.apache.commons.dbcp.BasicDataSource (the Jakarta Commons Database Connection Pool) must be in the application classpath. Similarly, a shared DataSource can be configured as an object of org.springframework.jndi.JndiObjectFactoryBean. This is the recommended way, used when the connection pool is managed by the application server. Here is the way to configure it:

<bean id="dataSource" class="org.springframework.jndi.JndiObjectFactoryBean">
  <property name="jndiName">
    <value>java:comp/env/jdbc/HiberDB</value>
  </property>
</bean>

When the DataSource is configured, you can configure the LocalSessionFactoryBean instance on top of the configured DataSource, as follows:

<bean id="sessionFactory"
      class="org.springframework.orm.hibernate3.LocalSessionFactoryBean">
  <property name="dataSource">
    <ref bean="dataSource"/>
  </property>
  ...
</bean>

Alternatively, you may set up the SessionFactory object as a server-side resource object in the Spring context. This object is linked in as a JNDI resource in the JEE environment to be shared with multiple applications. In this case, you need to use JndiObjectFactoryBean instead of LocalSessionFactoryBean:

<bean id="sessionFactory" class="org.springframework.jndi.JndiObjectFactoryBean">
  <property name="jndiName">
    <value>java:comp/env/jdbc/hiberDBSessionFactory</value>
  </property>
</bean>

JndiObjectFactoryBean is another factory bean for looking up any JNDI resource. When you use JndiObjectFactoryBean to obtain a preconfigured SessionFactory object, the SessionFactory object should already be registered as a JNDI resource. For this purpose, you may run a server-specific class which creates a SessionFactory object and registers it as a JNDI resource.

LocalSessionFactoryBean uses three properties: dataSource, mappingResources, and hibernateProperties. These properties are as follows:

dataSource refers to a JDBC DataSource object that is already defined as another bean inside the container.
mappingResources specifies the Hibernate mapping files located in the application classpath.
hibernateProperties determines the Hibernate configuration settings.
We have the sessionFactory object configured as follows:

<bean id="sessionFactory"
      class="org.springframework.orm.hibernate3.LocalSessionFactoryBean">
  <property name="dataSource">
    <ref bean="dataSource"/>
  </property>
  <property name="mappingResources">
    <list>
      <value>com/packtpub/springhibernate/ch13/Student.hbm.xml</value>
      <value>com/packtpub/springhibernate/ch13/Teacher.hbm.xml</value>
      <value>com/packtpub/springhibernate/ch13/Course.hbm.xml</value>
    </list>
  </property>
  <property name="hibernateProperties">
    <props>
      <prop key="hibernate.dialect">org.hibernate.dialect.HSQLDialect</prop>
      <prop key="hibernate.show_sql">true</prop>
      <prop key="hibernate.max_fetch_depth">2</prop>
    </props>
  </property>
</bean>

The mappingResources property loads mapping definitions from the classpath. You may use mappingJarLocations or mappingDirectoryLocations to load them from a JAR file, or from any directory of the file system, respectively. It is still possible to configure Hibernate with hibernate.cfg.xml, instead of configuring Hibernate as just shown. To do so, configure sessionFactory with the configLocation property, as follows:

<bean id="sessionFactory"
      class="org.springframework.orm.hibernate3.LocalSessionFactoryBean">
  <property name="dataSource">
    <ref bean="dataSource"/>
  </property>
  <property name="configLocation">
    <value>/conf/hibernate.cfg.xml</value>
  </property>
</bean>

Note that hibernate.cfg.xml specifies the Hibernate mapping definitions in addition to the other Hibernate properties. When the SessionFactory object is configured, you can configure DAO implementations as beans in the Spring context. These DAO beans are the objects which are looked up from the Spring IoC container and consumed by the business layer. Here is an example of DAO configuration:

<bean id="studentDao" class="com.packtpub.springhibernate.ch13.HibernateStudentDao">
  <property name="sessionFactory">
    <ref local="sessionFactory"/>
  </property>
</bean>

This is the DAO configuration for a DAO class that extends HibernateDaoSupport, or that directly uses a SessionFactory property. When the DAO class has a HibernateTemplate property, configure the DAO instance as follows:

<bean id="studentDao" class="com.packtpub.springhibernate.ch13.HibernateStudentDao">
  <property name="hibernateTemplate">
    <bean class="org.springframework.orm.hibernate3.HibernateTemplate">
      <constructor-arg>
        <ref local="sessionFactory"/>
      </constructor-arg>
    </bean>
  </property>
</bean>

According to the preceding declaration, the HibernateStudentDao class has a hibernateTemplate property that is configured via the IoC container, with the inner HibernateTemplate bean initialized through constructor injection, taking a SessionFactory instance as the constructor argument. Now, any client of the DAO implementation can look up the Spring context to obtain the DAO instance. The following code shows a simple class that creates a Spring application context, and then looks up the DAO object from the Spring IoC container:

package com.packtpub.springhibernate.ch13;

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class DaoClient {

    public static void main(String[] args) {
        ApplicationContext ctx = new ClassPathXmlApplicationContext(
                "com/packtpub/springhibernate/ch13/applicationContext.xml");
        StudentDao stdDao = (StudentDao) ctx.getBean("studentDao");

        Student std = new Student();
        // set std properties
        // save std
        stdDao.saveStudent(std);
    }
}
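The HibernateStudentDao class itself is not shown in this excerpt. As a rough sketch (our assumption, not the book's code), the HibernateDaoSupport variant configured above might look like this:

package com.packtpub.springhibernate.ch13;

import org.springframework.orm.hibernate3.support.HibernateDaoSupport;

// Hypothetical sketch of the DAO configured above. Extending HibernateDaoSupport
// gives the class its sessionFactory/hibernateTemplate properties, so the Spring
// XML shown earlier can inject either one.
public class HibernateStudentDao extends HibernateDaoSupport implements StudentDao {

    public void saveStudent(Student student) {
        // HibernateTemplate manages the Session and translates Hibernate
        // exceptions into Spring's DataAccessException hierarchy.
        getHibernateTemplate().saveOrUpdate(student);
    }

    public Student loadStudent(long id) {
        return (Student) getHibernateTemplate().get(Student.class, Long.valueOf(id));
    }
}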

An Introduction to Hibernate and Spring: Part 2

Packt
29 Dec 2009
6 min read
Object relational mapping

As the previous discussion shows, we are looking for a solution that enables applications to work with the object representation of the data in database tables, rather than dealing directly with that data. This approach isolates the business logic from any relational issues that might arise in the persistence layer. The strategy that carries out this isolation is generally called object/relational mapping (O/R Mapping, or simply ORM).

A broad range of ORM solutions have been developed. At the basic level, each ORM framework maps entity objects to JDBC statement parameters when the objects are persisted, and maps the JDBC query results back to the object representation when they are retrieved. Developers typically implement this approach by hand when they use pure JDBC. Furthermore, ORM frameworks often provide more sophisticated object mappings, such as the mapping of inheritance hierarchies and object associations, lazy loading, and caching of the persistent objects. Caching enables ORM frameworks to hold repeatedly fetched data in memory: instead of being fetched from the database on subsequent requests, with the resulting inefficiency and delayed responses, the objects are returned to the application from memory. Lazy loading, another great feature of ORM frameworks, allows an object to be loaded without initializing its associated objects until those objects are accessed.

ORM frameworks usually use mapping definitions, such as metadata, XML files, or Java annotations, to determine how each class and its persistent fields should be mapped onto database tables and columns. These frameworks are usually configured declaratively, which allows the production of more flexible code. Many ORM solutions provide an object query language, which allows querying the persistent objects in an object-oriented form, rather than working directly with tables and columns through SQL. This behavior allows the application to be more isolated from the database's properties.

Hibernate as an O/R Mapping solution

For a long time, Hibernate has been the most popular persistence framework in the Java community. Hibernate aims to overcome the already mentioned impedance mismatch between object-oriented applications and relational databases. With Hibernate, we can treat the database as an object-oriented store, thereby eliminating the mapping between the object-oriented and relational environments. Hibernate is a mediator that connects the object-oriented environment to the relational environment. It provides persistence services for an application by performing all of the required operations in the communication between the object-oriented and relational environments. Storing, updating, removing, and loading can be done regardless of the object's persistent form. In addition, Hibernate increases the application's effectiveness and performance, makes the code less verbose, and allows the code to be more focused on business rules than on persistence logic.

Hibernate fully supports object orientation, meaning all aspects of objects, such as association and inheritance, are properly persisted. Hibernate can also persist object navigation, that is, how an object is navigable through its associated objects. It caches data that is fetched repeatedly and provides lazy loading, which notably enhances database performance. As you will see, Hibernate provides caches at two levels: a first-level built-in cache, and second-level pluggable cache strategies.
The first-level cache is a required property for any ORM to preserve object consistency. It guarantees that the application always works with consistent objects. This stems from the fact that many threads in the application use the ORM to persist objects which might potentially be associated with the same table rows in the database.

Hibernate provides its own query language, the Hibernate Query Language (HQL). At runtime, HQL expressions are transformed into their corresponding SQL statements, based on the database used. Because databases may use different versions of SQL and may expose different features, Hibernate presents a concept called the SQL dialect to distinguish how databases differ. Furthermore, Hibernate allows SQL expressions to be used either declaratively or programmatically, which is useful in specific situations when Hibernate does not satisfy the application's persistence requirements. Hibernate keeps track of object changes through snapshot comparisons to prevent unnecessary updating.

Other O/R Mapping solutions

Although Hibernate is the most popular persistence framework, many other frameworks do exist. Some of these are explained as follows:

Enterprise JavaBeans (EJB): A standard J2EE (Java 2 Enterprise Edition) technology that defines a different type of persistence by presenting entity beans. EJB may be preferred for architectures that rely on declarative middleware services provided by the application server, such as transactions. However, due to its complexity, nontransparent persistence, and need for a container (all of which make it difficult to implement, test, and maintain), EJB is less often used than other persistence frameworks.

iBatis SQL Map: A result set-mapping framework which works at the SQL level, allowing SQL string definitions with parameter placeholders in XML files. At runtime, the placeholders are filled with runtime values, either from simple parameter objects, JavaBeans properties, or a parameter map. To their advantage, SQL maps allow SQL to be fully customized for a specific database. To their disadvantage, however, these maps do not provide an abstraction from the specific features of the target database.

Java Data Objects (JDO): A specification for general object persistence in any kind of data store, including relational databases and object-oriented databases. Most JDO implementations support using metadata mapping definitions. JDO provides its own query language, JDOQL, and its own strategy for change detection.

TopLink: Provides a visual mapping editor (Mapping Workbench) and offers a particularly wide range of object-relational mappings, including a complete set of direct and relational mappings, object-to-XML mappings, and JAXB (Java API for XML Binding) support. TopLink provides a rich query framework that supports an object-oriented expression framework, EJB QL, SQL, and stored procedures. It can be used in either a JSE or a JEE environment.

Hibernate's designers have borrowed many concepts and useful features from these ancestors.

Hibernate versus other frameworks

Unlike the frameworks just mentioned, Hibernate is easy to learn, simple to use, comprehensive, and (unlike EJB) does not need an application server. Hibernate is well documented, and many resources are available for it. Downloaded more than three million times, Hibernate is used in many applications around the world.
To use Hibernate, you need only J2SE 1.2 or later, and it can be used in stand-alone or distributed applications. The current version of Hibernate is 3, but the usage and configuration of this version are very similar to version 2. Most of the changes in Hibernate 3 are compatible with Hibernate 2. Hibernate solves many of the problems of mapping objects to a relational environment, isolating the application from getting involved in many persistence issues. Keep in mind that Hibernate is not a replacement for JDBC. Rather, it can be thought of as a tool that connects to the database through JDBC and presents an object-oriented, application-level view of the database.
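As a brief, illustrative taste of the HQL mentioned above (our sketch, assuming a hibernate.cfg.xml that maps a Student class with a name property):

import java.util.List;
import org.hibernate.Query;
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

// Illustrative Hibernate 3 usage. HQL queries classes and properties,
// not tables and columns; Hibernate generates the dialect-specific SQL.
public class HqlExample {
    public static void main(String[] args) {
        SessionFactory factory =
                new Configuration().configure().buildSessionFactory();
        Session session = factory.openSession();
        try {
            Query query = session.createQuery(
                    "from Student s where s.name like :pattern");
            query.setParameter("pattern", "King%");
            List students = query.list();
            System.out.println("Matched " + students.size() + " students");
        } finally {
            session.close();
            factory.close();
        }
    }
}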

Solving Many-to-Many Relationship in Dimensional Modeling

Packt
28 Dec 2009
3 min read
Bridge table solution

We will use a simplified book sales dimensional model as an example to demonstrate our bridge solution. Our book sales model initially has the SALES_FACT fact table and two dimension tables: BOOK_DIM and DATE_DIM. The granularity of the model is sales amount by date (daily) and by book.

Assume the BOOK_DIM table has five rows:

BOOK_SK  TITLE                 AUTHOR
1        Programming in Java   King, Chan
3        Learning Python       Simpson
2        Introduction to BIRT  Chan, Gupta, Simpson (Editor)
4        Advanced Java         King, Chan
5        Beginning XML         King, Chan (Foreword)

The DATE_DIM table has three rows:

DATE_SK  DT
1        11-DEC-2009
2        12-DEC-2009
3        13-DEC-2009

And the SALES_FACT table has ten rows:

DATE_SK  BOOK_SK  SALES_AMT
1        1        1000
1        2        2000
1        3        3000
1        4        4000
2        2        2500
2        3        3500
2        4        4500
2        5        5500
3        3        8000
3        4        8500

Note that:

The columns with _sk suffixes in the dimension tables are surrogate keys; these surrogate keys relate the rows of the fact table to the rows in the dimension tables.

King and Chan have collaborated on three books: two as co-authors, while in "Beginning XML" Chan's contribution is writing its foreword. Chan also co-authors "Introduction to BIRT".

Simpson singly writes "Learning Python" and is an editor for "Introduction to BIRT".

To analyze daily book sales, you simply run a query, joining the dimension tables to the fact table:

SELECT dt, title, sales_amt
FROM sales_fact s, date_dim d, book_dim b
WHERE s.date_sk = d.date_sk
AND s.book_sk = b.book_sk

This query produces the result showing the daily sales amount of every book that has a sale:

DT         TITLE                 SALES_AMT
11-DEC-09  Advanced Java         4000
11-DEC-09  Introduction to BIRT  2000
11-DEC-09  Learning Python       3000
11-DEC-09  Programming in Java   1000
12-DEC-09  Advanced Java         4500
12-DEC-09  Beginning XML         5500
12-DEC-09  Introduction to BIRT  2500
12-DEC-09  Learning Python       3500
13-DEC-09  Advanced Java         8500
13-DEC-09  Learning Python       8000

You will notice that the model does not allow you to readily analyze the sales by individual writer: the AUTHOR column is multi-valued and not normalized, which violates the dimensional modeling rule. (We can resolve this by creating a view to "bundle" the AUTHOR_GROUP bridge table with the SALES_FACT table, such that the AUTHOR_DIM table connects to the view as a normal dimension. We will create the view a bit later in this section.)

We can solve this issue by adding an AUTHOR_DIM table and its AUTHOR_GROUP bridge table. The AUTHOR_DIM table must contain all individual contributors, which you will have to extract from the books and enter into the table. In our example we have four authors:

AUTHOR_SK  NAME
1          Chan
2          King
3          Gupta
4          Simpson

The weighting_factor column in the AUTHOR_GROUP bridge table contains a fractional numeric value that determines the contribution of an author to a book. Typically the authors have equal contributions to the book they write, but you might want to have a different weighting_factor for different roles; for example, an editor and a foreword writer have smaller weighting_factors than that of an author. The weighting_factors for a book must always add up to 1. The AUTHOR_GROUP bridge table has one surrogate key for every group of authors (a single author is considered a group that has one author only), and as many rows with that surrogate key as there are contributors in the group.
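The excerpt ends before showing a query through the bridge. As an illustration (our sketch, assuming AUTHOR_GROUP carries author_group_sk, author_sk, and weighting_factor columns, and that SALES_FACT is extended with an author_group_sk key, a design decision not shown in this excerpt), weighted sales per author could be computed like this:

-- Hypothetical query allocating each sale to individual authors through
-- the bridge; the column names here are assumed, not from the article.
SELECT a.name,
       SUM(s.sales_amt * g.weighting_factor) AS weighted_sales
FROM   sales_fact   s,
       author_group g,
       author_dim   a
WHERE  s.author_group_sk = g.author_group_sk
AND    g.author_sk       = a.author_sk
GROUP  BY a.name
ORDER  BY weighted_sales DESC

Because the weighting_factors within each group sum to 1, the weighted sales across all authors add up to the grand total in SALES_FACT, so the bridge does not double-count revenue.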