
How-To Tutorials - Application Development

357 Articles
Packt
19 Jan 2010
4 min read

Developing Applications with JBoss and Hibernate: Part 1

Introducing Hibernate

Hibernate provides a bridge between the database and the application by persisting application objects in the database, rather than requiring the developer to write and maintain lots of code to store and retrieve objects. The main configuration file, hibernate.cfg.xml, specifies how Hibernate obtains database connections, either from a JNDI DataSource or from a JDBC connection pool. The configuration file also declares the persistent classes, which are backed by mapping definition files.

This is a sample hibernate.cfg.xml configuration file that handles connections to a MySQL database and maps the com.sample.MyClass class:

```xml
<hibernate-configuration>
  <session-factory>
    <property name="connection.username">user</property>
    <property name="connection.password">password</property>
    <property name="connection.url">jdbc:mysql://localhost/database</property>
    <property name="connection.driver_class">com.mysql.jdbc.Driver</property>
    <property name="dialect">org.hibernate.dialect.MySQLDialect</property>
    <mapping resource="com/sample/MyClass.hbm.xml"/>
  </session-factory>
</hibernate-configuration>
```

From our point of view, it is important to know that Hibernate applications can run in both managed and non-managed environments. An application server is a typical example of a managed environment, which provides services such as connection pooling and transaction management to the applications it hosts. A non-managed environment refers to standalone applications, such as Swing Java clients, that typically lack any built-in services.

In this article, we will focus on managed-environment applications installed on JBoss Application Server. You will not need to download any libraries into your JBoss installation. As a matter of fact, the JBoss persistence layer is designed around the Hibernate API, so it already contains all the core libraries.
Creating a Hibernate application

You can choose different strategies for building a Hibernate application. For example, you could start building Java classes and mapping files from scratch, and then let Hibernate generate the database schema accordingly. You can also start from a database schema and reverse engineer it into Java classes and Hibernate mapping files. We will choose the latter option, which is also the fastest.

Here's an overview of our application. In this example, we will design an employee agenda divided into departments. The persistence model will be developed with Hibernate, using the reverse engineering facet of JBoss Tools. We will then need an interface for recording our employees and departments, and for querying them as well. The web interface will be developed using a simple Model-View-Controller (MVC) pattern and basic JSP 2.0 and servlet features.

The overall architecture of this system resembles the AppStore application that has been used to introduce JPA. As a matter of fact, this example can be used to compare the two persistence models and to decide which option best suits your project's needs. We have added a short section at the end of this example to stress a few important points about this choice.

Setting up the database schema

The following DDL script creates the schema and tables used by the example:
```sql
CREATE SCHEMA hibernate;

GRANT ALL PRIVILEGES ON hibernate.* TO 'jboss'@'localhost' WITH GRANT OPTION;

CREATE TABLE `hibernate`.`department` (
  `department_id` INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
  `department_name` VARCHAR(45) NOT NULL,
  PRIMARY KEY (`department_id`)
) ENGINE = InnoDB;

CREATE TABLE `hibernate`.`employee` (
  `employee_id` INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
  `employee_name` VARCHAR(45) NOT NULL,
  `employee_salary` INTEGER UNSIGNED NOT NULL,
  `employee_department_id` INTEGER UNSIGNED NOT NULL,
  PRIMARY KEY (`employee_id`),
  CONSTRAINT `FK_employee_1` FOREIGN KEY `FK_employee_1` (`employee_department_id`)
    REFERENCES `department` (`department_id`)
    ON DELETE CASCADE
    ON UPDATE CASCADE
) ENGINE = InnoDB;
```

With the first Data Definition Language (DDL) command, we have created a schema named hibernate that will be used to store our tables. Then, we have assigned the necessary privileges on the hibernate schema to the user jboss. Finally, we created a table named department that contains the list of company units, and another table named employee that contains the list of workers. The employee table references the department with a foreign key constraint.
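Reverse engineering this schema with JBoss Tools would produce Hibernate mapping files roughly along these lines. This is a hedged sketch: the class and property names (com.sample.Employee, employeeId, and so on) are assumptions for illustration; the actual generated files may differ in detail.

```xml
<?xml version="1.0"?>
<!DOCTYPE hibernate-mapping PUBLIC
    "-//Hibernate/Hibernate Mapping DTD 3.0//EN"
    "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd">
<hibernate-mapping>
  <!-- Illustrative mapping for the employee table; names are assumptions -->
  <class name="com.sample.Employee" table="employee" catalog="hibernate">
    <id name="employeeId" type="int" column="employee_id">
      <generator class="identity"/>
    </id>
    <property name="employeeName" type="string" column="employee_name" not-null="true"/>
    <property name="employeeSalary" type="int" column="employee_salary" not-null="true"/>
    <!-- The foreign key to department becomes a many-to-one association -->
    <many-to-one name="department" class="com.sample.Department"
                 column="employee_department_id" not-null="true"/>
  </class>
</hibernate-mapping>
```

The AUTO_INCREMENT primary keys map naturally to the identity generator, and the foreign key constraint surfaces as a many-to-one association on the Employee side.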

Packt
18 Jan 2010
8 min read

An Overview of Tomcat 6 Servlet Container: Part 2

Nested components

These components are specific to the Tomcat implementation, and their primary purpose is to enable the various Tomcat containers to perform their tasks.

Valve

A valve is a processing element that can be placed within the processing path of each of Tomcat's containers: engine, host, context, or a servlet wrapper. A Valve is added to a container using the <Valve> element in server.xml. Valves are executed in the order in which they are encountered within the server.xml file. The Tomcat distribution comes with a number of pre-rolled valves. These include:

  • A valve that logs specific elements of a request (such as the remote client's IP address) to a log file or database
  • A valve that lets you control access to a particular web application based on the remote client's IP address or host name
  • A valve that lets you log every request and response header
  • A valve that lets you configure single sign-on access across multiple web applications on a specific virtual host

If these don't meet your needs, you can write your own implementations of org.apache.catalina.Valve and place them into service.

A container does not hold references to individual valves. Instead, it holds a reference to a single entity known as the Pipeline, which represents a chain of valves associated with that container. When a container is invoked to process a request, it delegates the processing to its associated pipeline. The valves in a pipeline are arranged as a sequence, based on how they are defined within the server.xml file. The final valve in this sequence is known as the pipeline's basic valve. This valve performs the task that embodies the core purpose of a given container. Unlike individual valves, the pipeline is not an explicit element in server.xml; instead, it is implicitly defined in terms of the sequence of valves that are associated with a given container. Each valve is aware of the next valve in the pipeline.
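The pipeline just described is essentially the chain-of-responsibility pattern. The following plain-Java sketch (deliberately not Tomcat's actual org.apache.catalina.Valve API; all names here are illustrative) shows how each valve's pre-processing runs on the way in and its post-processing runs as the call stack unwinds, with the basic valve doing the core work last:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative valve contract: each valve pre-processes, invokes the next
// valve in the pipeline, then post-processes as the call returns.
interface Valve {
    void invoke(StringBuilder trace, List<Valve> pipeline, int index);
}

class LoggingValve implements Valve {
    private final String name;

    LoggingValve(String name) { this.name = name; }

    public void invoke(StringBuilder trace, List<Valve> pipeline, int index) {
        trace.append(name).append("-pre ");          // pre-processing
        if (index + 1 < pipeline.size()) {
            // delegate to the next valve in the chain
            pipeline.get(index + 1).invoke(trace, pipeline, index + 1);
        }
        trace.append(name).append("-post ");         // post-processing
    }
}

public class PipelineDemo {
    public static String run() {
        List<Valve> pipeline = new ArrayList<Valve>();
        pipeline.add(new LoggingValve("engine"));
        pipeline.add(new LoggingValve("host"));
        pipeline.add(new LoggingValve("basic"));     // the pipeline's basic valve
        StringBuilder trace = new StringBuilder();
        pipeline.get(0).invoke(trace, pipeline, 0);
        return trace.toString().trim();
    }

    public static void main(String[] args) {
        System.out.println(run());
        // → engine-pre host-pre basic-pre basic-post host-post engine-post
    }
}
```

The trace shows the same nesting as the servlet specification's filter chains: requests descend through the valves in order, and responses bubble back up in reverse.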
After it performs its pre-processing, it invokes the next valve in the chain, and when that call returns, it performs its own post-processing before returning. This is very similar to what happens in filter chains within the servlet specification.

In this image, the engine's configured valve(s) fire when an incoming request is received. An engine's basic valve determines the destination host and delegates processing to that host. The destination host's (www.host1.com) valves now fire in sequence. The host's basic valve then determines the destination context (here, Context1) and delegates processing to it. The valves configured for Context1 now fire, and processing is then delegated by the context's basic valve to the appropriate wrapper, whose basic valve hands off processing to its wrapped servlet. The response then returns over the same path in reverse.

A valve becomes part of the Tomcat server's implementation and provides a way for developers to inject custom code into the servlet container's processing of a request. As a result, the class files for custom valves must be deployed to CATALINA_HOME/lib, rather than to the WEB-INF/classes directory of a deployed application. As they are not part of the servlet specification, valves are non-portable elements of your enterprise application. Therefore, if you rely on a particular valve, you will need to find equivalent alternatives in a different application server. It is important to note that valves must be very efficient so as not to introduce inordinate delays into the processing of a request.

Realm

Container-managed security works by having the container handle the authentication and authorization aspects of an application. Authentication is the task of ensuring that the user is who she says she is, and authorization is the task of determining whether the user may perform some specific action within an application.
The advantage of container-managed security is that security can be configured declaratively by the application's deployer. That is, the assignment of passwords to users and the mapping of users to roles can all be done through configuration, which can then be applied across multiple web applications without any coding changes being required to those web applications.

Application Managed Security

The alternative is having the application manage security. In this case, your web application code is the sole arbiter of whether a user may access some specific functionality or resource within your application.

For container-managed security to work, you need to assemble the following components:

  • Security constraints: Within your web application's deployment descriptor, web.xml, you must identify the URL patterns for restricted resources, as well as the user roles that would be permitted to access these resources.
  • Credential input mechanism: In the web.xml deployment descriptor, you specify how the container should prompt the user for authentication credentials. This is usually accomplished by showing the user a dialog that prompts for a user name and password, but it can also be configured to use other mechanisms, such as a custom login form.
  • Realm: This is a data store that holds user names, passwords, and roles, against which the user-supplied credentials are checked. It can be a simple XML file, a table in a relational database that is accessed using the JDBC API, or a Lightweight Directory Access Protocol (LDAP) server that can be accessed through the JNDI API. A realm provides Tomcat with a consistent mechanism for accessing these disparate data sources.

All three of the above components are technically independent of each other. The power of container-based security is that you can assemble your own security solution by mixing and matching selections from each of these groups.
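As a sketch, the first two components could be declared in web.xml like this (the URL pattern, role name, and realm name are illustrative, not taken from the article):

```xml
<!-- Security constraint: restrict /admin/* to users in the 'manager' role -->
<security-constraint>
  <web-resource-collection>
    <web-resource-name>Admin pages</web-resource-name>
    <url-pattern>/admin/*</url-pattern>
  </web-resource-collection>
  <auth-constraint>
    <role-name>manager</role-name>
  </auth-constraint>
</security-constraint>

<!-- Credential input mechanism: the HTTP BASIC dialog in this case -->
<login-config>
  <auth-method>BASIC</auth-method>
  <realm-name>Example Realm</realm-name>
</login-config>

<security-role>
  <role-name>manager</role-name>
</security-role>
```

The third component, the realm itself, is configured on the Tomcat side (in server.xml or a context descriptor) rather than in web.xml, which is exactly why the three pieces can be mixed and matched.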
Now, when a user requests a resource, Tomcat will check to see whether a security constraint exists for this resource. For a restricted resource, Tomcat will automatically request the user's credentials and will then check these credentials against the configured realm. Access to the resource will be allowed only if the user's credentials are valid and if the user is a member of a role that is configured to access that resource.

Executor

This is a new element, available only since 6.0.11. It allows you to configure a shared thread pool that is available to all your connectors. This places an upper limit on the number of concurrent threads that may be started by your connectors. Note that this limit applies even if a particular connector has not used up all the threads configured for it.

Listener

Every major Tomcat component implements the org.apache.catalina.Lifecycle interface. This interface lets interested listeners register with a component, to be notified of lifecycle events, such as the starting or stopping of that component. A listener implements the org.apache.catalina.LifecycleListener interface and its lifecycleEvent() method, which takes a LifecycleEvent that represents the event that has occurred. This gives you an opportunity to inject your own custom processing into Tomcat's lifecycle.

Manager

Sessions make 'applications' possible over the stateless HTTP protocol. A session represents a conversation between a client and a server, and is implemented by a javax.servlet.http.HttpSession instance that is stored on the server and associated with a unique identifier that is passed back by the client on each interaction. A new session is created on request and remains alive on the server either until it times out after a period of inactivity by its associated client, or until it is explicitly invalidated, for instance, by the client choosing to log out.
The above image shows a very simplistic view of the session mechanism within Tomcat. An org.apache.catalina.Manager component is used by the Catalina engine to create, find, or invalidate sessions. This component is responsible for the sessions that are created for a context and for their life cycles.

The default Manager implementation simply retains sessions in memory, but supports session survival across server restarts. It writes out all active sessions to disk when the server is stopped and will reload them into memory when the server is started up again. A <Manager> must be a child of a <Context> element and is responsible for managing the sessions associated with that web application context.

The default Manager takes attributes such as the algorithm that is used to generate its session identifiers, the frequency in seconds with which the manager should check for expired sessions, the maximum number of active sessions supported, and the file in which the sessions should be stored. Other implementations of Manager are provided that let you persist sessions to a durable data store such as a file or a JDBC database.
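For instance, a context-level manager that caps active sessions and names the file used for session survival across restarts might be configured as follows (a sketch; the attribute values and context path are illustrative):

```xml
<Context path="/myapp" docBase="myapp">
  <!-- Default in-memory manager: sessions are saved to SESSIONS.ser
       on shutdown and reloaded on startup -->
  <Manager className="org.apache.catalina.session.StandardManager"
           maxActiveSessions="500"
           pathname="SESSIONS.ser"/>
</Context>
```

Swapping the className for a persistent manager implementation is how sessions can be pushed to a durable store instead of plain memory.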

Packt
18 Jan 2010
11 min read

An Overview of Tomcat 6 Servlet Container: Part 1

In practice, it is highly unlikely that you will interface an EJB container from WebSphere and a JMS implementation from WebLogic with the Tomcat servlet container from the Apache foundation, but it is at least theoretically possible. Note that the term 'interface', as it is used here, also encompasses abstract classes. The specification's API might provide a template implementation whose operations are defined in terms of some basic set of primitives that are kept abstract for the service provider to implement. A service provider is required to make available concrete implementations of these interfaces and abstract classes. For example, the HttpSession interface is implemented by Tomcat in the form of org.apache.catalina.session.StandardSession.

Let's examine the image of the Tomcat container: The objective of this article is to cover the primary request processing components that are present in this image. Advanced topics, such as clustering and security, are shown shaded in this image and are not covered. In this image, the '+' symbol after the Service, Host, Context, and Wrapper instances indicates that there can be one or more of these elements. For instance, a Service may have a single Engine, but an Engine can contain one or more Hosts. In addition, the whirling circle represents a pool of request processor threads. Here, we will fly over the architecture of Tomcat from a 10,000-foot perspective, taking in the sights as we go.

Component taxonomy

Tomcat's architecture follows the construction of a Matryoshka doll from Russia. In other words, it is all about containment, where one entity contains another, and that entity in turn contains yet another. In Tomcat, a 'container' is a generic term that refers to any component that can contain another, such as a Server, Service, Engine, Host, or Context. Of these, the Server and Service components are special containers, designated as Top Level Elements, as they represent aspects of the running Tomcat instance.
All the other Tomcat components are subordinate to these top level elements. The Engine, Host, and Context components are officially termed Containers, and refer to components that process incoming requests and generate an appropriate outgoing response.

Nested Components can be thought of as sub-elements that can be nested inside either Top Level Elements or other Containers to configure how they function. Examples of nested components include the Valve, which represents a reusable unit of work; the Pipeline, which represents a chain of Valves strung together; and a Realm, which helps set up container-managed security for a particular container. Other nested components include the Loader, which is used to enforce the specification's guidelines for servlet class loading; the Manager, which supports session management for each web application; the Resources component, which represents the web application's static resources and a mechanism to access these resources; and the Listener, which allows you to insert custom processing at important points in a container's life cycle, such as when a component is being started or stopped. Not all nested components can be nested within every container.

A final major component, which falls into its own category, is the Connector. It represents the connection end point that an external client (such as a web browser) can use to connect to the Tomcat container.

Before we go on to examine these components, let's take a quick look at how they are organized structurally. Note that this diagram only shows the key properties of each container. When Tomcat is started, the Java Virtual Machine (JVM) instance in which it runs will contain a singleton Server top level element, which represents the entire Tomcat server.
A Server will usually contain just one Service object, which is a structural element that combines one or more Connectors (for example, an HTTP and an HTTPS connector) that funnel incoming requests through to a single Catalina servlet Engine. The Engine represents the core request processing code within Tomcat and supports the definition of multiple Virtual Hosts within it. A virtual host allows a single running Tomcat engine to make it seem to the outside world that there are multiple separate domains (for example, www.my-site.com and www.your-site.com) being hosted on a single machine.

Each virtual host can, in turn, support multiple web applications known as Contexts that are deployed to it. A context is represented using the web application format specified by the servlet specification, either as a single compressed WAR (Web Application Archive) file or as an uncompressed directory. In addition, a context is configured using a web.xml file, as defined by the servlet specification. A context can, in turn, contain multiple servlets that are deployed into it, each of which is wrapped in a Wrapper component. The Server, Service, Connector, Engine, Host, and Context elements that will be present in a particular running Tomcat instance are configured using the server.xml configuration file.

Architectural benefits

This architecture has a couple of useful features. It not only makes it easy to manage component life cycles (each component manages the life cycle notifications for its children), but also to dynamically assemble a running Tomcat server instance based on the information that has been read from configuration files at startup. In particular, the server.xml file is parsed at startup, and its contents are used to instantiate and configure the defined elements, which are then assembled into a running Tomcat instance. The server.xml file is read only once, and edits to it will not be picked up until Tomcat is restarted.
This architecture also eases the configuration burden by allowing child containers to inherit the configuration of their parent containers. For instance, a Realm defines a data store that can be used for authentication and authorization of users who are attempting to access protected resources within a web application. For ease of configuration, a realm that is defined for an engine applies to all its children hosts and contexts. At the same time, a particular child, such as a given context, may override its inherited realm by specifying its own realm to be used in place of its parent's realm.

Top Level Components

The Server and Service container components exist largely as structural conveniences. A Server represents the running instance of Tomcat and contains one or more Service children, each of which represents a collection of request processing components.

Server

A Server represents the entire Tomcat instance, is a singleton within a Java Virtual Machine, and is responsible for managing the life cycle of its contained services. The following image depicts the key aspects of the Server component. As shown, a Server instance is configured using the server.xml configuration file. The root element of this file is <Server> and represents the Tomcat instance. Its default implementation is provided by org.apache.catalina.core.StandardServer, but you can specify your own custom implementation through the className attribute of the <Server> element.

A key aspect of the Server is that it opens a server socket on port 8005 (the default) to listen for a shutdown command (by default, this command is the text string SHUTDOWN). When this shutdown command is received, the server gracefully shuts itself down. For security reasons, the connection requesting the shutdown must be initiated from the same machine that is running this instance of Tomcat.
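The shutdown port and command described above correspond directly to attributes on the root element of server.xml; a typical default looks like this:

```xml
<!-- Port 8005 and the SHUTDOWN string are the defaults;
     both can be changed here -->
<Server port="8005" shutdown="SHUTDOWN">
  <!-- Service definitions go here -->
</Server>
```

Changing the shutdown string to something non-obvious is a common hardening step, since any local process that can open port 8005 can otherwise stop the server.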
A Server also provides an implementation of the Java Naming and Directory Interface (JNDI) service, allowing you to register arbitrary objects (such as data sources) or environment variables, by name. At runtime, individual components (such as servlets) can retrieve this information by looking up the desired object name in the server's JNDI bindings. While a JNDI implementation is not integral to the functioning of a servlet container, it is part of the Java EE specification and is a service that servlets have a right to expect from their application servers or servlet containers. Implementing this service makes for easy portability of web applications across containers.

While there is always just one server instance within a JVM, it is entirely possible to have multiple server instances running on a single physical machine, each encased in its own JVM. Doing so insulates web applications that are running on one VM from errors in applications that are running on others, and simplifies maintenance by allowing a JVM to be restarted independently of the others. This is one of the mechanisms used in a shared hosting environment (the other is virtual hosting, which we will see shortly) where you need isolation from other web applications that are running on the same physical server.

Service

While the Server represents the Tomcat instance itself, a Service represents the set of request processing components within Tomcat. A Server can contain more than one Service, where each service associates a group of Connector components with a single Engine. Requests from clients are received on a connector, which in turn funnels them through into the engine, which is the key request processing component within Tomcat. The image shows connectors for HTTP, HTTPS, and the Apache JServ Protocol (AJP). There is very little reason to modify this element, and the default Service instance is usually sufficient.
A hint as to when you might need more than one Service instance can be found in the above image. As shown, a service aggregates connectors, each of which monitors a given IP address and port, and responds in a given protocol. An example use case for having multiple services, therefore, is when you want to partition your services (and their contained engines, hosts, and web applications) by IP address and/or port number. For instance, you might configure your firewall to expose the connectors for one service to an external audience, while restricting your other service to hosting intranet applications that are visible only to internal users. This would ensure that an external user could never access your intranet application, as that access would be blocked by the firewall. The Service, therefore, is nothing more than a grouping construct. It does not currently add any other value to the proceedings.

Connectors

A Connector is a service endpoint on which a client connects to the Tomcat container. It serves to insulate the engine from the various communication protocols that are used by clients, such as HTTP, HTTPS, or the Apache JServ Protocol (AJP). Tomcat can be configured to work in two modes: standalone, or in conjunction with a separate web server.

In standalone mode, Tomcat is configured with HTTP and HTTPS connectors, which make it act like a full-fledged web server by serving up static content when requested, as well as by delegating to the Catalina engine for dynamic content. Out of the box, Tomcat provides three possible implementations of the HTTP/1.1 and HTTPS connectors for this mode of operation. The most common are the standard connectors, known as Coyote, which are implemented using standard Java I/O mechanisms.
You may also make use of a couple of newer implementations: one that uses the non-blocking NIO features of Java 1.4, and another that takes advantage of native code optimized for a particular operating system through the Apache Portable Runtime (APR). Note that both the Connector and the Engine run in the same JVM. In fact, they run within the same Server instance.

In conjunction mode, Tomcat plays a supporting role to a web server, such as Apache httpd or Microsoft's IIS. The client here is the web server, communicating with Tomcat either through an Apache module or an ISAPI DLL. When this module determines that a request must be routed to Tomcat for processing, it will communicate this request to Tomcat using AJP, a binary protocol that is designed to be more efficient than the text-based HTTP when communicating between a web server and Tomcat. On the Tomcat side, an AJP connector accepts this communication and translates it into a form that the Catalina engine can process. In this mode, Tomcat runs in its own JVM as a separate process from the web server.

In either mode, the primary attributes of a Connector are the IP address and port on which it will listen for incoming requests, and the protocol that it supports. Another key attribute is the maximum number of request processing threads that can be created to concurrently handle incoming requests. Once all these threads are busy, any incoming request will be ignored until a thread becomes available.

By default, a connector listens on all the IP addresses for the given physical machine (its address attribute defaults to 0.0.0.0). However, a connector can be configured to listen on just one of the IP addresses for a machine. This will constrain it to accept connections from only that specified IP address. Any request that is received by any one of a service's connectors is passed on to the service's single engine.
This engine, known as Catalina, is responsible for the processing of the request, and the generation of the response. The engine returns the response to the connector, which then transmits it back to the client using the appropriate communication protocol.
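Putting those attributes together, a standalone HTTP connector bound to a single address with an explicit thread limit might be declared as follows (the address and values are illustrative, not defaults):

```xml
<!-- Listen only on 192.168.1.10:8080, with at most 150 worker threads -->
<Connector protocol="HTTP/1.1"
           address="192.168.1.10"
           port="8080"
           maxThreads="150"
           connectionTimeout="20000"
           redirectPort="8443"/>
```

Omitting the address attribute would restore the default behaviour of listening on all of the machine's IP addresses.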

Packt
05 Jan 2010
7 min read

Configuring JBoss Application Server 5

JBoss Web Server currently uses the Apache Tomcat 6.0 release, and it ships as a service archive (SAR) application in the deploy folder. The location of the embedded web server has changed at almost every new release of JBoss. The following table can be a useful reference if you are using different versions of JBoss:

JBoss release   Location of Tomcat
5.0.0 GA        deploy/jbossweb.sar
4.2.2 GA        deploy/jboss-web.deployer
4.0.5 GA        deploy/jbossweb-tomcat55.sar
3.2.X           deploy/jbossweb-tomcat50.sar

The main configuration file is server.xml which, by default, has the following minimal configuration:

```xml
<Server>
  <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
  <Listener className="org.apache.catalina.core.JasperListener" />
  <Service name="jboss.web">
    <Connector protocol="HTTP/1.1" port="8080" address="${jboss.bind.address}"
               connectionTimeout="20000" redirectPort="8443" />
    <Connector protocol="AJP/1.3" port="8009" address="${jboss.bind.address}"
               redirectPort="8443" />
    <Engine name="jboss.web" defaultHost="localhost">
      <Realm className="org.jboss.web.tomcat.security.JBossWebRealm"
             certificatePrincipal="org.jboss.security.auth.certs.SubjectDNMapping"
             allRolesMode="authOnly" />
      <Host name="localhost">
        <Valve className="org.jboss.web.tomcat.service.jca.CachedConnectionValve"
               cachedConnectionManagerObjectName="jboss.jca:service=CachedConnectionManager"
               transactionManagerObjectName="jboss:service=TransactionManager" />
      </Host>
    </Engine>
  </Service>
</Server>
```

Following is a short description of the key elements of the configuration:

Element     Description
Server      Tomcat itself, that is, an instance of the web application server; a top-level component.
Service     A structural element that groups one or more Connectors with a single Engine.
Connector   The gateway to the Tomcat Engine. It ensures that requests are received from clients and are assigned to the Engine.
Engine      Handles all requests. It examines the HTTP headers to determine the virtual host or context to which requests should be passed.
Host        One virtual host. Each virtual host is differentiated by a fully qualified hostname.
Valve       A component that is inserted into the request processing pipeline for the associated Catalina container. Each Valve has distinct processing capabilities.
Realm       Contains a set of users and roles.

As you can see, all the elements are organized in a hierarchical structure where the Server element acts as the top-level container. The lowest elements in the configuration are Valve and Realm, which can be nested into Engine or Host elements to provide unique processing capabilities and role management.

Customizing connectors

Most of the time, when you want to customize your web container, you will have to change some properties of the connector:

```xml
<Connector protocol="HTTP/1.1" port="8080" address="${jboss.bind.address}"
           connectionTimeout="20000" redirectPort="8443" />
```

A complete list of the connector properties can be found on the Apache Tomcat site (http://tomcat.apache.org/). Here, we'll discuss the most useful connector properties:

  • port: The TCP port number on which this connector will create a server socket and await incoming connections. Your operating system will allow only one server application to listen to a particular port number on a particular IP address.
  • acceptCount: The maximum queue length for incoming connection requests when all possible request processing threads are in use. Any requests received when the queue is full will be refused. The default value is 10.
  • connectionTimeout: The number of milliseconds the connector will wait after accepting a connection for the request URI line to be presented. The default value is 60000 (that is, 60 seconds).
address: For servers with more than one IP address, this attribute specifies which address will be used for listening on the specified port. By default, the port will be used on all IP addresses associated with the server.

enableLookups: Set to true if you want to perform DNS lookups in order to return the actual hostname of the remote client, and to false to skip the DNS lookup and return the IP address in string form instead (thereby improving performance). By default, DNS lookups are enabled.

maxHttpHeaderSize: The maximum size of the request and response HTTP header, specified in bytes. If not specified, this attribute is set to 4096 (4 KB).

maxPostSize: The maximum size in bytes of the POST that will be handled by the container FORM URL parameter parsing. The limit can be disabled by setting this attribute to a value less than or equal to zero. If not specified, this attribute is set to 2097152 (2 megabytes).

maxThreads: The maximum number of request processing threads to be created by this connector, which therefore determines the maximum number of simultaneous requests that can be handled. If not specified, this attribute is set to 200.

The new Apache Portable Runtime connector

Apache Portable Runtime (APR) is a core Apache 2.x library designed to provide superior scalability, performance, and better integration with native server technologies. The mission of the APR project is to create and maintain software libraries that provide a predictable and consistent interface to underlying platform-specific implementations. The primary goal is to provide an API to which software developers may code and be assured of predictable, if not identical, behaviour regardless of the platform on which their software is built, relieving them of the need to code special-case conditions to work around or take advantage of platform-specific deficiencies or features.

The high performance of the new APR connector is made possible by the introduction of socket pollers for persistent connections (keepalive). This increases the scalability of the server, and by using sendfile system calls, static content is delivered faster and with lower CPU utilization. Once you have set up the APR connector, you are allowed to use the following additional properties in your connector:

keepAliveTimeout: The number of milliseconds the APR connector will wait for another HTTP request before closing the connection. If not set, this attribute uses the default value set for the connectionTimeout attribute.

pollTime: The duration of a poll call, in microseconds; by default it is 2000 (2 ms). If you decrease this value, the connector will issue more poll calls, thus reducing the latency of connections. Be aware that this will put slightly more load on the CPU as well.

pollerSize: The number of sockets that the keepalive poller can hold at a given time. The default value is 768, corresponding to 768 keepalive connections.

useSendfile: Enables using kernel sendfile for sending certain static files. The default value is true.

sendfileSize: The number of sockets that the poller thread dispatches for sending static files asynchronously. The default value is 1024.

If you want to consult the full documentation of APR, you can visit http://apr.apache.org/.

Installing the APR connector

In order to install the APR connector, you need to add some native libraries to your JBoss server. The native libraries can be found at http://www.jboss.org/jbossweb/downloads/jboss-native/. Download the version that is appropriate for your OS. Once you are ready, simply unzip the content of the archive into your JBOSS_HOME directory.
As an example, Unix users (such as HP-UX users) would need to perform the following steps:

cd jboss-5.0.0.GA
tar xvfz jboss-native-2.0.6-hpux-parisc2-ssl.tar.gz

Now restart JBoss and, from the console, verify that the connector is bound to Http11AprProtocol.

A word of caution!
At the time of writing, the APR library still has some open issues that prevent it from loading correctly on some platforms, particularly on 32-bit Windows. Please consult the JBoss Issue Tracker (https://jira.jboss.org/jira/secure/IssueNavigator.jspa?) to verify that there are no open issues for your platform.
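As a concrete illustration of the properties discussed above, a connector tuned for a moderately busy site might look like the following sketch. The attribute values here are illustrative starting points chosen for this example, not recommendations from the JBoss documentation; tune them against your own load tests:

```xml
<!-- Illustrative HTTP connector tuning; the values are example starting points -->
<Connector protocol="HTTP/1.1" port="8080"
           address="${jboss.bind.address}"
           maxThreads="250"
           acceptCount="150"
           connectionTimeout="20000"
           enableLookups="false"
           maxHttpHeaderSize="8192"
           redirectPort="8443" />
```

With the APR native libraries installed, APR-specific attributes such as keepAliveTimeout and pollerSize can be added to the same Connector element.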
Starting Up Tomcat 6: Part 1

Packt
31 Dec 2009
Using scripts

The Tomcat startup scripts are found within your project under the bin folder. Each script is available either as a Windows batch file (.bat) or as a Unix shell script (.sh). The behavior of either variant is very similar, and so I'll focus on their general structure and responsibilities, rather than on any operating system differences. However, it is to be noted that the Unix scripts are often more readable than the Windows batch files. In the following text, the specific extension has been omitted, and either .bat or .sh should be substituted as appropriate. Furthermore, while the Windows file separator '\' has been used, it can be substituted with a '/' as appropriate.

The overall structure of the scripts is as shown; you will most often invoke the startup script. Note that the shutdown script has a similar call structure. However, given its simplicity, it lends itself to fairly easy investigation, and so I leave it as an exercise for the reader. Both startup and shutdown are simple convenience wrappers for invoking the catalina script. For example, invoking startup.bat with no command line arguments calls catalina.bat with an argument of start. On the other hand, running shutdown.bat calls catalina.bat with a command line argument of stop. Any additional command line arguments that you pass to either of these scripts are passed right along to catalina.bat.

The startup script has the following three main goals:

It ensures that the CATALINA_HOME environment variable is set; if not, it is set to the Tomcat installation's root folder. The Unix variant defers this action to the catalina script.

It looks for the catalina script within the CATALINA_HOME\bin folder. The Unix variant looks for it in the same folder as the startup script. If the catalina script cannot be located, we cannot proceed, and the script aborts.

It invokes the catalina script with the command line argument start, followed by any other arguments supplied to the startup script.
The catalina script is the actual workhorse in this process. Its tasks can be broadly grouped into two categories: first, it must ensure that all the environment variables needed for it to function have been set up correctly, and second, it must execute the main class file with the appropriate options.

Setting up the environment

In this step, the catalina script sets the CATALINA_HOME, CATALINA_BASE, and CATALINA_TMPDIR environment variables, sets variables to point to various Java executables, and updates the CLASSPATH variable to limit the repositories scanned by the System class loader.

It ensures that the CATALINA_HOME environment variable is set appropriately. This is necessary because catalina can be called independently of the startup script. Next, it calls the setenv script to give you a chance to set any installation-specific environment variables that affect the processing of this script. This includes variables that set the path of your JDK or JRE installation, any Java runtime options that need to be used, and so on. If CATALINA_BASE is set, then the CATALINA_BASE\bin\setenv script is called. Else, the version under CATALINA_HOME is used.

If CATALINA_HOME\bin\setclasspath does not exist, processing aborts. Else, the BASEDIR environment variable is set to CATALINA_HOME and the setclasspath script is invoked. This script performs the following activities:

It verifies that either a JDK or a JRE is available. If both the JAVA_HOME and JRE_HOME environment variables are not set, it aborts processing after warning the user.

If we are running Tomcat in debug mode, that is, if '–debug' has been specified as a command line argument, it verifies that a JDK (and not just a JRE) is available.

If the JAVA_ENDORSED_DIRS environment variable is not set, it is defaulted to BASEDIR\endorsed. This variable is fed to the JVM as the value of the -Djava.endorsed.dirs system property.

The CLASSPATH is then truncated to point at just JAVA_HOME\lib\tools.jar.
This is a key aspect of the startup process, as it ensures that any CLASSPATH set in your environment is now overridden. Note that tools.jar contains the classes needed to compile and run Java programs, and to support tools such as Javadoc and native2ascii. For instance, the class com.sun.tools.javac.main.Main that is found in tools.jar represents the javac compiler. A Java program could dynamically create a Java class file and then compile it using an instance of this compiler class.

Finally, variables are set to point to various Java executables, such as java, javaw (identical to java, but without an associated console window), jdb (the Java debugger), and javac (the Java compiler). These are referred to using the _RUNJAVA, _RUNJAVAW, _RUNJDB, and _RUNJAVAC environment variables respectively.

The CLASSPATH is updated to also include CATALINA_HOME\bin\bootstrap.jar, which contains the classes that are needed by Tomcat during the startup process. In particular, this includes the org.apache.catalina.startup.Bootstrap class. Note that including bootstrap.jar on the CLASSPATH also automatically includes commons-daemon.jar, tomcat-juli.jar, and tomcat-coyote.jar, because the manifest file of bootstrap.jar lists these dependencies in its Class-Path attribute.

If the JSSE_HOME environment variable is set, additional Java Secure Sockets Extension (JSSE) JARs are also appended to the CLASSPATH. Secure Sockets Layer (SSL) is a technology that allows clients and servers to communicate over a secured connection where all data transmissions are encrypted by the sender. SSL also allows clients and servers to determine whether the other party is indeed who they say they are, using certificates. The JSSE API allows Java programs to create and use SSL connections. Though this API began life as a standalone extension, the JSSE classes have been integrated into the JDK since Java 1.4.

If the CATALINA_BASE variable is not set, it is defaulted to CATALINA_HOME.
Similarly, if the Tomcat work directory location, CATALINA_TMPDIR, is not specified, then it is set to CATALINA_BASE\temp. Finally, if the file CATALINA_BASE\conf\logging.properties exists, then additional logging-related system properties are appended to the JAVA_OPTS environment variable.

All the CLASSPATH machinations described above have effectively limited the repository locations monitored by the System class loader. This is the class loader responsible for finding classes located on the CLASSPATH. At this point, our execution environment has largely been validated and configured. The script notifies the user of the current execution configuration by writing out the paths for CATALINA_BASE, CATALINA_HOME, and CATALINA_TMPDIR to the console. If we are starting up Tomcat in debug mode, then the JAVA_HOME variable is also written; else, JRE_HOME is emitted instead. These are the lines that we've grown accustomed to seeing when starting up Tomcat:

C:\tomcat\TOMCAT_6_0_20\output\build\bin>startup
Using CATALINA_BASE:   C:\tomcat\TOMCAT_6_0_20\output\build
Using CATALINA_HOME:   C:\tomcat\TOMCAT_6_0_20\output\build
Using CATALINA_TMPDIR: C:\tomcat\TOMCAT_6_0_20\output\build\temp
Using JRE_HOME:        C:\java\jdk1.6.0_14

With all this housekeeping done, the script is now ready to actually start the Tomcat instance.

Executing the requested command

This is where the actual action begins. The catalina script can be invoked with the following commands:

debug [-security], which is used to start Catalina in a debugger
jpda start, which is used to start Catalina under a JPDA debugger
run [-security], which is used to start Catalina in the current window
start [-security], which starts Catalina in a separate window
stop, which is used to stop Catalina
version, which prints the version of Tomcat

The use of a security manager, as determined by the optional –security argument, is out of scope for this article.
The easiest way to understand this part of catalina.bat is to deconstruct the command line that is executed to start up the Tomcat instance. This command takes this general form (all in one line):

_EXECJAVA JAVA_OPTS CATALINA_OPTS JPDA_OPTS DEBUG_OPTS
  -Djava.endorsed.dirs="JAVA_ENDORSED_DIRS"
  -classpath "CLASSPATH"
  -Djava.security.manager
  -Djava.security.policy=="SECURITY_POLICY_FILE"
  -Dcatalina.base="CATALINA_BASE"
  -Dcatalina.home="CATALINA_HOME"
  -Djava.io.tmpdir="CATALINA_TMPDIR"
  MAINCLASS CMD_LINE_ARGS ACTION

Where:

_EXECJAVA is the executable that should be used to execute our main class. This defaults to the Java application launcher, _RUNJAVA. However, if debug was supplied as a command-line argument to the script, this is set to _RUNJDB instead.

MAINCLASS is set to org.apache.catalina.startup.Bootstrap.

ACTION defaults to start, but is set to stop if the Tomcat instance is being stopped.

CMD_LINE_ARGS are any arguments specified on the command line that follow the arguments that are consumed by catalina.

SECURITY_POLICY_FILE defaults to CATALINA_BASE\conf\catalina.policy.

JAVA_OPTS and CATALINA_OPTS are used to carry arguments, such as maximum heap memory settings or system properties, which are intended for the Java launcher. The difference between the two is that CATALINA_OPTS is cleared out when catalina is invoked with the stop command. In addition, as indicated by its name, the latter is targeted primarily at options for running a Tomcat instance.

JPDA_OPTS sets the Java Platform Debugger Architecture (JPDA) options to support remote debugging of this Tomcat instance. The default options are set in the script. They choose TCP/IP as the protocol used to connect to the debugger (transport=dt_socket), mark this JVM as a server application (server=y), set the host and port number on which the server should listen for remote debugging requests (address=8000), and let the application run until it encounters a breakpoint (suspend=n).
DEBUG_OPTS sets the -sourcepath flag when the Java Debugger is used to launch the Tomcat instance. The other variables are set as seen in the previous section. At this point, control passes to the main() method in Bootstrap.java. This is where the steps that are unique to script-based startup end. The rest of this article follows along with the logic coded into Bootstrap.java and Catalina.java.
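To make the command assembly concrete, the following plain-Java sketch mimics how catalina.bat conceptually stitches the launch command together from its variables. The class name and the option values are invented for illustration; only the MAINCLASS and ACTION values mirror what the script actually uses:

```java
import java.util.ArrayList;
import java.util.List;

public class CatalinaCommandSketch {
    // Builds a launch command the way catalina.bat conceptually does.
    // The paths below are hypothetical placeholders, not real installation paths.
    static String buildCommand(String javaOpts, String catalinaOpts, String action) {
        List<String> parts = new ArrayList<>();
        parts.add("java"); // _EXECJAVA defaults to _RUNJAVA
        if (!javaOpts.isEmpty()) parts.add(javaOpts);
        // CATALINA_OPTS is cleared out when catalina is invoked with stop
        if (!"stop".equals(action) && !catalinaOpts.isEmpty()) parts.add(catalinaOpts);
        parts.add("-Djava.endorsed.dirs=\"CATALINA_HOME/endorsed\"");
        parts.add("-classpath \"CATALINA_HOME/bin/bootstrap.jar\"");
        parts.add("-Dcatalina.base=\"CATALINA_BASE\"");
        parts.add("-Dcatalina.home=\"CATALINA_HOME\"");
        parts.add("-Djava.io.tmpdir=\"CATALINA_BASE/temp\"");
        parts.add("org.apache.catalina.startup.Bootstrap"); // MAINCLASS
        parts.add(action);                                  // ACTION
        return String.join(" ", parts);
    }

    public static void main(String[] args) {
        System.out.println(buildCommand("-Xmx512m", "-Dmy.app.flag=true", "start"));
        System.out.println(buildCommand("-Xmx512m", "-Dmy.app.flag=true", "stop"));
    }
}
```

Running the sketch shows that the stop invocation drops the CATALINA_OPTS value while keeping JAVA_OPTS, exactly the asymmetry described above.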
Creating a Web Application on JBoss AS 5

Packt
31 Dec 2009
Wonder what was the first message sent through the Internet? At 22:30 hours on October 29, 1969, a message was transmitted using ARPANET (the predecessor of the global Internet) on a host-to-host connection. It was meant to transmit "login". However, it transmitted just "lo" and crashed.

Developing web layout

The basic component of any Java web application is the servlet. Born in the middle of the 90s, servlets quickly gained success against their competitors, the CGI scripts, because of some innovative features, especially the ability to execute requests concurrently without the overhead of creating a new process for each request. However, a few things were missing. For example, the servlet API did not address any APIs specifically for creating the client GUI. This resulted in multiple ways of creating the presentation tier, generally with tag libraries that differed from job to job and from developer to developer. The second thing that was missing in the servlet specification was a clear distinction between the presentation tier and the backend. A plethora of web frameworks tried to fill this gap; in particular, the Struts framework effectively realized a clean separation of the model (application logic that interacts with a database) from the view (HTML pages presented to the client) and the controller (instance that passes information between view and model). However, the limitation of these frameworks was that, even if they realized a complete modular abstraction, they still failed in that they always exposed the HttpServletRequest and HttpSession objects to their actions. Their actions, in turn, needed to accept the interface contracts such as ActionForm, ActionMapping, and so on. The JavaServer Faces technology that emerged on the stage a few years later pursued a different approach.
Unlike request-driven Model-View-Controller (MVC) web frameworks, JSF chose a component-based approach that ties the user interface component to a well-defined request processing lifecycle. This greatly simplifies the development of web applications. The JSF specification allows presentation components to be POJOs. This creates a cleaner separation from the servlet layer and makes it easier to do testing, by not requiring the POJOs to be dependent on the servlet classes. In the following sections, we will describe how to create a web layout for our application store using the JSF technology. For an exhaustive explanation of the JSF framework, we suggest you visit the JSF homepage at http://java.sun.com/javaee/javaserverfaces/.

Installing JSF on JBoss AS

JBoss AS already ships with the JSF libraries, so the good news is that you don't need to download or install them in the application server. There are different implementations of the JSF libraries. Earlier JBoss releases adopted the Apache MyFaces library. JBoss AS 4.2 and 5.x ship with the Common Development and Distribution License (CDDL) implementation (now called "Project Mojarra") of the JSF 1.2 specification that is available from the java.net open source community. Switching to another JSF implementation is possible anyway. All you have to do is package your JSF libraries with your web application and configure your web.xml to ignore the JBoss built-in implementation:

<context-param>
  <param-name>org.jboss.jbossfaces.WAR_BUNDLES_JSF_IMPL</param-name>
  <param-value>true</param-value>
</context-param>

We will start by creating a new JSF project. From the File menu, select New | Other | JBoss Tools Web | JSF | JSF Web project. The JSF applet wizard will display, requesting the Project Name, the JSF Environment, and the default starting Template. Choose AppStoreWeb as the project name, and check that the JSF Environment used is JSF 1.2. You can leave all other options at the defaults and click Finish.
Eclipse will now suggest that you switch to the Web Projects view that logically assembles all JSF components. (It seems that the current release of the plugin doesn't understand your choice, so you have to manually click on the Web Projects tab.) The key configuration file of a JSF application is faces-config.xml, contained in the Configuration folder. Here you declare all navigation rules of the application and the JSF managed beans. Managed beans are simple POJOs that provide the logic for initializing and controlling JSF components, and for managing data across page requests, user sessions, or the application as a whole. Adding JSF functionality also requires adding some information to your web.xml file so that all requests ending with a certain suffix are intercepted by the Faces Servlet. Let's have a look at the web.xml configuration file:

<?xml version="1.0"?>
<web-app version="2.5"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
                             http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd">
  <display-name>AppStoreWeb</display-name>
  <context-param>
    <param-name>javax.faces.STATE_SAVING_METHOD</param-name>
    <param-value>server</param-value>
  </context-param>
  <context-param> <!-- [1] -->
    <param-name>com.sun.faces.enableRestoreView11Compatibility</param-name>
    <param-value>true</param-value>
  </context-param>
  <listener>
    <listener-class>com.sun.faces.config.ConfigureListener</listener-class>
  </listener>
  <!-- Faces Servlet -->
  <servlet>
    <servlet-name>Faces Servlet</servlet-name>
    <servlet-class>javax.faces.webapp.FacesServlet</servlet-class>
    <load-on-startup>1</load-on-startup>
  </servlet>
  <!-- Faces Servlet Mapping -->
  <servlet-mapping>
    <servlet-name>Faces Servlet</servlet-name>
    <url-pattern>*.jsf</url-pattern>
  </servlet-mapping>
  <login-config>
    <auth-method>BASIC</auth-method>
  </login-config>
</web-app>

The context-param pointed out here [1] is not added by default when you create a JSF application.
However, it needs to be added, else you'll stumble into an annoying ViewExpiredException when your session expires (JSF 1.2).

Setting up navigation rules

In the first step, we will define the navigation rules for our AppStore. A minimalist approach would require a homepage that displays the orders, along with two additional pages for inserting new customers and new orders respectively. Let's add the following navigation rules to the faces-config.xml:

<faces-config>
  <navigation-rule>
    <from-view-id>/home.jsp</from-view-id> <!-- [1] -->
    <navigation-case>
      <from-outcome>newCustomer</from-outcome> <!-- [2] -->
      <to-view-id>/newCustomer.jsp</to-view-id>
    </navigation-case>
    <navigation-case>
      <from-outcome>newOrder</from-outcome> <!-- [3] -->
      <to-view-id>/newOrder.jsp</to-view-id>
    </navigation-case>
  </navigation-rule>
  <navigation-rule>
    <from-view-id></from-view-id> <!-- [4] -->
    <navigation-case>
      <from-outcome>home</from-outcome>
      <to-view-id>/home.jsp</to-view-id>
    </navigation-case>
  </navigation-rule>
</faces-config>

In a navigation rule, you can have one from-view-id, which is the (optional) starting page, and one or more landing pages that are tagged as to-view-id. The from-outcome determines the navigation flow. Think about this parameter as a Struts forward; that is, instead of embedding the landing page in the JSP/servlet, you simply declare a virtual path in your JSF beans. Therefore, our starting page will be home.jsp [1], which has two possible links: the newCustomer.jsp form [2] and the newOrder.jsp form [3]. At the bottom, there is a navigation rule that is valid across all pages [4]. Every page requesting the home outcome will be redirected to the homepage of the application. The above JSPs will be created in a minute, so don't worry if the Eclipse validator complains about the missing pages. This configuration can also be examined from the Diagram tab of your faces-config.xml. The next piece of code that we will add to the configuration is the JSF managed bean declaration.
You need to declare each bean here that will be referenced by JSF pages. Add the following code snippet at the top of your faces-config.xml (just before the navigation rules):

<managed-bean>
  <managed-bean-name>manager</managed-bean-name> <!-- [1] -->
  <managed-bean-class>com.packpub.web.StoreManagerJSFBean</managed-bean-class> <!-- [2] -->
  <managed-bean-scope>request</managed-bean-scope> <!-- [3] -->
</managed-bean>

The <managed-bean-name> [1] element will be used by your JSF pages to reference your beans. The <managed-bean-class> [2] is obviously the corresponding class. The managed beans can then be stored within the request, session, or application scope, depending on the value of the <managed-bean-scope> element [3].
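To show what such a managed bean might look like, here is a minimal sketch of a StoreManagerJSFBean written as a plain POJO. The property and action method names are assumptions made for illustration; only the outcome strings ("newCustomer", "newOrder", "home") correspond to the from-outcome values in the navigation rules defined earlier:

```java
// Hypothetical sketch of the managed bean declared in faces-config.xml.
// As a POJO it has no dependency on servlet or JSF classes, which keeps it testable.
public class StoreManagerJSFBean {
    private String customerName; // would be bound to an input field via a value expression

    public String getCustomerName() { return customerName; }
    public void setCustomerName(String customerName) { this.customerName = customerName; }

    // Action methods return logical outcomes; JSF matches them against
    // the from-outcome elements of the navigation rules.
    public String newCustomer() { return "newCustomer"; }
    public String newOrder()    { return "newOrder"; }
    public String home()        { return "home"; }
}
```

A command button such as <h:commandButton action="#{manager.newCustomer}" ... /> would then invoke newCustomer(), and the returned outcome drives navigation to /newCustomer.jsp.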
Integrating Spring Framework with Hibernate ORM Framework: Part 1

Packt
31 Dec 2009
Spring is a general-purpose framework that plays different roles in many areas of application architecture. One of these areas is persistence. Spring does not provide its own persistence framework. Instead, it provides an abstraction layer over JDBC, and a variety of O/R mapping frameworks, such as iBATIS SQL Maps, Hibernate, JDO, Apache OJB, and Oracle TopLink. This abstraction allows consistent, manageable data-access implementation. Spring's abstraction layer abstracts the application from the connection factory, the transaction API, and the exception hierarchies used by the underlying persistence technology. Application code always uses the Spring API to work with connection factories, utilizes Spring strategies for transaction management, and involves Spring's generic exception hierarchy to handle underlying exceptions. Spring sits between the application classes and the O/R mapping tool, undertakes transactions, and manages connection objects. It translates the underlying persistence exceptions thrown by Hibernate to meaningful, unchecked exceptions of type DataAccessException. Moreover, Spring provides IoC and AOP, which can be used in the persistence layer. Spring undertakes Hibernate's transactions and provides a more powerful, comprehensive approach to transaction management. The Data Access Object pattern Although you can obtain a Session object and connect to Hibernate anywhere in the application, it's recommended that all interactions with Hibernate be done only through distinct classes. Regarding this, there is a JEE design pattern, called the DAO pattern. According to the DAO pattern, all persistent operations should be performed via specific classes, technically called DAO classes. These classes are used exclusively for communicating with the data tier. 
The purpose of this pattern is to separate persistence-related code from the application's business logic, which makes for more manageable and maintainable code, letting you change the persistence strategy flexibly without changing the business rules or workflow logic. The DAO pattern states that we should define a DAO interface corresponding to each DAO class. This DAO interface outlines the structure of a DAO class, defines all of the persistence operations that the business layer needs, and (in Spring-based applications) allows us to apply IoC to decouple the business layer from the DAO class.

The Service Facade Pattern

In the implementation of the data access tier, the Service Facade Pattern is commonly used in addition to the DAO pattern. This pattern indicates using an intermediate object, called the service object, between all business tier objects and DAO objects. The service object assembles the DAO methods to be managed as a unit of work. Note that only one service class is created for all DAOs that are implemented in each use case. The service class uses instances of the DAO interfaces to interact with them. These instances are instantiated from the concrete DAO classes by the IoC container at runtime. Therefore, the service object is unaware of the actual DAO implementation details. Regardless of the persistence strategy your application uses (even if it uses direct JDBC), applying the DAO and Service Facade patterns to decouple application tiers is highly recommended.

Data tier implementation with Hibernate

Let's now see how the discussed patterns are applied to an application that directly uses Hibernate.
The following code shows a sample DAO interface:

package com.packtpub.springhibernate.ch13;

import java.util.Collection;

public interface StudentDao {
    public Student getStudent(long id);
    public Collection getAllStudents();
    public Collection getGraduatedStudents();
    public Collection findStudents(String lastName);
    public void saveStudent(Student std);
    public void removeStudent(Student std);
}

The following code shows a DAO class that implements this DAO interface:

package com.packtpub.springhibernate.ch13;

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;
import org.hibernate.HibernateException;
import org.hibernate.Query;

import java.util.Collection;

public class HibernateStudentDao implements StudentDao {

    SessionFactory sessionFactory;

    public Student getStudent(long id) {
        Student student = null;
        Session session = HibernateHelper.getSession();
        Transaction tx = null;
        try {
            tx = session.beginTransaction();
            student = (Student) session.get(Student.class, new Long(id));
            tx.commit();
            tx = null;
        } catch (HibernateException e) {
            if (tx != null)
                tx.rollback();
            throw e;
        } finally {
            session.close();
        }
        return student;
    }

    public Collection getAllStudents() {
        Collection allStudents = null;
        Session session = HibernateHelper.getSession();
        Transaction tx = null;
        try {
            tx = session.beginTransaction();
            Query query = session.createQuery(
                "from Student std order by std.lastName, std.firstName");
            allStudents = query.list();
            tx.commit();
            tx = null;
        } catch (HibernateException e) {
            if (tx != null)
                tx.rollback();
            throw e;
        } finally {
            session.close();
        }
        return allStudents;
    }

    public Collection getGraduatedStudents() {
        Collection graduatedStudents = null;
        Session session = HibernateHelper.getSession();
        Transaction tx = null;
        try {
            tx = session.beginTransaction();
            Query query = session.createQuery(
                "from Student std where std.status=1");
            graduatedStudents = query.list();
            tx.commit();
            tx = null;
        } catch (HibernateException e) {
            if (tx != null)
                tx.rollback();
            throw e;
        } finally {
            session.close();
        }
        return graduatedStudents;
    }

    public Collection findStudents(String lastName) {
        Collection students = null;
        Session session = HibernateHelper.getSession();
        Transaction tx = null;
        try {
            tx = session.beginTransaction();
            Query query = session.createQuery(
                "from Student std where std.lastName like ?");
            query.setString(0, lastName + "%"); // Hibernate positional parameters are zero-based
            students = query.list();
            tx.commit();
            tx = null;
        } catch (HibernateException e) {
            if (tx != null)
                tx.rollback();
            throw e;
        } finally {
            session.close();
        }
        return students;
    }

    public void saveStudent(Student std) {
        Session session = HibernateHelper.getSession();
        Transaction tx = null;
        try {
            tx = session.beginTransaction();
            session.saveOrUpdate(std);
            tx.commit();
            tx = null;
        } catch (HibernateException e) {
            if (tx != null)
                tx.rollback();
            throw e;
        } finally {
            session.close();
        }
    }

    public void removeStudent(Student std) {
        Session session = HibernateHelper.getSession();
        Transaction tx = null;
        try {
            tx = session.beginTransaction();
            session.delete(std);
            tx.commit();
            tx = null;
        } catch (HibernateException e) {
            if (tx != null)
                tx.rollback();
            throw e;
        } finally {
            session.close();
        }
    }

    public void setSessionFactory(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }
}

As you can see, all implemented methods follow the same routine. Each obtains a Session object first, starts a Transaction, performs a persistence operation, commits the transaction, rolls the transaction back if an exception occurs, and finally closes the Session object. Each method contains much boilerplate code that is very similar to the other methods. Although applying the DAO pattern to the persistence code leads to more manageable and maintainable code, the DAO classes still include much boilerplate code. Each DAO method must obtain a Session instance, start a transaction, perform the persistence operation, and commit the transaction. Additionally, each DAO method includes its own duplicated exception-handling implementation.
These are exactly the problems that motivate us to use Spring with Hibernate.

The Template Pattern: To clean up the code and provide more manageable code, Spring utilizes a pattern called the Template Pattern. In this pattern, a template object wraps all of the boilerplate repetitive code and delegates the persistence calls, which are the variable part of the functionality, into the template. In the Hibernate case, HibernateTemplate extracts all of the boilerplate code, such as obtaining a Session, performing the transaction, and handling exceptions.
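The template idea itself does not depend on Spring or Hibernate. The following self-contained sketch shows the mechanics in plain Java: a template method owns the begin/commit/rollback/close boilerplate and delegates only the variable part to a callback, much as HibernateTemplate does with its callback interface. All class names here are invented for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal illustration of the Template Pattern behind Spring's HibernateTemplate.
// FakeSession stands in for a Hibernate Session; the names are hypothetical.
class FakeSession {
    List<String> log = new ArrayList<>();
    void beginTransaction() { log.add("begin"); }
    void commit()           { log.add("commit"); }
    void rollback()         { log.add("rollback"); }
    void close()            { log.add("close"); }
}

interface SessionCallback<T> {
    T doInSession(FakeSession session);
}

class PersistenceTemplate {
    // All boilerplate lives here; callers supply only the persistence logic.
    public <T> T execute(FakeSession session, SessionCallback<T> callback) {
        session.beginTransaction();
        try {
            T result = callback.doInSession(session);
            session.commit();
            return result;
        } catch (RuntimeException e) {
            session.rollback();
            throw e;
        } finally {
            session.close();
        }
    }
}

public class TemplateDemo {
    public static void main(String[] args) {
        FakeSession session = new FakeSession();
        // The caller writes only the "query"; transaction handling is inherited.
        String student = new PersistenceTemplate().execute(session, s -> "Student#1");
        System.out.println(student + " " + session.log); // prints Student#1 [begin, commit, close]
    }
}
```

Compare this with the DAO class above: each of its six methods repeats exactly the begin/commit/rollback/close sequence that the template factors out once.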
Getting Started With Spring MVC - Developing the MVC components

Packt
31 Dec 2009
In the world of networked applications, thin clients (also known as web applications) are more in demand than thick clients. Due to this demand, every language provides frameworks that try to make web-application development simpler. The simplicity is not provided just through setting up the basic application structure or generating boilerplate code. These frameworks also try to provide simplicity through pluggability, that is, the components of different frameworks can be brought together without much difficulty. Among such frameworks, the Spring Framework is one of the most used. Its support for multiple data access frameworks/libraries and its lightweight IoC container make it suitable for scenarios where one would like to mix-and-match multiple frameworks, a different one for each layer. This aspect of the Spring Framework becomes especially relevant in the development of web applications, where the UI does not need to know which framework it is dealing with for business processing or data access. The component of the Spring Framework stack that caters to the web UI is Spring MVC. In this discussion, we will focus on the basics of Spring MVC. The first section will deal with the terms and terminologies related to Spring MVC and MVC. The second section will detail the steps for developing the components of a web application using Spring MVC. That is the agenda for this discussion.

Spring MVC

Spring MVC, as the name suggests, is a framework based on the Model (M), View (V), Controller (C) pattern. Currently there are more than seven well-known web-application frameworks that implement the MVC pattern. Then what are the features of Spring MVC that set it apart from other frameworks? The two main features are:

Pluggable View technology
Injection of services into controllers

The former provides a way to use different UI frameworks instead of Spring MVC's UI library, and the latter removes the need to develop a new way to access the functionality of the business layer.
Pluggable View technology

Various View technologies are available (including Tiles, Velocity, and others) with which Spring MVC can quite easily be integrated. In other words, JSP is not the only template engine supported. Nor is the pluggable feature limited to templating technologies: by using common configuration functionality, other frameworks such as JSF can be integrated with Spring MVC applications. Thus, it is possible to mix and match different View technologies by using Spring MVC.

Injection of Services into Controllers

This feature comes into the picture when the Spring Framework is used to implement the business layer. Using the IoC capabilities of the Spring Framework, business-layer services and/or objects can be injected into the Controller without explicitly setting up the call to the service or mirroring the business-layer objects in the controller. This helps reduce code duplication between the web UI/process layer and the business process layer.

The next important aspect of Spring MVC is its components. They are:

Model (M)
View (V)
Controller (C)

The Model deals with the data that the application has to present, the View contains the logic to present that data, and the Controller takes care of the flow of navigation and application logic. Following are the details.

Model

The Model is an object that holds the data to be displayed. It can be any Java object, from a simple POJO to any type of collection object. It can also be a combination of both: an instance of a POJO to hold the detailed data and a collection object to hold all the instances of the POJO, which, in reality, is the most commonly used Model in Spring MVC. The framework also has its own way to hold data: the Model object, an instance of org.springframework.ui.ModelMap. Internally, whichever collection class object is used, the framework maps it to the ModelMap class.

View

In MVC, it is the View that presents the data to the user.
Spring MVC, just as many other JEE frameworks, uses a combination of JSP and tag libraries to implement the View. Apart from JSP, many other kinds of View technologies, such as Tiles, Velocity, and Jasper Reports, can be plugged into the framework. The main class behind this pluggability is org.springframework.web.servlet.View. The View class achieves the plug-in functionality by presenting the View as a Logical View instead of an actual/physical View. The physical view corresponds to the page developed using any of the templating technologies; the Logical View corresponds to the name of the View to be used. The name is then mapped to the actual View in the configuration file. One important point to remember about how Spring MVC uses the Logical View is that the Logical View and the Model are treated as one entity, named Model And View, represented by the org.springframework.web.servlet.ModelAndView class.

Controller

The flow of the application and its navigation is directed by the controller. It also processes the user input and transforms it into the Model. In Spring MVC, controllers are developed either by extending the out-of-the-box Controller classes or by implementing the Controller interface. The following come under the former category:

SimpleFormController
AbstractController
AbstractCommandController
CancellableFormController
MultiActionController
ParameterizableViewController
ServletForwardingController
ServletWrappingController
UrlFilenameViewController

Of these, the most commonly used are AbstractController, AbstractCommandController, SimpleFormController, and CancellableFormController. That wraps up this section. Let us move on to the next section: the steps for developing an application using Spring MVC.
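To make the controller concept concrete, here is a minimal sketch of a controller built by extending AbstractController, one of the classes listed above. This is illustrative code against the classic Spring MVC API and requires the Spring and Servlet libraries to compile; GreetingService, GreetingController, and the view name "greeting" are our own placeholder names, not part of the framework:

```java
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.springframework.web.servlet.ModelAndView;
import org.springframework.web.servlet.mvc.AbstractController;

// Hypothetical business-layer service; in a real application this would be
// defined in the business layer and injected by the IoC container.
interface GreetingService {
    String greet();
}

public class GreetingController extends AbstractController {

    private GreetingService greetingService;

    // Setter used by the Spring container to inject the service; the
    // controller needs no lookup code of its own.
    public void setGreetingService(GreetingService greetingService) {
        this.greetingService = greetingService;
    }

    // Called by the framework for each request mapped to this controller.
    protected ModelAndView handleRequestInternal(HttpServletRequest request,
                                                 HttpServletResponse response) {
        // "greeting" is the Logical View name; a ViewResolver maps it to a
        // physical JSP/Velocity/Tiles page. The model entry "message" is
        // made available to the view for rendering.
        return new ModelAndView("greeting", "message", greetingService.greet());
    }
}
```

Note that the controller never touches the view technology directly: swapping JSP for Velocity changes only the resolver configuration, not this class, which is exactly the pluggability described above.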
Packt
31 Dec 2009
4 min read

Editing DataGrids with Popup Windows in Flex

To start with we'll create a DataGrid which will contain a first name and a last name. We will add a Panel container to place our DataGrid into. Here is what our initial app looks like:

<?xml version="1.0" encoding="utf-8"?> <mx:Application layout="absolute"> <mx:Panel title="Pop Up Window" > <mx:DataGrid width="100%" height="100%"> <mx:columns> <mx:DataGridColumn headerText="First Name"/> <mx:DataGridColumn headerText="Last Name"/> </mx:columns> </mx:DataGrid> </mx:Panel> </mx:Application>

You'll notice the common Application tag along with the Panel. Immediately inside the Panel is our all-important DataGrid. Note, if you are not familiar with the basics of Flex then stop here and head over to the Adobe site for an introduction. Although this tutorial is not complex, I won't be able to focus on the fundamentals. I've set the DataGrid width and height attributes to 100%. This will force it to expand to the size and width of the panel. You can make this application full screen by setting the same attributes on the Panel tag. Inside our DataGrid tag is the columns tag. Here we can describe what each column of our DataGrid will contain. In this case, one column for the first name and one column for the last name. Here is a first look at our app:

Adding Data

The ease of Flex has allowed us to create this simple user interface with little effort. Now we come to a point where we need to add data to the DataGrid. The easiest way to do this is to use the dataProvider attribute. We will add an ArrayCollection Object to the script portion of our application to hold all the names which will appear in our DataGrid.
The DataGrid and the accompanying ArrayCollection will look something like this:

<mx:DataGrid id="names" width="100%" height="100%" dataProvider="{namesDP}"> <mx:columns> <mx:DataGridColumn headerText="First Name" dataField="firstName"/> <mx:DataGridColumn headerText="Last Name" dataField="lastName"/> </mx:columns> </mx:DataGrid>

[Bindable] public var namesDP:ArrayCollection = new ArrayCollection ([ {firstName:'Keith',lastName:'Lee'}, {firstName:'Ira',lastName:'Glass'}, {firstName:'Christopher',lastName:'Rossin'}, {firstName:'Mary',lastName:'Little'}, {firstName:'Charlie',lastName:'Wagner'}, {firstName:'Cali',lastName:'Gonia'}, {firstName:'Molly',lastName:'Ivans'}, {firstName:'Amber',lastName:'Johnson'}]);

The dataProvider attribute binds the DataGrid to the namesDP ArrayCollection. The ArrayCollection constructor's argument is an array of objects. Each object has a firstName and lastName property. If you wanted to expand on this application, you could add more data to this array. You should also notice that the DataGridColumn tags have an addition: we've set the dataField equal to the corresponding property in the dataProvider. The dataField is the real key in populating the DataGrid. Here is what our populated DataGrid looks like:

We've now created the foundation of our application.

Activating the DoubleClick Event

By default, the DataGrid double-click event is turned off. This means that when a user double-clicks an entry nothing will happen. In order to tell the DataGrid to trigger the double-click event, we need to set the doubleClick and doubleClickEnabled attributes of the DataGrid tag. It looks like this:

<mx:DataGrid width="100%" height="100%" doubleClick="showPopUp(event)" doubleClickEnabled="true">

I've set the doubleClick attribute to a method called showPopUp. We've not written this method yet, but it will be responsible for displaying the PopUp window which will allow editing of the DataGrid data.
For now, let's add the Script tag and an empty showPopUp method: <mx:Script> <![CDATA[ import mx.events.ItemClickEvent; public function showPopUp(event:MouseEvent):void{ // show the popup } ]]> </mx:Script> The Script tag is a child of the Application tag.
Packt
31 Dec 2009
8 min read

Starting Up Tomcat 6: Part 2

Bootstrapping the embedded container

As we saw earlier, Bootstrap is simply a convenience class that is used to run the Embedded class, or rather to run Catalina, which subclasses Embedded. The Catalina class is intended to add the ability to process a server.xml file to its parent class. It even exposes a main() method, so you can invoke it directly with appropriate command-line arguments. Bootstrap uses its newly constructed serverLoader to load the Catalina class, which is then instantiated. It delegates the loading process to this Catalina instance's load() method. This method updates the catalina.base and catalina.home system properties to absolute references, verifies that the working directory is set appropriately, and initializes the naming system, which is Tomcat's implementation of the JNDI API. For now, all we need to note is that it indicates that JNDI is enabled by setting the catalina.useNaming system property to true, and prefixing the Context.URL_PKG_PREFIXES system property with the package org.apache.naming using a colon delimiter. The Context.URL_PKG_PREFIXES property indicates a list of fully qualified package prefixes for URL context factories. Setting org.apache.naming as the first entry makes it the first URL context factory implementation that will be located. For the java:comp/env Environment Naming Context (ENC), the actual class name for the URL context factory implementation is generated as org.apache.naming.java.javaURLContextFactory. If the Context.INITIAL_CONTEXT_FACTORY is currently not set for this environment, then this is set as the default INITIAL_CONTEXT_FACTORY to be used.

Bootstrapping the Tomcat component hierarchy

The configuration for a Tomcat instance is found in the conf/server.xml file. This file is now processed, converting each element found into a Java object. The net result at the end of this processing is a Java object tree that mirrors this configuration file.
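The JNDI bookkeeping just described (setting catalina.useNaming and prefixing Context.URL_PKG_PREFIXES) can be sketched in plain Java. This is a simplified stand-in for what Catalina's load() method does; the class and helper names here are ours:

```java
import javax.naming.Context;

class NamingBootstrapSketch {

    // Prepend Tomcat's naming package, colon-delimited, as Catalina does.
    static String prefixWithTomcatNaming(String existing) {
        if (existing == null || existing.isEmpty()) {
            return "org.apache.naming";
        }
        return "org.apache.naming:" + existing;
    }

    public static void main(String[] args) {
        System.setProperty("catalina.useNaming", "true");
        String merged = prefixWithTomcatNaming(
                System.getProperty(Context.URL_PKG_PREFIXES));
        System.setProperty(Context.URL_PKG_PREFIXES, merged);
        // For the java: URL scheme, JNDI derives the factory class name as
        // <prefix>.java.javaURLContextFactory, which resolves to
        // org.apache.naming.java.javaURLContextFactory when our prefix is first.
        System.out.println(System.getProperty(Context.URL_PKG_PREFIXES));
    }
}
```

Because org.apache.naming sits first in the list, Tomcat's factory wins the lookup race for the java: scheme.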
This conversion process is facilitated by the use of the Apache Commons Digester project (http://commons.apache.org/digester/), an open source Commons project that allows you to harness the power of a SAX parser while at the same time avoiding the complexity that comes with event-driven parsing.

Commons Digester

The Digester project was originally devised as a way of unmarshalling the struts-config.xml configuration file for Struts, but was moved out to a Commons project due to its general-purpose usefulness. The basic principle behind the Digester is very simple. It takes an XML document and a RuleSet document as inputs, and generates a graph of Java objects that represents the structure that is defined in the XML instance document. There are three key concepts that come into play when using the Digester: a pattern, a rule, and an object stack.

The pattern

As the digester parses the input XML instance document, it keeps track of the elements it visits. Each element is identified by its parent's name followed by a forward slash ('/') and then by its name. For instance, in the example document below, the root element is represented by the pattern rolodex. The two <contact> elements are represented by the pattern rolodex/contact, the <company> elements are represented by the pattern rolodex/contact/company, and so on.

<rolodex type="paperSales">
  <contact id="1">
    <firstname>Damodar</firstname>
    <lastname>Chetty</lastname>
    <company>Software Engineering Solutions, Inc.</company>
  </contact>
  <contact id="2">
    <firstname>John</firstname>
    <lastname>Smith</lastname>
    <company>Ingenuitix, Inc.</company>
  </contact>
</rolodex>

The rule

A rule specifies the action(s) that the Digester should take when a particular pattern is encountered.
The common rules you will encounter are:

Creational actions (create an instance of a given class to represent this XML element)
Property setting actions (call setters on the Java object representing this XML element, passing in the value of either a child element or an attribute)
Method invocation actions (call the specified method on the Java object representing this element, passing in the specified parameters)
Object linking actions (set an object reference by calling a setter on one object while passing in the other as an argument)

The object stack

As objects are created, using the creational actions discussed above, Digester pushes them to the top of its internal stack. All actions typically affect the object at the top of the stack. A creational action will automatically pop the element on the top of the stack when the end tag for the pattern is detected.

Using the Digester

The typical sequence of actions is to create an object using a creational action, set its properties using a property setting action, and once the object is fully formed, to pop it off the top of the stack by linking it to its parent, which is usually just below it on the stack. Once the child has been popped off, the parent is once again at the top of the stack. This repeats as additional child objects are created, initialized, linked, and popped. Once all the children are processed and the parent object is fully initialized, the parent itself is popped off the stack, and we are done. You instantiate an org.apache.commons.digester.Digester by invoking the createDigester() method of org.apache.commons.digester.xmlrules.DigesterLoader and passing it the URL for the file containing the patterns and rules. Patterns and rules can also be specified programmatically by calling methods directly on the digester instance.
However, defining them in a separate XML RuleSet instance document is much more modular, as it extracts rule configuration out of program code, making the code more readable and maintainable. Then, you invoke the parse() method of a Digester instance and pass it the actual XML instance document. The digester uses its configured rules to convert elements in the instance document into Java objects.

The server.xml Digester

The Catalina instance creates a Digester to process the server.xml file. Every element in this file is converted into an instance of the appropriate class, its properties are set based on configuration information in this file, and connections between the objects are set up, until what you are left with is a functioning framework of classes. This ability to configure the structure of cooperating classes using a declarative approach makes it easy to customize a Tomcat installation with very little effort. The createStartDigester() method in Catalina does the work of instantiating a new Digester and registering patterns and rules with it. The Catalina instance is then pushed to the top of the Digester stack, making it the root ancestor for all the elements parsed from the server.xml document. The rules can be described as follows:

Server
  Creational action: Instantiates an org.apache.catalina.core.StandardServer.
  Set properties action: Copies attribute values over to the topmost object of the stack using mutator methods that are named similarly to the attribute.
  Object linking action: Invokes setServer() to set this newly minted Server instance on the Catalina instance found on the stack.
Server/GlobalNamingResources
  Creational action: Instantiates an org.apache.catalina.deploy.NamingResources.
  Set properties action: Copies attribute values from this element over to the topmost object on the stack.
  Object linking action: Sets this newly instantiated object on the StandardServer instance at the top of the stack, by invoking its setGlobalNamingResources() method.

Server/Listener
  Creational action: Instantiates the class specified by the fully qualified class name provided as an attribute.
  Set properties action: Copies attributes from this element.
  Object linking action: Sets this instance on the StandardServer instance at the top of the stack, by invoking its addLifecycleListener() method with this new instance.

Server/Service
  Creational action: Instantiates an org.apache.catalina.core.StandardService.
  Set properties action: Copies attributes from this element.
  Object linking action: Invokes addService() on the StandardServer instance at the top of the stack, passing in this newly minted instance.

Server/Service/Listener
  Creational action: Instantiates the class specified by the fully qualified class name provided as the className attribute.
  Set properties action: Copies attributes from this element.
  Object linking action: Invokes addLifecycleListener() on the StandardService instance at the top of the stack, passing in this listener instance.

Server/Service/Executor
  Creational action: Instantiates the class org.apache.catalina.core.StandardThreadExecutor.
  Set properties action: Copies attributes for this element.
  Object linking action: Invokes addExecutor() with this instance, on the StandardService instance at the top of the stack.

Server/Service/Connector
  Creational action: Instantiates the class org.apache.catalina.startup.ConnectorCreateRule.
  Set properties action: Copies all attributes for this element except for the executor property.
  Object linking action: Invokes addConnector(), passing in this instance, on the StandardService instance at the top of the stack.
Server/Service/Connector/Listener
  Creational action: Instantiates the class specified by the fully qualified class name provided as the className attribute.
  Set properties action: Copies attributes from this element.
  Object linking action: Invokes addLifecycleListener(), passing in this instance, on the Connector instance at the top of the stack.

Server/Service/Engine
  Sets the Engine instance's parent class loader to the serverLoader.
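The same machinery can be applied to the earlier rolodex example. A digester-rules file for that document might look like the sketch below; the rule element names follow the commons-digester xmlrules format, while com.example.Contact, com.example.Rolodex, and addContact() are our own illustrative names, not part of any library:

```xml
<digester-rules>
  <pattern value="rolodex">
    <!-- create the root Rolodex and copy its type attribute -->
    <object-create-rule classname="com.example.Rolodex"/>
    <set-properties-rule/>
    <pattern value="contact">
      <!-- create a Contact and push it onto the stack -->
      <object-create-rule classname="com.example.Contact"/>
      <!-- copy the id attribute onto the new Contact -->
      <set-properties-rule/>
      <!-- set child element text via setFirstname(), setLastname(), setCompany() -->
      <bean-property-setter-rule pattern="firstname"/>
      <bean-property-setter-rule pattern="lastname"/>
      <bean-property-setter-rule pattern="company"/>
      <!-- pop the Contact by linking it to its parent Rolodex -->
      <set-next-rule methodname="addContact"/>
    </pattern>
  </pattern>
</digester-rules>
```

This is exactly the create/set/link/pop sequence described above, expressed declaratively; Catalina's createStartDigester() registers the equivalent rules for server.xml programmatically.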
Packt
30 Dec 2009
6 min read

An Introduction to JSF: Part 1

While the main focus of this article is learning how to use JSF UI components, and not to cover the JSF framework in complete detail, a basic understanding of fundamental JSF concepts is required before we can proceed. Therefore, by way of introduction, let's look at a few of the building blocks of JSF applications: the Model-View-Controller architecture, the JSF request processing lifecycle, managed beans, EL expressions, UI components, converters, validators, and internationalization (I18N).

The Model-View-Controller architecture

Like many other web frameworks, JSF is based on the Model-View-Controller (MVC) architecture. The MVC pattern promotes the idea of "separation of concerns", or the decoupling of the presentation, business, and data access tiers of an application. The Model in MVC represents "state" in the application. This includes the state of user interface components (for example: the selection state of a radio button group, the enabled state of a button, and so on) as well as the application's data (the customers, products, invoices, orders, and so on). In a JSF application, the Model is typically implemented using Plain Old Java Objects (POJOs) based on the JavaBeans API. These classes are also described as the "domain model" of the application, and act as Data Transfer Objects (DTOs) to transport data between the various tiers of the application. JSF enables direct data binding between user interface components and domain model objects using the Expression Language (EL), greatly simplifying data transfer between the View and the Model in a Java web application. The View in MVC represents the user interface of the application. The View is responsible for rendering data to the user, and for providing user interface components such as labels, text fields, buttons, radios, and checkboxes that support user interaction.
As users interact with components in the user interface, events are fired by these components and delivered to Controller objects by the MVC framework. In this respect, JSF has much in common with a desktop GUI toolkit such as Swing or AWT. We can think of JSF as a GUI toolkit for building web applications. JSF components are organized in the user interface declaratively using UI component tags in a JSF view (typically a JSP or Facelets page). The Controller in MVC represents an object that responds to user interface events and queries or modifies the Model. When a JSF page is displayed in the browser, the UI components declared in the markup are rendered as HTML controls. The JSF markup supports the JSF Expression Language (EL), a scripting language that enables UI components to bind to managed beans for data transfer and event handling. We use value expressions such as #{backingBean.name} to connect UI components to managed bean properties for data binding, and we use method expressions such as #{backingBean.sayHello} to register an event handler (a managed bean method with a specific signature) on a UI component. In a JSF application, the entity classes in our domain model act as the Model in MVC terms, a JSF page provides the View, and managed beans act as Controller objects. The JSF EL provides the scripting language necessary to tie the Model, View, and Controller concepts together. There is an important variation of the Controller concept that we should discuss before moving forward. Like the Struts framework, JSF implements what is known as the "Front Controller" pattern, where a single class behaves like the primary request handler or event dispatcher for the entire system. In the Struts framework, the ActionServlet performs the role of the Front Controller, handling all incoming requests and delegating request processing to application-defined Action classes.
In JSF, the FacesServlet implements the Front Controller pattern, receiving all incoming HTTP requests and processing them in a sophisticated chain of events known as the JSF request processing lifecycle.

The JSF Request Processing Lifecycle

In order to understand the interplay between JSF components, converters, validators, and managed beans, let's take a moment to discuss the JSF request processing lifecycle. The JSF lifecycle includes six phases:

Restore/create view – The UI component tree for the current view is restored from a previous request, or it is constructed for the first time.
Apply request values – The incoming form parameter values are stored in server-side UI component objects.
Conversion/Validation – The form data is converted from text to the expected Java data types and validated accordingly (for example: required fields, length and range checks, valid dates, and so on).
Update model values – If conversion and validation was successful, the data is now stored in our application's domain model.
Invoke application – Any event handler methods in our managed beans that were registered with UI components in the view are executed.
Render response – The current view is re-rendered in the browser, or another view is displayed instead (depending on the navigation rules for our application).

To summarize the JSF request handling process, the FacesServlet (the Front Controller) first handles an incoming request sent by the browser for a particular JSF page by attempting to restore or create for the first time the server-side UI component tree representing the logical structure of the current View (Phase 1). Incoming form data sent by the browser is stored in the components such as text fields, radio buttons, checkboxes, and so on, in the UI component tree (Phase 2). The data is then converted from Strings to other Java types and is validated using both standard and custom converters and validators (Phase 3).
Once the data is converted and validated successfully, it is stored in the application’s Model by calling the setter methods of any managed beans associated with the View (Phase 4). After the data is stored in the Model, the action method (if any) associated with the UI component that submitted the form is called, along with any other event listener methods that were registered with components in the form (Phase 5). At this point, the application’s logic is invoked and the request may be handled in an application-defined way. Once the Invoke Application phase is complete, the JSF application sends a response back to the web browser, possibly displaying the same view or perhaps another view entirely (Phase 6). The renderers associated with the UI components in the view are invoked and the logical structure of the view is transformed into a particular presentation format or markup language. Most commonly, JSF views are rendered as HTML using the framework’s default RenderKit, but JSF does not require pages to be rendered only in HTML. In fact, JSF was designed to be a presentation technology neutral framework, meaning that views can be rendered according to the capabilities of different client devices. For example, we can render our pages in HTML for web browsers and in WML for PDAs and wireless devices.
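The Front Controller arrangement described above is wired up in the web application's deployment descriptor. A typical web.xml fragment mapping the FacesServlet looks like this (the *.jsf extension mapping is a common convention, not a requirement; some applications map /faces/* instead):

```xml
<servlet>
  <servlet-name>Faces Servlet</servlet-name>
  <servlet-class>javax.faces.webapp.FacesServlet</servlet-class>
  <load-on-startup>1</load-on-startup>
</servlet>
<servlet-mapping>
  <servlet-name>Faces Servlet</servlet-name>
  <url-pattern>*.jsf</url-pattern>
</servlet-mapping>
```

With this mapping in place, every request ending in .jsf is routed through the FacesServlet and therefore through the six lifecycle phases before a response is rendered.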
Packt
30 Dec 2009
7 min read

An Introduction to JSF: Part 2

Standard JSF Validators

The JSF Core tag library also includes a number of built-in validators. These validator tags can also be registered with UI components to verify that required fields are completed by the user, that numeric values are within an acceptable range, and that text values are a certain length. For more specific validation scenarios, we can also write our own custom validators. User input validation happens immediately after data conversion during the JSF request lifecycle.

Validating the Length of a Text Value

JSF includes a built-in validator that can be used to ensure a text value entered by the user is between an expected minimum and maximum length. The following example demonstrates using the <f:validateLength> tag's minimum and maximum attributes to check that the password entered by the user in the password field is exactly 8 characters long. It also demonstrates how to use the label attribute of certain JSF input components (introduced in JSF 1.2) to render a localizable validation message.

JSF Validation Messages

The JSF framework includes predefined validation messages for different input components and validation scenarios. These messages are defined in a message bundle (properties file) included in the JSF implementation JAR file. Many of these messages are parameterized, meaning that since JSF 1.2 a UI component's label can be inserted into these messages to provide more detailed information to the user. The default JSF validation messages can be overridden by specifying the same message bundle keys in the application's message bundle. We will see an example of customizing JSF validation messages below. Notice that we also set the maxlength attribute of the <h:inputSecret> tag to limit the input to 8 characters. This does not, however, ensure that the user enters a minimum of 8 characters. Therefore, the <f:validateLength> validator tag is required.
<f:view> <h:form> <h:outputLabel value="Please enter a password (must be 8 characters): " /> <h:inputSecret maxlength="8" id="password" value="#{backingBean.password}" label="Password"> <f:validateLength minimum="8" maximum="8" /> </h:inputSecret> <h:commandButton value="Submit" /><br /> <h:message for="password" errorStyle="color:red" /> </h:form> </f:view>

Validating a Required Field

The following example demonstrates how to use the built-in JSF validators to ensure that a text field is filled out before the form is processed:

<f:view> <h:form> <h:outputLabel value="Please enter a number: " /> <h:inputText id="number" label="Number" value="#{backingBean.price}" required="#{true}" /> <h:commandButton value="Submit" /><br /> <h:message for="number" errorClass="error" /> </h:form> </f:view>

The following screenshot demonstrates the result of submitting a JSF form containing a required field that was not filled out. We render the validation error message using an <h:message> tag with a for attribute set to the ID of the text field component. We have also overridden the default JSF validation message for required fields by specifying the following message keys in our message bundle. We will discuss message bundles and internationalization (I18N) shortly.

javax.faces.component.UIInput.REQUIRED=Required field.
javax.faces.component.UIInput.REQUIRED_detail=Please fill in this field.

Validating a numeric range

The JSF Core <f:validateLongRange> and <f:validateDoubleRange> tags can be used to validate numeric user input. The following example demonstrates how to use the <f:validateLongRange> tag to ensure an integer value entered by the user is between 1 and 10.
<f:view> <h:form> <h:outputLabel value="Please enter a number between 1 and 10: " /> <h:inputText id="number" value="#{backingBean.number}" label="Number"> <f:validateLongRange minimum="1" maximum="10" /> </h:inputText> <h:commandButton value="Submit" /><br /> <h:message for="number" errorStyle="color:red" /> <h:outputText value="You entered: #{backingBean.number}" rendered="#{backingBean.number ne null}" /> </h:form> </f:view>

The following screenshot shows the result of entering an invalid value into the text field. Notice that the value of the text field's label attribute is interpolated with the standard JSF validation message. Validating a floating point number is similar to validating an integer. The following example demonstrates how to use the <f:validateDoubleRange> tag to ensure that a floating point number is between 0.0 and 1.0.

<f:view> <h:form> <h:outputLabel value="Please enter a floating point number between 0 and 1: " /> <h:inputText id="number" value="#{backingBean.percentage}" label="Percent"> <f:validateDoubleRange minimum="0.0" maximum="1.0" /> </h:inputText> <h:commandButton value="Submit" /><br /> <h:message for="number" errorStyle="color:red" /> <h:outputText value="You entered: " rendered="#{backingBean.percentage ne null}" /> <h:outputText value="#{backingBean.percentage}" rendered="#{backingBean.percentage ne null}" > <f:convertNumber type="percent" maxFractionDigits="2" /> </h:outputText> </h:form> </f:view>

Registering a custom validator

JSF also supports defining custom validation classes to provide more specialized user input validation. To create a custom validator, first we need to implement the javax.faces.validator.Validator interface. Implementing a custom validator in JSF is straightforward. In this example, we check if a date supplied by the user represents a valid birthdate. As most humans do not live more than 120 years, we reject any date that is more than 120 years ago.
The important thing to note from this code example is not the validation logic itself, but what to do when the validation fails. Note that we construct a FacesMessage object with an error message and then throw a ValidatorException.

package chapter1.validator; import java.util.Calendar; import java.util.Date; import javax.faces.application.FacesMessage; import javax.faces.component.UIComponent; import javax.faces.context.FacesContext; import javax.faces.validator.Validator; import javax.faces.validator.ValidatorException; public class CustomDateValidator implements Validator { public void validate(FacesContext context, UIComponent component, Object object) throws ValidatorException { if (object instanceof Date) { Date date = (Date) object; Calendar calendar = Calendar.getInstance(); calendar.roll(Calendar.YEAR, -120); if (date.before(calendar.getTime())) { FacesMessage msg = new FacesMessage(); msg.setSummary("Invalid birthdate: " + date); msg.setDetail("The date entered is more than 120 years ago."); msg.setSeverity(FacesMessage.SEVERITY_ERROR); throw new ValidatorException(msg); } } } }

We have to declare our custom validators in faces-config.xml as follows, giving the validator an ID of customDateValidator:

<validator> <description>This birthdate validator checks a date to make sure it is within the last 120 years.</description> <display-name>Custom Date Validator</display-name> <validator-id>customDateValidator</validator-id> <validator-class> chapter1.validator.CustomDateValidator </validator-class> </validator>

Next, we would register our custom validator on a JSF UI component using the <f:validator> tag. This tag has a validatorId attribute that expects the ID of a custom validator declared in faces-config.xml. Notice in the following example that we are also registering the standard JSF <f:convertDateTime> converter on the same component.
This is to ensure that the value entered by the user is first converted to a java.util.Date object before it is passed to our custom validator.

<h:inputText id="name" value="#{backingBean.date}">
 <f:convertDateTime type="date" pattern="M/d/yyyy" />
 <f:validator validatorId="customDateValidator" />
</h:inputText>

Many JSF UI component tags have both a converter and a validator attribute that accept EL method expressions. These attributes provide another way to register custom converters and validators implemented in managed beans on UI components.
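The essence of the custom validator above, stripped of the JSF plumbing, is a single date comparison. The following self-contained sketch extracts that check into a plain helper so it can be exercised outside a JSF container; the class and method names here are ours, not from the article.

```java
import java.util.Calendar;
import java.util.Date;

// The date check performed by CustomDateValidator, as a plain helper.
class BirthdateCheck {

    // Returns true if the date falls within the last 120 years.
    public static boolean isPlausibleBirthdate(Date date) {
        Calendar cutoff = Calendar.getInstance();
        cutoff.add(Calendar.YEAR, -120); // 120 years before "now"
        return !date.before(cutoff.getTime());
    }
}
```

Inside the real validator, a false result is where the FacesMessage would be built and the ValidatorException thrown.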

Packt
29 Dec 2009
6 min read

An Introduction to Hibernate and Spring: Part 2

Object relational mapping

As the previous discussion shows, we are looking for a solution that enables applications to work with the object representation of the data in database tables, rather than dealing directly with that data. This approach isolates the business logic from any relational issues that might arise in the persistence layer. The strategy to carry out this isolation is generally called object/relational mapping (O/R Mapping, or simply ORM).

A broad range of ORM solutions have been developed. At the basic level, each ORM framework maps entity objects to JDBC statement parameters when the objects are persisted, and maps the JDBC query results back to the object representation when they are retrieved. Developers typically implement this mapping by hand when they use pure JDBC. Furthermore, ORM frameworks often provide more sophisticated object mappings, such as the mapping of inheritance hierarchies and object associations, lazy loading, and caching of persistent objects. Caching enables ORM frameworks to hold repeatedly fetched data in memory: instead of being fetched from the database on subsequent requests, which would cause inefficiency and delayed responses, the objects are returned to the application from memory. Lazy loading, another great feature of ORM frameworks, allows an object to be loaded without initializing its associated objects until those objects are accessed.

ORM frameworks usually use mapping definitions, such as metadata, XML files, or Java annotations, to determine how each class and its persistent fields should be mapped onto database tables and columns. These frameworks are usually configured declaratively, which allows the production of more flexible code. Many ORM solutions provide an object query language, which allows querying the persistent objects in an object-oriented form, rather than working directly with tables and columns through SQL. This behavior allows the application to be more isolated from the database properties.
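The lazy loading just described is usually implemented with proxies: the framework hands the application a placeholder that only performs the database fetch the first time it is accessed. The following hand-rolled sketch shows the underlying idea only; it is not how Hibernate actually generates its proxies.

```java
import java.util.function.Supplier;

// A tiny lazy-loading proxy: the expensive load runs only on first access.
class Lazy<T> {
    private final Supplier<T> loader; // stands in for a database fetch
    private T value;
    private boolean loaded;

    public Lazy(Supplier<T> loader) { this.loader = loader; }

    public boolean isLoaded() { return loaded; }

    public T get() {
        if (!loaded) {           // first access triggers the "query"
            value = loader.get();
            loaded = true;
        }
        return value;
    }
}
```

Real ORM frameworks generate such proxies for associated objects automatically (typically via bytecode manipulation), so a parent object can be loaded without touching the tables behind its associations.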
Hibernate as an O/R Mapping solution

For a long time, Hibernate has been the most popular persistence framework in the Java community. Hibernate aims to overcome the already mentioned impedance mismatch between object-oriented applications and relational databases. With Hibernate, we can treat the database as an object-oriented store, leaving the mapping between the object-oriented and relational environments to the framework. Hibernate is a mediator that connects the object-oriented environment to the relational environment. It provides persistence services for an application by performing all of the required operations in the communication between the object-oriented and relational environments. Storing, updating, removing, and loading can be done regardless of the objects' persistent form. In addition, Hibernate increases the application's effectiveness and performance, makes the code less verbose, and allows the code to be more focused on business rules than on persistence logic. The following screenshot depicts Hibernate's role in persistence:

Hibernate fully supports object orientation, meaning all aspects of objects, such as association and inheritance, are properly persisted. Hibernate can also persist object navigation, that is, how an object is navigable through its associated objects. It caches data that is fetched repeatedly and provides lazy loading, which notably enhances database performance. As you will see, Hibernate provides caches at two levels: a built-in first-level cache, and pluggable second-level cache strategies. The first-level cache is a required property for any ORM to preserve object consistency. It guarantees that the application always works with consistent objects. This stems from the fact that many threads in the application use the ORM to persist objects which might potentially be associated with the same table rows in the database.
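The first-level cache described above behaves essentially like an identity map: within one unit of work, looking up the same row twice must yield the very same object, so two parts of the application can never hold divergent copies of one row. A minimal sketch of that idea (class names are ours, and this glosses over everything else a real Hibernate session does):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Sketch of a session-scoped identity map: the first lookup of an id
// "loads" the entity; every later lookup returns the same instance.
class SessionCache<T> {
    private final Map<Long, T> cache = new HashMap<>();
    private final Function<Long, T> loader; // stands in for a SELECT by id

    public SessionCache(Function<Long, T> loader) { this.loader = loader; }

    public T get(Long id) {
        return cache.computeIfAbsent(id, loader);
    }
}
```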
The following screenshot depicts the role of a cache when using Hibernate:

Hibernate provides its own query language, which is Hibernate Query Language (HQL). At runtime, HQL expressions are transformed to their corresponding SQL statements, based on the database used. Because databases may use different versions of SQL and may expose different features, Hibernate presents a new concept, called an SQL dialect, to distinguish how databases differ. Furthermore, Hibernate allows SQL expressions to be used either declaratively or programmatically, which is useful in specific situations when Hibernate does not satisfy application persistence requirements. Hibernate keeps track of object changes through snapshot comparisons to prevent unnecessary updating.

Other O/R Mapping solutions

Although Hibernate is the most popular persistence framework, many other frameworks do exist. Some of these are explained as follows:

Enterprise JavaBeans (EJB): It is a standard J2EE (Java 2 Enterprise Edition) technology that defines a different type of persistence by presenting entity beans. EJB may be preferred for architectures that rely mostly on declarative middleware services provided by the application server, such as transactions. However, due to its complexity, nontransparent persistence, and need for a container (all of which make it difficult to implement, test, and maintain), EJB is less often used than other persistence frameworks.

iBatis SQL Map: It is a result set–mapping framework which works at the SQL level, allowing SQL string definitions with parameter placeholders in XML files. At runtime, the placeholders are filled with runtime values, either from simple parameter objects, JavaBeans properties, or a parameter map. To their advantage, SQL maps allow SQL to be fully customized for a specific database. To their disadvantage, however, these maps do not provide an abstraction from the specific features of the target database.
Java Data Objects (JDO): It is a specification for general object persistence in any kind of data store, including relational databases and object-oriented databases. Most JDO implementations support using metadata mapping definitions. JDO provides its own query language, JDOQL, and its own strategy for change detection.

TopLink: It provides a visual mapping editor (Mapping Workbench) and offers a particularly wide range of object-relational mappings, including a complete set of direct and relational mappings, object-to-XML mappings, and JAXB (Java API for XML Binding) support. TopLink provides a rich query framework that supports an object-oriented expression framework, EJB QL, SQL, and stored procedures. It can be used in either a JSE or a JEE environment.

Hibernate's designers have borrowed many concepts and useful features from these ancestors.

Hibernate versus other frameworks

Unlike the frameworks just mentioned, Hibernate is easy to learn, simple to use, comprehensive, and (unlike EJB) does not need an application server. Hibernate is well documented, and many resources are available for it. Downloaded more than three million times, Hibernate is used in many applications around the world. To use Hibernate, you need only J2SE 1.2 or later, and it can be used in stand-alone or distributed applications.

The current version of Hibernate is 3, but the usage and configuration of this version are very similar to version 2. Most of the changes in Hibernate 3 are compatible with Hibernate 2. Hibernate solves many of the problems of mapping objects to a relational environment, isolating the application from getting involved in many persistence issues. Keep in mind that Hibernate is not a replacement for JDBC. Rather, it can be thought of as a tool that connects to the database through JDBC and presents an object-oriented, application-level view of the database.
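Earlier we noted that Hibernate detects object changes through snapshot comparisons. The mechanism can be pictured with a stripped-down sketch: record a copy of an object's state when it is loaded, compare the current state against that snapshot at flush time, and issue an UPDATE only when something changed. Real ORMs do this per mapped field; here the "state" is collapsed into a single string, and all names are ours.

```java
import java.util.HashMap;
import java.util.Map;

// Dirty checking by snapshot: record state at load time, compare at flush.
class DirtyChecker {
    private final Map<Object, String> snapshots = new HashMap<>();

    public void recordSnapshot(Object entity, String state) {
        snapshots.put(entity, state);
    }

    // true => an UPDATE would be issued for this entity at flush time
    public boolean isDirty(Object entity, String currentState) {
        return !currentState.equals(snapshots.get(entity));
    }
}
```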
Packt
29 Dec 2009
4 min read

An Introduction to Hibernate and Spring: Part 1

This article by Ahmad Seddighi introduces Spring and Hibernate, explaining what persistence is, why it is important, and how it is implemented in Java applications. It provides a theoretical discussion of Hibernate and how Hibernate solves problems related to persistence. Finally, we take a look at Spring and the role of Spring in persistence.

Hibernate and Spring are open-source Java frameworks that simplify developing Java/JEE applications, from simple stand-alone applications running on a single JVM, to complex enterprise applications running on full-blown application servers. Hibernate and Spring allow developers to produce scalable, reliable, and effective code. Both frameworks support declarative configuration and work with a POJO (Plain Old Java Object) programming model (discussed later in this article), minimizing the dependence of application code on the frameworks, and making development more productive and portable.

Although the aims of these frameworks partially overlap, for the most part, each is used for a different purpose. The Hibernate framework aims to solve the problems of managing data in Java: those problems which are not fully solved by the Java persistence API, JDBC (Java Database Connectivity), persistence providers, DBMS (Database Management Systems), and their mediator language, SQL (Structured Query Language). In contrast, Spring is a multitier framework that is not dedicated to a particular area of application architecture. However, Spring does not provide its own solution for issues such as persistence, for which there are already good solutions. Rather, Spring unifies preexisting solutions under its consistent API and makes them easier to use. As mentioned, one of these areas is persistence. Spring can be integrated with a persistence solution, such as Hibernate, to provide an abstraction layer over the persistence technology, and produce more portable, manageable, and effective code.
Furthermore, Spring provides other services spread over the application architecture, such as inversion of control and aspect-oriented programming (explained later in this article), decoupling the application's components and modularizing common behaviors.

This article looks at the motivation and goals for Hibernate and Spring. The article begins with an explanation of why Hibernate is needed, where it can be used, and what it can do. We'll take a quick look at Hibernate's alternatives, exploring their advantages and disadvantages. I'll outline the valuable features that Hibernate offers and explain how it can solve the problems of the traditional approach to Java persistence. The discussion continues with Spring. I'll explain what Spring is, what services it offers, and how it can help to develop a high-quality data-access layer with Hibernate.

Persistence management in Java

Persistence has long been a challenge in the enterprise community. Many persistence solutions, from primitive file-based approaches to modern object-oriented databases, have been presented. For any of these approaches, the goal is to provide reliable, efficient, flexible, and scalable persistence. Among these competing solutions, relational databases (because of certain advantages) have been most widely accepted in the IT world. Today, almost all enterprise applications use relational databases.

A relational database is an application that provides the persistence service. It provides many persistence features, such as indexing data to provide speedy searches; solves the relevant problems, such as protecting data from unauthorized access; and handles many complications, such as preserving relationships among data. Creating, modifying, and accessing relational databases is fairly simple. All such databases present data in two-dimensional tables and support SQL, which is relatively easy to learn and understand. Moreover, they provide other services, such as transactions and replication.
These advantages are enough to ensure the popularity of relational databases. To provide support for relational databases in Java, the JDBC API was developed. JDBC allows Java applications to connect to relational databases, express their persistence purpose as SQL expressions, and transmit data to and from databases. The following screenshot shows how this works:

Using this API, SQL statements can be passed to the database, and the results can be returned to the application, all through a driver.

The mismatch problem

JDBC handles many persistence issues and problems in communicating with relational databases. It also provides the needed functionality for this purpose. However, there remains an unsolved problem in Java applications: Java applications are essentially object-oriented programs, whereas relational databases store data in a relational form. While applications use object-oriented forms of data, databases represent data in two-dimensional table forms. This situation leads to the so-called object-relational paradigm mismatch, which (as we will see later) causes many problems in communication between object-oriented and relational environments. For many reasons, including ease of understanding, simplicity of use, efficiency, robustness, and even popularity, we may not discard relational databases. However, the mismatch cannot be eliminated in an effortless and straightforward manner.
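One concrete face of the mismatch: a query returns flat rows, and with plain JDBC the application must hand-assemble objects (and their associations) from those rows. The sketch below fakes the row as a Map so it stays self-contained; with real JDBC the values would come from a ResultSet, and the entity and column names here are illustrative only.

```java
import java.util.Map;

// Hand-mapping one flat row to an object -- the boilerplate ORM removes.
class StudentRowMapper {

    static class Student {
        final long id;
        final String name;
        Student(long id, String name) { this.id = id; this.name = name; }
    }

    // With JDBC this would read rs.getLong("id"), rs.getString("name").
    static Student map(Map<String, Object> row) {
        return new Student((Long) row.get("id"), (String) row.get("name"));
    }
}
```

Multiply this by every table, association, and inheritance hierarchy in an application and the appeal of letting a framework generate the mapping becomes clear.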

Packt
29 Dec 2009
5 min read

Integrating Spring Framework with Hibernate ORM Framework: Part 2

Configuring Hibernate in a Spring context

Spring provides the LocalSessionFactoryBean class as a factory for a SessionFactory object. The LocalSessionFactoryBean object is configured as a bean inside the IoC container, with either a local JDBC DataSource or a shared DataSource from JNDI. The local JDBC DataSource can be configured in turn as an object of org.apache.commons.dbcp.BasicDataSource in the Spring context:

<bean id="dataSource"
      class="org.apache.commons.dbcp.BasicDataSource">
  <property name="driverClassName">
    <value>org.hsqldb.jdbcDriver</value>
  </property>
  <property name="url">
    <value>jdbc:hsqldb:hsql://localhost/hiberdb</value>
  </property>
  <property name="username">
    <value>sa</value>
  </property>
  <property name="password">
    <value></value>
  </property>
</bean>

In this case, the org.apache.commons.dbcp.BasicDataSource (the Jakarta Commons Database Connection Pool) must be in the application classpath. Similarly, a shared DataSource can be configured as an object of org.springframework.jndi.JndiObjectFactoryBean. This is the recommended way, which is used when the connection pool is managed by the application server. Here is the way to configure it:

<bean id="dataSource"
      class="org.springframework.jndi.JndiObjectFactoryBean">
  <property name="jndiName">
    <value>java:comp/env/jdbc/HiberDB</value>
  </property>
</bean>

When the DataSource is configured, you can configure the LocalSessionFactoryBean instance upon the configured DataSource as follows:

<bean id="sessionFactory"
      class="org.springframework.orm.hibernate3.LocalSessionFactoryBean">
  <property name="dataSource">
    <ref bean="dataSource"/>
  </property>
  ...
</bean>

Alternatively, you may set up the SessionFactory object as a server-side resource object in the Spring context. This object is linked in as a JNDI resource in the JEE environment to be shared with multiple applications.
In this case, you need to use JndiObjectFactoryBean instead of LocalSessionFactoryBean:

<bean id="sessionFactory"
      class="org.springframework.jndi.JndiObjectFactoryBean">
  <property name="jndiName">
    <value>java:comp/env/jdbc/hiberDBSessionFactory</value>
  </property>
</bean>

JndiObjectFactoryBean is another factory bean for looking up any JNDI resource. When you use JndiObjectFactoryBean to obtain a preconfigured SessionFactory object, the SessionFactory object should already be registered as a JNDI resource. For this purpose, you may run a server-specific class which creates a SessionFactory object and registers it as a JNDI resource.

LocalSessionFactoryBean uses three properties: dataSource, mappingResources, and hibernateProperties. These properties are as follows:

dataSource refers to a JDBC DataSource object that is already defined as another bean inside the container.

mappingResources specifies the Hibernate mapping files located in the application classpath.

hibernateProperties determines the Hibernate configuration settings.

We have the sessionFactory object configured as follows:

<bean id="sessionFactory"
      class="org.springframework.orm.hibernate3.LocalSessionFactoryBean">
  <property name="dataSource">
    <ref bean="dataSource"/>
  </property>
  <property name="mappingResources">
    <list>
      <value>com/packtpub/springhibernate/ch13/Student.hbm.xml</value>
      <value>com/packtpub/springhibernate/ch13/Teacher.hbm.xml</value>
      <value>com/packtpub/springhibernate/ch13/Course.hbm.xml</value>
    </list>
  </property>
  <property name="hibernateProperties">
    <props>
      <prop key="hibernate.dialect">org.hibernate.dialect.HSQLDialect</prop>
      <prop key="hibernate.show_sql">true</prop>
      <prop key="hibernate.max_fetch_depth">2</prop>
    </props>
  </property>
</bean>

The mappingResources property loads mapping definitions from the classpath. You may use mappingJarLocations or mappingDirectoryLocations to load them from a JAR file, or from any directory of the file system, respectively.
It is still possible to configure Hibernate with hibernate.cfg.xml, instead of configuring Hibernate as just shown. To do so, configure sessionFactory with the configLocation property, as follows:

<bean id="sessionFactory"
      class="org.springframework.orm.hibernate3.LocalSessionFactoryBean">
  <property name="dataSource">
    <ref bean="dataSource"/>
  </property>
  <property name="configLocation">
    <value>/conf/hibernate.cfg.xml</value>
  </property>
</bean>

Note that hibernate.cfg.xml specifies the Hibernate mapping definitions in addition to the other Hibernate properties. When the SessionFactory object is configured, you can configure DAO implementations as beans in the Spring context. These DAO beans are the objects which are looked up from the Spring IoC container and consumed by the business layer. Here is an example of DAO configuration:

<bean id="studentDao"
      class="com.packtpub.springhibernate.ch13.HibernateStudentDao">
  <property name="sessionFactory">
    <ref local="sessionFactory"/>
  </property>
</bean>

This is the DAO configuration for a DAO class that extends HibernateDaoSupport, or directly uses a SessionFactory property. When the DAO class has a HibernateTemplate property, configure the DAO instance as follows:

<bean id="studentDao"
      class="com.packtpub.springhibernate.ch13.HibernateStudentDao">
  <property name="hibernateTemplate">
    <bean class="org.springframework.orm.hibernate3.HibernateTemplate">
      <constructor-arg>
        <ref local="sessionFactory"/>
      </constructor-arg>
    </bean>
  </property>
</bean>

According to the preceding declaration, the HibernateStudentDao class has a hibernateTemplate property that is configured via the IoC container; the HibernateTemplate itself is initialized through constructor injection, with a SessionFactory instance as the constructor argument. Now, any client of the DAO implementation can look up the Spring context to obtain the DAO instance.
The following code shows a simple class that creates a Spring application context, and then looks up the DAO object from the Spring IoC container:

package com.packtpub.springhibernate.ch13;

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class DaoClient {

  public static void main(String[] args) {
    ApplicationContext ctx = new ClassPathXmlApplicationContext(
        "com/packtpub/springhibernate/ch13/applicationContext.xml");
    StudentDao stdDao = (StudentDao) ctx.getBean("studentDao");
    Student std = new Student();
    // set std properties
    // save std
    stdDao.saveStudent(std);
  }
}
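The bean definitions above presuppose a StudentDao interface and a Student entity; neither is shown in this excerpt, so the shapes below are assumptions. The sketch stubs the persistence with an in-memory list (the real HibernateStudentDao would extend HibernateDaoSupport or delegate to a HibernateTemplate), which is exactly why coding the business layer against the interface is useful: the Hibernate-backed bean can be swapped for a stub in tests.

```java
import java.util.ArrayList;
import java.util.List;

// Assumed DAO contract behind the studentDao bean definition.
interface StudentDao {
    void saveStudent(Student std);
    List<Student> getAllStudents();
}

// Minimal entity shape; the real class is mapped by Student.hbm.xml.
class Student {
    private String name;
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

// In-memory stand-in for HibernateStudentDao; real code would call
// something like getHibernateTemplate().save(std) instead.
class InMemoryStudentDao implements StudentDao {
    private final List<Student> store = new ArrayList<>();
    public void saveStudent(Student std) { store.add(std); }
    public List<Student> getAllStudents() { return store; }
}
```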