
How-To Tutorials - Application Development


Introduction to Re-Host based Modernization Using Tuxedo

Packt
23 Oct 2009
19 min read
Introduction

SOA enablement wraps key application interfaces in services and integrates them into the SOA. This largely leaves the existing application logic intact, minimizing changes and confining risk to the components that need restructuring to become SOA-ready. But while the interfaces are modernized without subjecting the core application components to much change, the high costs and the various legacy risks associated with the mainframe platform remain. In addition, the performance and scalability of the new interfaces need to be well specified and tested, and the additional load they place on the system should be included in any planned capacity upgrades, potentially increasing overall costs.

Reducing or eliminating the legacy mainframe costs and risks via re-host based modernization helps customers fund the SOA enablement and re-architecture phases of legacy modernization, and lays the groundwork for those steps. SOA-enabling a re-hosted application is a much easier process on an open-systems-based, SOA-ready software stack, and a more efficient one in terms of system resource utilization and cost. Re-architecting selected components of a re-hosted application based on specific business needs is a lower-risk approach than re-architecting entire applications en masse, and the risk can be further reduced by ensuring that the target re-hosting stack provides rugged and transparent integration between re-hosted services and new components.

Keeping It Real: Selective re-architecture is all about maximizing ROI by focusing re-architecture investment in the areas with the best pay-off. A change from one language or development paradigm to another shouldn't be undertaken lightly; the investment and risks need to be well understood and justified. It is the right investment for components that require frequent maintenance changes but are difficult to maintain because of poor structure and layered changes; the payback comes from reducing the cost of future maintenance. Similarly, components that need significant functional changes to meet new business requirements can benefit from a substantial productivity increase after re-architecture to a more modern development framework with richer tools to support future changes; here the payback comes from greater business agility and time-to-market improvements. On the other hand, well-structured and maintainable COBOL components that do not need extensive changes to meet business needs will show very little return on a significant re-architecture investment. Leaving them in COBOL on a modern, extensible platform saves re-architecture costs that can be invested elsewhere, reduces risk, and shortens payback time. These considerations help optimize ROI for medium to large modernization projects, where components number in the hundreds or thousands and contain millions or tens of millions of lines of code.

Re-Hosting Based Modernization

For many organizations, mainframe modernization has become a matter of 'how', not 'if'. Numerous enterprises and public sector organizations choose re-hosting as the first tangible step in their legacy modernization program precisely because it delivers the best ROI in the fastest possible manner, and accelerates the move to SOA enablement and selective re-architecture.
Oracle, together with its services partners, provides a comprehensive re-hosting-based modernization solution that many customers have leveraged for successful migration of selected applications or complete mainframe environments, ranging from a few hundred MIPS to well over 10,000 MIPS. Two key pillars support successful re-hosting projects:

- An optimal target environment that lowers the Total Cost of Ownership (TCO) by 50–80 percent and maintains mainframe-class Quality of Service (QoS) using an open, extensible, SOA-ready, future-proof architecture
- Predictable, efficient projects delivered by SI partners with proven methodologies and automated tools

The optimal target environment provided by Oracle is powered by a proven open-systems software stack that leverages Oracle Database and Oracle Tuxedo for a rock-solid, mainframe-class transaction processing (TP) infrastructure closely matching mainframe requirements for online applications:

- Mainframe-compatible Transaction Processing: Support for IBM CICS or IMS TM applications in native COBOL or C/C++ language containers with mainframe-compatible TP features.
- RASP: Mainframe-class performance, reliability, and scalability provided by Oracle Real Application Clusters (RAC) and Tuxedo multi-node and multi-domain clustering, for load balancing and high availability despite failure of individual nodes or network links.
- Workload and System Management: End-to-end transaction and service monitoring to support 24x7 operations management, provided by Oracle Enterprise Manager Grid Control and the Tuxedo System and Application Monitor.
- SOA Enablement and Integration: Extensibility with Web services using Oracle Services Architecture Leveraging Tuxedo (SALT), J2EE integration using the WebLogic-Tuxedo Connector (WTC), and Enterprise Service Bus (ESB), Portal, and BPM technologies to enable easy integration of re-hosted applications into modern Service-Oriented Architectures (SOAs).
- Scalable Platforms and Commodity Hardware: Scalable, Linux/UNIX-based open systems from HP, Dell, Sun, and IBM, providing:
  - Performance on a par with mainframe systems for most workloads, at significantly reduced TCO
  - Reliability and workload management similar to mainframe installations, including physical and logical partitioning
  - Robust clustering technologies for high availability and fail-over within a data center or across the world

The diagram below shows a conceptual mapping of the mainframe environment to a compatible open-systems infrastructure.

Predictable, efficient projects are delivered by leading SIs and key modernization specialists using risk-mitigation methodologies and automated tools honed over numerous projects, addressing the complete range of Online, Batch, and Data architectures and the various technologies used in them. These project methodologies, and the automated tools that support them, encompass all phases of a migration project:

- Preliminary Assessment Study
- Application Asset Discovery and Analysis
- Application and Data Conversion (pilot or entire application portfolio)
- System and Application Integration
- Test Engineering
- Regression and Performance Testing
- Education and Training
- Operations Migration
- Switch-Over

Combining a proven target architecture stack that is well matched to the needs of mainframe applications with mature methodologies supported by automated tools has led to a large and growing number of successful re-hosting projects.
There is rising interest in leveraging the re-hosting approach to mainframe application modernization as a way to get off a mainframe quickly, with minimal risk, and in a more predictable manner for large, business-critical applications that have evolved over a long period and across multiple development teams. The re-hosting-based modernization approach preserves an organization's long-term investment in critical business logic and data without risking business operations or sacrificing QoS, while enabling customers to:

- Reduce or eliminate mainframe maintenance costs, and/or defer upgrade costs, saving 50–80 percent of the annual maintenance and operations budget
- Increase productivity and flexibility in IT development and operations, protecting long-term investment through application modernization
- Speed up and simplify application integration via SOA, without losing transactional integrity and the high performance expected by users

The rest of this article explores the critical success factors and proven transformation architecture for re-hosting legacy applications and data, describes SOA integration options and considerations when SOA-enabling re-hosted applications, highlights key risk-mitigation methodologies, and provides a foundation for the financial analysis and ROI model derived from over a hundred mainframe re-hosting projects.

Critical Success Factors in Mainframe Re-Hosting

Companies considering a re-hosting-based modernization strategy that involves migrating some applications off the mainframe have to address a range of concerns, which can be summarized by the following questions:

- How to preserve the business logic of these applications and their valuable data?
- How to ensure that migrated applications continue to meet performance requirements?
- How to maintain scalability, reliability, transactional integrity, and other QoS attributes in an open-systems environment?
- How to migrate in phases, maintaining robust integration links between migrated and mainframe applications?
- How to achieve predictable, cost-effective results and ensure a low-risk project?

Meeting these challenges requires a versatile and powerful application infrastructure: one that natively supports key mainframe languages and services, enables automated adaptation of application code, and delivers proven, mainframe-like QoS on open-system platforms. For re-hosting to enable the broader aspects of the modernization strategy, this infrastructure must also provide native Web services and ESB capabilities to rapidly integrate re-hosted applications as first-class services in an SOA. Equally important are a proven risk-mitigation methodology, automated tools, and project services specifically honed for automated conversion and adaptation of application code and data, supported by a cross-platform test engineering and execution methodology, strong system and application integration expertise, and deep experience with operations migration and switch-over.

Preserving Application Logic and Data

The re-hosting approach depends on a mainframe-compatible transaction processing and application services platform supporting common mainframe languages such as COBOL and C, which preserves the original business logic and data for the majority of mainframe applications and avoids the risks and uncertainties of a re-write.
A complete re-hosting solution provides native support for TP and Batch programs, leveraging an application-server-based platform that provides container-based support for COBOL and C/C++ application services, and TP APIs similar to IBM CICS, IMS TM, or other mainframe TP monitors.

Online Transaction Processing Environment

Oracle Tuxedo is the most popular TP platform for open systems, as well as the leading re-hosting platform. It can run most mainframe COBOL and C applications unchanged, in a container-based framework that combines common application server features (health monitoring, fail-over, service virtualization, and the dynamic load balancing critical to large-scale OLTP applications) with standard TP features, including transaction management and reliable coordination of distributed transactions (also known as Two-Phase Commit, or the XA standard). It provides the highest possible performance and scalability, and has recently been benchmarked against a mainframe at over 100,000 transactions per second with sub-second response time.

Oracle Tuxedo supports the common mainframe programming languages, COBOL and C, and provides comprehensive TP features compatible with CICS and IMS TM, which makes it a preferred application platform for re-hosting CICS or IMS TM applications with minimal changes and risks. In the Tuxedo environment, the COBOL or C business logic remains unchanged. The only adaptation required is the automated mapping of CICS APIs (EXEC CICS calls) to equivalent Tuxedo API functions. This mapping typically leverages a pre-processor and a mapping library implemented on the Tuxedo platform, using the full range of Tuxedo APIs. The automated nature of the pre-processing and the comprehensive coverage provided by the library ensure that most CICS COBOL or C programs are easily transformed into Tuxedo services. Unlike other solutions that embed this transformation in their compiler, coupled with a proprietary emulation run-time, the Tuxedo-based solution provides this mapping as a compiler-independent source module that can be extended as needed. The resulting code uses the Tuxedo API at native speed, allowing it to reach tens of thousands of transactions per second while taking advantage of all Tuxedo facilities.

In a re-hosted application, CICS transactions become Tuxedo services, registered for processing by Tuxedo server processes. These services can be deployed on a single machine or across multiple machines in a Tuxedo domain (a SYSPLEX-like cluster). The services are called by front-end Java, .Net, or Tuxedo /WS clients, by UI components (tn3270 or web-based converted 3270/BMS screens), or by other services in the case of transaction linking. Deferred transactions are handled by Tuxedo's /Q component, which provides in-memory and persistent queuing services. The diagram below shows Oracle Tuxedo and its surrounding ecosystem of SOA, J2EE, ESB, CORBA, MQ, and Mainframe integration components.

User Interface Migration

The UI elements in these programs are typically defined using CICS Basic Mapping Support (BMS) for 3270 "green screen" terminals. While it is possible to preserve these using tn3270 emulation, many customers in re-hosting projects choose to take advantage of automated conversion of BMS macros into JSP/HTML for a Web UI. Supported by a specialized JavaScript library, these Web screens mimic the appearance and behavior of "green screens" in a web browser, including tab-based navigation and PF keys.
These UI components can connect to re-hosted CICS transactions running as Tuxedo services using Oracle Jolt (the Java client interface for Tuxedo), the WebLogic-Tuxedo Connector (WTC), or Tuxedo's Web services gateway provided by the Oracle Services Architecture Leveraging Tuxedo (SALT) product; a minimal Jolt client sketch appears at the end of this section. The diagram below depicts a target re-hosting architecture for a typical mainframe OLTP application. The architecture uses Tuxedo services to run the re-hosted CICS programs and a web application server to run the re-hosted BMS UI. The servlets or JSPs containing the HTML that defines the screens connect with the Tuxedo services via Oracle Jolt, WTC, or SALT.

Customers using mainframe 4GLs, or languages such as PL/I or Assembler, frequently choose to convert these applications to COBOL or C/C++. The adaptation of CICS or IMS TM API calls is automated through a mapping layer, which minimizes overall changes for the development team and allows them to keep maintaining familiar applications. For more significant extensions and new capabilities, customers can incrementally leverage Tuxedo's own APIs and facilities, leverage the tightly linked J2EE environment provided by WebLogic Server, or even transparently make Web services calls. The optimal extensibility options depend on application needs, the availability of Java or C/COBOL skills, and other factors.

Feature or Action      | CICS Verb                       | Tuxedo API
-----------------------+---------------------------------+---------------------------
Communications Area    | DFHCOMMAREA                     | Typed Buffer
Transaction Request    | LINK                            | tpcall
Transaction Return     | RETURN                          | tpreturn
Transfer Control       | XCTL                            | tpforward
Allocate Storage       | GETMAIN                         | tpalloc
Queues                 | READQ / WRITEQ TD, TS           | /Q tpenqueue / tpdequeue
Begin New Transaction  | START TRANID                    | /Q and TMQFORWARD
Abort Transaction      | ISSUE ABEND                     | tpreturn TPFAIL
Commit or Rollback     | SYNCPOINT / SYNCPOINT ROLLBACK  | tpcommit / tpabort

Keeping It Real: For those familiar with CICS, this is a very short sample of the CICS verbs. CICS has many functions, most of which either map natively to a similar Tuxedo API or are provided by migration specialists based on their extensive experience with such migrations.

In summary, Tuxedo provides a popular platform for deploying, executing, and managing re-hosted COBOL and C transactional applications requiring any of the following OLTP and infrastructure services:

- Native, compiler-independent support for COBOL, C, or C++
- A rich set of infrastructure services for managing and scaling diverse workloads
- Feature-set compatibility and interoperability with IBM CICS and IMS TM
- Two-Phase Commit (2PC) for managing transactions across multiple application domains and XA-compliant resource managers (databases, message queues)
- Guaranteed inter-application messaging and transactional queuing
- Transactional data access (using XA-compliant resource managers) with ACID qualities
- Service virtualization and dynamic load balancing
- Centralized management of multiple nodes in a domain, and across multiple domains
- Communications gateways for multiple traditional and modern communication protocols
- SOA enablement through native Web services and ESB integration
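To make the Jolt option mentioned above concrete, the following is a minimal sketch of a Java client calling a re-hosted CICS transaction that now runs as a Tuxedo service. It is illustrative only: the service name GETCUST, the buffer field STRING, the address, and the credentials are all hypothetical, and the exact buffer fields in a real project depend on the typed buffers the re-hosted service uses.

import bea.jolt.JoltRemoteService;
import bea.jolt.JoltSession;
import bea.jolt.JoltSessionAttributes;

public class JoltClientSketch {
    public static void main(String[] args) throws Exception {
        // Address of the Jolt Server Listener in the Tuxedo domain (hypothetical host/port).
        JoltSessionAttributes attr = new JoltSessionAttributes();
        attr.setString(attr.APPADDRESS, "//tuxhost:7000");

        // Log on to the Tuxedo application; user, role, and passwords are placeholders.
        JoltSession session = new JoltSession(attr, "demo", "clt", "demopw", "apppw");

        // Call the re-hosted transaction, exposed as the Tuxedo service "GETCUST".
        JoltRemoteService svc = new JoltRemoteService("GETCUST", session);
        svc.addString("STRING", "customer-0042");  // input field of the service's buffer
        svc.call(null);                            // synchronous call, no client transaction

        System.out.println(svc.getStringDef("STRING", "<no reply>"));
        session.endSession();
    }
}

In a web tier one would typically go through the Jolt session pool classes rather than raw sessions, but the flow is the same: attach to the domain, name the service, set the buffer fields, call, and read the results.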
Workload Monitoring and Management

An important aspect of the mainframe environment is workload monitoring and management, which provides information for effective performance analysis, along with capabilities that enable mainframe systems to achieve better throughput and responsiveness. Oracle's Tuxedo System and Application Monitor (TSAM) provides similar capabilities:

- Define monitoring policies and patterns based on application requests, services, system servers such as gateways and bridges, and XA-defined stages of a distributed transaction
- Define SLA thresholds that can trigger a variety of events within Tuxedo event services, including notifications and the instantiation of additional servers
- Monitor transactions on an end-to-end basis, from a client call through all services across all domains involved in a client request
- Collect service statistics for all infrastructure components, such as servers and gateways
- Detail the time spent on IPC queues, waiting on network links, and in subordinate services

TSAM provides a built-in, central, web-based management and monitoring console, and an open framework for integration with third-party performance management tools.

Batch Jobs

Mainframe batch jobs are a response to the human 24-hour clock on which many businesses run. They include beginning-of-period and end-of-period (day, week, month, quarter) processing for batched updates, reconciliation, reporting, statement generation, and similar applications. In some industries, external events tied to a fixed schedule, such as intra-day processing or the opening and closing of trading on a stock exchange, drive specific processing needs. Batch applications are an equally important asset, and often need to be preserved and migrated as well.

The batch environment uses Job Control Language (JCL) jobs managed and monitored by JES2 or JES3 (Job Entry Subsystem), which invoke one or more programs, access and manipulate large datasets and databases using sort and other specialized utilities, and often run under the control of a job scheduler such as CA-7/CA-11. JCL defines a series of job steps (a sequence of programs and utilities), specifies input and output files, and provides exception handling. Automated parsing and translation of JCL jobs to UNIX scripts such as Korn shell (ksh) or Perl enables the overall structure of a job to remain the same, including job steps, classes, and exception handling. Standard shell processing is supplemented with required utilities such as SyncSort, and with support for Generation Data Group (GDG) files. REXX/CLIST/PROC scripting environments on the mainframe are similarly converted to ksh or other scripting languages. Integration with Oracle Scheduler, or other job schedulers running on UNIX/Linux or Windows, provides a rich set of calendar- and event-based scheduling capabilities, as well as dependency management similar to mainframe schedulers. In some cases, reporting done via batch jobs can be replaced with standard reporting packages such as Oracle BI Publisher.

The diagram below shows a typical target re-hosting architecture for batch. It includes a scheduler to control and trigger batch jobs, a scripting framework to support individual job scripts, and an application server execution framework for the batch COBOL or C programs. Unlike other solutions that run these programs directly as OS processes without the benefit of application server middleware, Oracle recommends using container-based middleware to provide higher reliability, availability, and monitoring for batch programs. The target batch programs invoked by the scripts can run directly as OS processes, but if mainframe-class management and monitoring similar to the JES2 or JES3 environment is a requirement, these programs can run as services under Tuxedo, benefiting from the health monitoring, fail-over, load balancing, and other application-server features it provides.
Files and Databases

When moving platforms (mainframe to open systems), the application and data have to be moved together. Data schemas and data stores need to be moved in a re-hosted mainframe modernization project just as in a re-architecture. The approach taken depends on the source data store; DB2 is the most straightforward, since DB2 and Oracle are both relational databases. In addition to migrating the data, customers sometimes choose to perform data cleansing, field extensions, column merges, or other data maintenance, leveraging automated tooling that synchronizes all data changes with changes to the application's data access code.

Mainframe DB2

DB2 is the predominant relational database on IBM mainframes. When migrating to Oracle Database, the migration approach is highly automated and resolves all discrepancies between the two RDBMSs in terms of field formats as well as error codes returned to applications, so that application behavior remains unchanged, including stored procedures, if any.

IMS

IMS/DB (also known as DL/1) is a popular hierarchical database for older applications. Creating an appropriate relational data schema for this data requires an understanding of the application access patterns, so that the schema can be optimized for the most frequent access paths. To minimize code impact, a translation layer can be used at run time to support IMS DB-style data access from the application and map it to the appropriate SQL calls. This allows applications to interface with the segments, now translated into DB2 UDB or Oracle tables, without impacting application code and maintenance.

VSAM

VSAM files are used for keyed-sequential data access, and can be readily migrated to ISAM files, or to Oracle Database tables wherever transactional integrity (XA features) is required. Some customers also choose to migrate VSAM files to Oracle Database to provide accessibility from other distributed applications, or to simplify the re-engineering required to extend certain data fields or merge multiple data sources.

Meeting Performance and Other QoS Requirements

The mainframe's performance, reliability, scalability, manageability, and other QoS attributes have earned it pre-eminence for business-critical applications. How well do re-hosting solutions measure up against these characteristics? Earlier solutions based on IBM CICS emulators derived from development tools often did not measure up to the demands of mainframe workloads, since they were never intended for a true production environment and had not been exposed to large-scale applications. As a result, they have only been used for re-hosting small systems under 300 MIPS that do not require clustering or distributed workload handling.

Oracle Tuxedo was built from the ground up to scale, supporting high-performance telecommunications operations. It has the distinction of being the only non-mainframe TP solution recognized for its mainframe-like performance, reliability, and QoS characteristics, and most large enterprise customers requiring such capabilities in distributed systems have traditionally relied on it. Consistently rated by IDC and Gartner as the market leader, and predominant in non-mainframe OLTP applications, Tuxedo has also become the preferred COBOL/C application platform and transaction engine for re-hosted mainframe applications requiring high performance and/or mission-critical availability and reliability.
Tuxedo's broad recognition as the only mainframe-class application platform and transaction engine for distributed systems rests on mainframe-class performance, scalability, reliability, availability, and other QoS attributes proven in multiple customer deployments. The following lists highlight some of these capabilities:

Reliability
- Guaranteed messaging and transactional integrity
- Hardened code from 25 years of use in the world's largest transaction applications
- Transaction integrity across systems and domains through two-phase commit (XA) for all resources, such as databases and queues
- Proven in mainframe-to-mainframe transactions and messaging

Availability
- No single point of failure; 99.999% uptime with N+1/N+2 clusters
- Application services upgradeable in operation
- Self-monitoring, automated fail-over, and data-driven routing for very high availability
- Centralized monitoring and management with clustered domains; automated, lights-out operations

Workload Management
- Resource management and prioritization across Tuxedo services
- Dynamic load balancing across domains based on load conditions
- Data-driven routing enables horizontally distributed database grids and differentiated QoS
- End-to-end monitoring of Tuxedo system and application services enables SLA enforcement
- Virtualization support enables spawning of Tuxedo servers on demand

Performance and Scalability
- Parallel processing to maximize resource utilization, with low-latency code paths that provide sub-second response at any load
- Horizontal and vertical scaling of system resources yields linear performance increases
- Request multiplexing (synchronous and asynchronous) maximizes CPU utilization
- Proven in credit card authorizations at over 13.5K tps, and in telco billing at over 56K tps
- Middleware of choice in HP, Fujitsu, Sun, IBM, and NEC TPC-C benchmarks


Chatroom Application using DWR Java Framework

Packt
23 Oct 2009
19 min read
Starting the Project and Configuration

We start by creating a new project for our chat room, with the project name DWRChatRoom. We also need to add the dwr.jar file to the lib directory and enable DWR in the web.xml file. The following is the source code of the dwr.xml file:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE dwr PUBLIC "-//GetAhead Limited//DTD Direct Web Remoting 2.0//EN"
    "http://getahead.org/dwr/dwr20.dtd">
<dwr>
  <allow>
    <create creator="new" javascript="Login">
      <param name="class" value="chatroom.Login" />
    </create>
    <create creator="new" javascript="ChatRoomDatabase">
      <param name="class" value="chatroom.ChatRoomDatabase" />
    </create>
  </allow>
</dwr>

The source code for web.xml is as follows:

<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
             http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
         id="WebApp_ID" version="2.5">
  <display-name>DWRChatRoom</display-name>
  <servlet>
    <display-name>DWR Servlet</display-name>
    <servlet-name>dwr-invoker</servlet-name>
    <servlet-class>
      org.directwebremoting.servlet.DwrServlet
    </servlet-class>
    <init-param>
      <param-name>debug</param-name>
      <param-value>true</param-value>
    </init-param>
    <init-param>
      <param-name>activeReverseAjaxEnabled</param-name>
      <param-value>true</param-value>
    </init-param>
  </servlet>
  <servlet-mapping>
    <servlet-name>dwr-invoker</servlet-name>
    <url-pattern>/dwr/*</url-pattern>
  </servlet-mapping>
  <welcome-file-list>
    <welcome-file>index.html</welcome-file>
    <welcome-file>index.htm</welcome-file>
    <welcome-file>index.jsp</welcome-file>
    <welcome-file>default.html</welcome-file>
    <welcome-file>default.htm</welcome-file>
    <welcome-file>default.jsp</welcome-file>
  </welcome-file-list>
</web-app>

Developing the User Interface

The next step is to create the presentation files: the style sheet and the HTML/JSP pages. The style sheet, loginFailed.html, and index.jsp files are required for the application. The source code of the style sheet is as follows:

body {
  margin: 0;
  padding: 0;
  line-height: 1.5em;
}
b {
  font-size: 110%;
}
em {
  color: red;
}
#topsection {
  background: #EAEAEA;
  height: 90px; /* Height of top section */
}
#topsection h1 {
  margin: 0;
  padding-top: 15px;
}
#contentwrapper {
  float: left;
  width: 100%;
}
#contentcolumn {
  margin-left: 200px; /* Set left margin to LeftColumnWidth */
}
#leftcolumn {
  float: left;
  width: 200px; /* Width of left column */
  margin-left: -100%;
  background: #C8FC98;
}
#footer {
  clear: left;
  width: 100%;
  background: black;
  color: #FFF;
  text-align: center;
  padding: 4px 0;
}
#footer a {
  color: #FFFF80;
}
.innertube {
  margin: 10px; /* Margins for inner DIV inside each column (to provide padding) */
  margin-top: 0;
}

Our first page is the login page. It is located in the WebContent directory and is named index.jsp.
The source code for the page is as follows:

<%@ page language="java" contentType="text/html; charset=ISO-8859-1"
    pageEncoding="ISO-8859-1"%>
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
    "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
<title>Book Authoring</title>
<script type='text/javascript' src='/DWRChatRoom/dwr/interface/Login.js'></script>
<script type='text/javascript' src='/DWRChatRoom/dwr/engine.js'></script>
<script type='text/javascript' src='/DWRChatRoom/dwr/util.js'></script>
<script type="text/javascript">
function login() {
  var userNameInput = dwr.util.byId('userName');
  var userName = userNameInput.value;
  Login.doLogin(userName, loginResult);
}
function loginResult(newPage) {
  window.location.href = newPage;
}
</script>
</head>
<body>
<h1>Book Authoring Sample</h1>
<table cellpadding="0" cellspacing="0">
  <tr>
    <td>User name:</td>
    <td><input id="userName" type="text" size="30"></td>
  </tr>
  <tr>
    <td>&nbsp;</td>
    <td><input type="button" value="Login" onclick="login();return false;"></td>
  </tr>
</table>
</body>
</html>

The login screen uses DWR functionality to process the user login (the Java classes are presented after the web pages). The loginResult function opens either the failure page or the main page, based on the result of the login operation. If the login was unsuccessful, a very simple loginFailed.html page is shown to the user, the source code for which is as follows:

<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
    "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
<title>Login failed</title>
</head>
<body>
<h2>Login failed.</h2>
</body>
</html>

The main page, mainpage.jsp, includes all the client-side logic of our chat room application.
The source code for the page is as follows:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html lang="en" xml:lang="en">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" />
<title>Chatroom</title>
<link href="styles.css" rel="stylesheet" type="text/css" />
<%
   if (session.getAttribute("username") == null
         || session.getAttribute("username").equals("")) {
      //if not logged in and trying to access this page
      //do nothing, browser shows empty page
      return;
   }
%>
<script type='text/javascript' src='/DWRChatRoom/dwr/interface/Login.js'></script>
<script type='text/javascript' src='/DWRChatRoom/dwr/interface/ChatRoomDatabase.js'></script>
<script type='text/javascript' src='/DWRChatRoom/dwr/engine.js'></script>
<script type='text/javascript' src='/DWRChatRoom/dwr/util.js'></script>
<script type="text/javascript">
dwr.engine.setActiveReverseAjax(true);

function logout() {
  Login.doLogout(showLoginScreen);
}
function showLoginScreen() {
  window.location.href = 'index.jsp';
}
function showUsersOnline() {
  var cellFuncs = [
      function(user) {
        return '<i>' + user + '</i>';
      }
  ];
  Login.getUsersOnline({
    callback:function(users) {
      dwr.util.removeAllRows('usersOnline');
      dwr.util.addRows("usersOnline", users, cellFuncs, { escapeHtml:false });
    }
  });
}
function getPreviousMessages() {
  ChatRoomDatabase.getChatContent({
    callback:function(messages) {
      var chatArea = dwr.util.byId('chatArea');
      var html = "";
      for (index in messages) {
        var msg = messages[index];
        html += msg;
      }
      chatArea.innerHTML = html;
      var chatAreaHeight = chatArea.scrollHeight;
      chatArea.scrollTop = chatAreaHeight;
    }
  });
}
function newMessage(message) {
  var chatArea = dwr.util.byId('chatArea');
  var oldMessages = chatArea.innerHTML;
  chatArea.innerHTML = oldMessages + message;
  var chatAreaHeight = chatArea.scrollHeight;
  chatArea.scrollTop = chatAreaHeight;
}
function sendMessageIfEnter(event) {
  if (event.keyCode == 13) {
    sendMessage();
  }
}
function sendMessage() {
  var message = dwr.util.byId('messageText');
  var messageText = message.value;
  ChatRoomDatabase.postMessage(messageText);
  message.value = '';
}
</script>
</head>
<body onload="showUsersOnline();">
<div id="maincontainer">
<div id="topsection">
<div class="innertube">
<h1>Chatroom</h1>
<h4>Welcome <i><%=(String) session.getAttribute("username")%></i></h4>
</div>
</div>
<div id="contentwrapper">
<div id="contentcolumn">
<div id="chatArea" style="width: 600px; height: 300px; overflow: auto"></div>
<div id="inputArea">
<h4>Send message</h4>
<input id="messageText" type="text" size="50" onkeyup="sendMessageIfEnter(event);">
<input type="button" value="Send msg" onclick="sendMessage();">
</div>
</div>
</div>
<div id="leftcolumn">
<div class="innertube">
<table cellpadding="0" cellspacing="0">
  <thead>
    <tr>
      <td><b>Users online</b></td>
    </tr>
  </thead>
  <tbody id="usersOnline">
  </tbody>
</table>
<input id="logoutButton" type="button" value="Logout" onclick="logout();return false;">
</div>
</div>
<div id="footer">Stylesheet by <a href="http://www.dynamicdrive.com/style/">Dynamic Drive CSS Library</a></div>
</div>
<script type="text/javascript">
getPreviousMessages();
</script>
</body>
</html>

The first chat-room-specific JavaScript function is getPreviousMessages().
This function is called at the end of mainpage.jsp, and it retrieves the previous chat messages for this chat room. The newMessage() function is called by the server-side Java code when a new message is posted to the chat room; it also scrolls the chat area automatically to show the latest message. The sendMessageIfEnter() and sendMessage() functions are used to send user messages to the server. There is an input field for the message text in the HTML code, and the sendMessageIfEnter() function listens to onkeyup events in that field. If the user presses Enter, the sendMessage() function is called to send the message to the server. The HTML code includes the chat area, of a specified size and with automatic scrolling.

Developing the Java Code

There are several Java classes in the application. The Login class handles user login and logout, and also keeps track of the logged-in users. The source code of the Login class is as follows:

package chatroom;

import java.util.Collection;
import java.util.List;
import javax.servlet.ServletContext;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;
import org.directwebremoting.ScriptSession;
import org.directwebremoting.ServerContext;
import org.directwebremoting.ServerContextFactory;
import org.directwebremoting.WebContext;
import org.directwebremoting.WebContextFactory;
import org.directwebremoting.proxy.ScriptProxy;

public class Login {

   public Login() {
   }

   public String doLogin(String userName) {
      UserDatabase userDb = UserDatabase.getInstance();
      if (!userDb.isUserLogged(userName)) {
         userDb.login(userName);
         WebContext webContext = WebContextFactory.get();
         HttpServletRequest request = webContext.getHttpServletRequest();
         HttpSession session = request.getSession();
         session.setAttribute("username", userName);
         String scriptId = webContext.getScriptSession().getId();
         session.setAttribute("scriptSessionId", scriptId);
         updateUsersOnline();
         return "mainpage.jsp";
      }
      else {
         return "loginFailed.html";
      }
   }

   public void doLogout() {
      try {
         WebContext ctx = WebContextFactory.get();
         HttpServletRequest request = ctx.getHttpServletRequest();
         HttpSession session = request.getSession();
         Util util = new Util();
         String userName = util.getCurrentUserName(session);
         UserDatabase.getInstance().logout(userName);
         session.removeAttribute("username");
         session.removeAttribute("scriptSessionId");
         session.invalidate();
      } catch (Exception e) {
         System.out.println(e.toString());
      }
      updateUsersOnline();
   }

   private void updateUsersOnline() {
      WebContext webContext = WebContextFactory.get();
      ServletContext servletContext = webContext.getServletContext();
      ServerContext serverContext = ServerContextFactory.get(servletContext);
      webContext.getScriptSessionsByPage("");
      String contextPath = servletContext.getContextPath();
      if (contextPath != null) {
         Collection<ScriptSession> sessions =
               serverContext.getScriptSessionsByPage(contextPath + "/mainpage.jsp");
         ScriptProxy proxy = new ScriptProxy(sessions);
         proxy.addFunctionCall("showUsersOnline");
      }
   }

   public List<String> getUsersOnline() {
      UserDatabase userDb = UserDatabase.getInstance();
      return userDb.getLoggedInUsers();
   }
}

The following is the source code of the UserDatabase class:
package chatroom;

import java.util.List;
import java.util.Vector;

//this class holds currently logged in users
//there is no persistence
public class UserDatabase {

   private static UserDatabase userDatabase = new UserDatabase();
   private List<String> loggedInUsers = new Vector<String>();

   private UserDatabase() {}

   public static UserDatabase getInstance() {
      return userDatabase;
   }

   public List<String> getLoggedInUsers() {
      return loggedInUsers;
   }

   public boolean isUserLogged(String userName) {
      return loggedInUsers.contains(userName);
   }

   public void login(String userName) {
      loggedInUsers.add(userName);
   }

   public void logout(String userName) {
      loggedInUsers.remove(userName);
   }
}

The Util class is used by the Login class, and it provides helper methods for the sample application. The source code for the Util class is as follows:

package chatroom;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;
import org.directwebremoting.WebContext;
import org.directwebremoting.WebContextFactory;

public class Util {

   public Util() {
   }

   public String getCurrentUserName() {
      //get user name from session
      WebContext ctx = WebContextFactory.get();
      HttpServletRequest request = ctx.getHttpServletRequest();
      HttpSession session = request.getSession();
      return getCurrentUserName(session);
   }

   public String getCurrentUserName(HttpSession session) {
      String userName = (String) session.getAttribute("username");
      return userName;
   }
}

The logic for the server-side chat room functionality is in the ChatRoomDatabase class. The source code for ChatRoomDatabase is as follows:

package chatroom;

import java.util.Collection;
import java.util.Date;
import java.util.List;
import java.util.Vector;
import javax.servlet.ServletContext;
import org.directwebremoting.ScriptSession;
import org.directwebremoting.ServerContext;
import org.directwebremoting.ServerContextFactory;
import org.directwebremoting.WebContext;
import org.directwebremoting.WebContextFactory;
import org.directwebremoting.proxy.ScriptProxy;

public class ChatRoomDatabase {

   private static List<String> chatContent = new Vector<String>();

   public ChatRoomDatabase() {
   }

   public void postMessage(String message) {
      String user = (new Util()).getCurrentUserName();
      if (user != null) {
         Date time = new Date();
         StringBuffer sb = new StringBuffer();
         sb.append(time.toString());
         sb.append(" <b><i>");
         sb.append(user);
         sb.append("</i></b>:  ");
         sb.append(message);
         sb.append("<br/>");
         String newMessage = sb.toString();
         chatContent.add(newMessage);
         postNewMessage(newMessage);
      }
   }

   public List<String> getChatContent() {
      return chatContent;
   }

   private ScriptProxy getScriptProxyForSessions() {
      WebContext webContext = WebContextFactory.get();
      ServletContext servletContext = webContext.getServletContext();
      ServerContext serverContext = ServerContextFactory.get(servletContext);
      webContext.getScriptSessionsByPage("");
      String contextPath = servletContext.getContextPath();
      if (contextPath != null) {
         Collection<ScriptSession> sessions = serverContext
               .getScriptSessionsByPage(contextPath + "/mainpage.jsp");
         ScriptProxy proxy = new ScriptProxy(sessions);
         return proxy;
      }
      return null;
   }

   public void postNewMessage(String newMessage) {
      ScriptProxy proxy = getScriptProxyForSessions();
      if (proxy != null) {
         proxy.addFunctionCall("newMessage", newMessage);
      }
   }
}

The chat room code is surprisingly simple. The chat content is stored in a Vector of Strings. The getChatContent() method just returns the chat content Vector to the browser. The postMessage() method is called when the user sends a new chat message. The method verifies that the user is logged in, adds the current time and username to the chat message, and then appends the message to the chat content. It also calls the postNewMessage() method, which is used to show new chat content to all logged-in users. Note that the postMessage() method does not return any value; we let DWR and the reverse AJAX functionality show the chat message to all users, including the user who sent it. The getScriptProxyForSessions() and postNewMessage() methods use reverse AJAX to update the chat areas of all logged-in users with the new message.

And that is it! The chat room sample is very straightforward; the basic functionality is already in place, and the application is ready for further development.

Testing the Chat

We test the chat room application with three users: Smith, Brown, and Jones. We have given some screenshots of a typical chat room scenario here. Both Smith and Brown log into the system and exchange some messages. Both users see an empty chat room when they log in and start chatting; the empty area above the send-message input field is reserved for chat content. Smith and Brown exchange some messages, as seen in the following screenshot:

The third user, Jones, joins the chat and sees all the previous messages in the chat room. Jones then exchanges some messages with Smith and Brown. Smith and Brown log out from the system, leaving Jones alone in the chat room (until she also logs out). This is visible in the following screenshot:

Summary

This sample application showed how to use DWR in a chat room application, and makes it clear that DWR greatly eases the development of this kind of collaborative application. DWR itself does not even play a big part in the application; it is just a transparent feature of it. Developers can therefore concentrate on the actual project and on aspects such as persistence of data and a neat user interface, instead of on the low-level details of AJAX.


Troubleshooting Lotus Notes/Domino 7 applications

Packt
23 Oct 2009
19 min read
Introduction

The major topics that we'll cover in this article are:

- Testing your application (in other words, uncovering problems before your users do)
- Asking the right questions when users do discover problems
- Using logging to help troubleshoot your problems

We'll also examine two important new Notes/Domino 7 features that can be critical for troubleshooting applications:

- Domino Domain Monitoring (DDM)
- Agent Profiler

For more troubleshooting issues, visit: TroubleshootingWiki.org

Testing your Application

Testing an application before you roll it out to your users may sound like an obvious thing to do. However, during the life cycle of a project, testing is often not allocated adequate time or money. Proper testing should include the following:

- A meaningful amount of developer testing and bug fixing: This allows you to catch most errors, which saves time and frustration for your user community.
- User representative testing: A user representative, who is knowledgeable about the application and how users use it, can often provide more robust testing than the developer. This also provides early feedback on features.
- Pilot testing: In this phase, the product is assumed to be complete, and a pilot group uses it in production mode. This allows for limited stress testing as well as more thorough testing of the feature set.

In addition to feature testing, you should test the performance of the application. This is the most frequently skipped type of testing, because some consider it too complex and difficult. In fact, while it can be difficult to test user load, it is generally not difficult to test data load. So, as part of any significant project, it is good practice to programmatically create the projected number of documents that will exist within the application one or two years after full deployment, and to have a scheduled agent trigger the appropriate number of edits per hour during the early phases of feature testing. Although this will not give a perfect picture of performance, it will certainly help ascertain whether and why the time to create a new document is unacceptable (for example, because the @Db formulas are taking too long, or because the scheduled agent that runs every 15 minutes takes too long due to slow document searches).

Asking the Right Questions

Suppose that you've rolled out your application and people are using it. Then the support desk starts getting calls about a certain problem. Maybe your boss is getting an earful at meetings about sluggish performance, or is hearing gripes about error messages whenever users click a button to perform some action. In this section, we will discuss a methodology to help you troubleshoot a problem when you don't necessarily have all the information at your disposal, including some specific questions that can be asked verbatim for virtually any application.

The first key to success in troubleshooting an application problem is to narrow down where and when it happens. Let's take the two very different problems suggested above (slow performance and error messages), and pose questions that might help unravel them:

Does the problem occur when you take a specific action? If so, what is that action? Your users might say, "It's slow whenever I open the application", or "I get an error when I click this particular button in this particular form."

Does the problem occur for everyone who does this, or just for certain people? If just certain people, what do they have in common?
This is a great way to get your users to help you help them. Let them be a part of the solution, not just "messengers of doom". For example, you might ask questions such as: "Is it slow only for people in your building or on your floor? Is it slow only for people accessing the application remotely? Is it slow only for people who have your particular access (for example, SalesRep)?"

Does this problem occur all the time, at random times, or only at certain times? It's helpful to check whether or not the time of day or the day of the week/month is relevant. Typical questions might be similar to the following: "Do you get this error every time you click the button, or just sometimes? If just sometimes, does it give you the error during the middle of the day, but not if you click it at 7 AM when you first arrive? Do you only get the error on Mondays or some other day of the week? Do you only see the error if the document is in a certain status or has certain data in it? If it just happens for a particular document, please send me a link to that document so that I can inspect it carefully to see if there is invalid or unexpected data."

Logging

Ideally, your questions have narrowed down the type of problem it could be. At this point, the more technical troubleshooting can start. You will likely need to gather concrete information to confirm or refine what you're hearing from the users. For example, you could put a bit of debugging code into the button that they're clicking so that it gives more informative errors, or sends you an email (or creates a log document) whenever it's clicked or whenever an error occurs. Collecting the following pieces of information might be enough to diagnose the problem very quickly (a hypothetical sketch of such logging code appears a few paragraphs below):

- Time/date
- User name
- Document UNID (if the button is pushed in a document)
- Error
- Status, or any other likely field that might affect your code

By looking for common denominators (such as the status of the documents in question, or the access or roles of the users), you will likely be able to further narrow down the possibilities of why the problem is happening. This doesn't solve your problem, of course, but it advances you a long way towards that goal.

A trickier problem to troubleshoot might be the one we mentioned earlier: slow performance. Typically, after you've determined that there is some kind of performance delay, it's a good idea to first collect some server logging data. Set the following Notes.ini variables in the Server Configuration document in your Domino Directory, on the Notes.ini tab:

Log_Update=1
Log_AgentManager=1

These variables instruct the server to write output to the log.nsf database, in the Miscellaneous Events view. Note that they may already be set in your environment. If not, they're fairly unobtrusive and shouldn't trouble your administration group. Set them for a 24-hour period during a normal business week, and then examine the results to see if anything pops out as being suspicious.
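Before looking at the server log output, here is one hypothetical way to implement the debugging/logging idea described above. The article leaves the mechanism open; this sketch is a small server-side Java helper that a Notes Java agent could call to record the listed pieces of information in a log document. The DebugLog form name and all field names are invented for illustration.

import lotus.domino.Database;
import lotus.domino.DateTime;
import lotus.domino.Document;
import lotus.domino.NotesException;
import lotus.domino.Session;

// Hypothetical helper: writes one log document per event so that common
// denominators (status, user, document) can later be found by sorting a view.
public class DebugLogger {
   public static void logEvent(Session session, Database db,
                               String docUnid, String errorText) throws NotesException {
      Document log = db.createDocument();
      log.replaceItemValue("Form", "DebugLog");       // invented form name

      DateTime now = session.createDateTime("Today");
      now.setNow();                                   // time/date of the event
      log.replaceItemValue("LogWhen", now);
      log.replaceItemValue("LogUser", session.getEffectiveUserName());
      log.replaceItemValue("LogDocUNID", docUnid);    // UNID of the document involved, if any
      log.replaceItemValue("LogError", errorText);    // error text or status field of interest
      log.save(true, false);
   }
}

An equivalent could just as easily be written in LotusScript behind the button itself; the point is simply to capture each of the fields listed above in a form that can be sorted and scanned.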
For view indexing, you should look for lines like these in Miscellaneous Events (Log_Update=1):

07/01/2006 09:29:57 AM Updating views in apps\SalesPipeline.nsf
07/01/2006 09:30:17 AM Finished updating views in apps\SalesPipeline.nsf
07/01/2006 09:30:17 AM Updating views in apps\Tracking.nsf
07/01/2006 09:30:17 AM Finished updating views in apps\Tracking.nsf
07/01/2006 09:30:17 AM Updating views in apps\ZooSchedule.nsf
07/01/2006 09:30:18 AM Finished updating views in apps\ZooSchedule.nsf

And lines like these for agent execution (Log_AgentManager=1):

06/30/2006 09:43:49 PM AMgr: Start executing agent 'UpdateTickets' in 'apps\SalesPipeline.nsf' by Executive '1'
06/30/2006 09:43:52 PM AMgr: Start executing agent 'ZooUpdate' in 'apps\ZooSchedule.nsf' by Executive '2'
06/30/2006 09:44:44 PM AMgr: Start executing agent 'DirSynch' in 'apps\Tracking.nsf' by Executive '1'

Let's examine these lines to see whether there is anything we can glean from them. Starting with the Log_Update=1 setting, we see that it gives us the start and stop times for every database that gets indexed. We also see that the database file paths appear alphabetically. This means that if we search for the text string "updating views", pull out all such lines covering (for instance) an hour during a busy part of the day, and copy/paste them into a text editor so that they're all together, we should see complete database indexing from A to Z on the server, repeating every so often.

In the log.nsf database, there may be many thousands of lines that have nothing to do with your investigation, so culling the important lines is imperative if you are to make any sense of what's going on in your environment. You will likely see dozens or even hundreds of databases referenced. If you have hundreds of active databases on your server, then culling all these lines might be impractical, even programmatically; instead, you might focus on the largest group of databases (a small filtering sketch for a handful of target databases appears below). You will notice that the same databases are referenced every so often. This is the Update cycle, or view indexing cycle. It's important to get a sense of how long this cycle takes, so make sure you don't miss any references to your group of databases.

Imagine that SalesPipeline.nsf and Tracking.nsf were the two databases that you wanted to focus on. You might cull the "updating views" lines that reference these two databases, and come up with something like the following:

07/01/2006 09:29:57 AM Updating views in apps\SalesPipeline.nsf
07/01/2006 09:30:17 AM Finished updating views in apps\SalesPipeline.nsf
07/01/2006 09:30:17 AM Updating views in apps\Tracking.nsf
07/01/2006 09:30:20 AM Finished updating views in apps\Tracking.nsf
07/01/2006 10:15:55 AM Updating views in apps\SalesPipeline.nsf
07/01/2006 10:16:33 AM Finished updating views in apps\SalesPipeline.nsf
07/01/2006 10:16:33 AM Updating views in apps\Tracking.nsf
07/01/2006 10:16:43 AM Finished updating views in apps\Tracking.nsf
07/01/2006 11:22:31 AM Updating views in apps\SalesPipeline.nsf
07/01/2006 11:23:33 AM Finished updating views in apps\SalesPipeline.nsf
07/01/2006 11:23:33 AM Updating views in apps\Tracking.nsf
07/01/2006 11:23:44 AM Finished updating views in apps\Tracking.nsf

This gives us some very important information: the Update task (view indexing) is taking approximately an hour to cycle through the databases on the server, and that's too long. The Update task is supposed to run every 15 minutes, and ideally should only run for a few minutes each time it executes.
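Culling lines like these out of thousands of Miscellaneous Events entries is tedious by hand. As a hypothetical helper, assuming the log lines have been exported to a plain-text file named log-export.txt, a throwaway filter might look like the following (the file name and database list are placeholders):

import java.io.BufferedReader;
import java.io.FileReader;

public class CullUpdateLines {
   public static void main(String[] args) throws Exception {
      // Databases to focus on; a hypothetical selection.
      String[] targets = { "SalesPipeline.nsf", "Tracking.nsf" };

      try (BufferedReader in = new BufferedReader(new FileReader("log-export.txt"))) {
         String line;
         while ((line = in.readLine()) != null) {
            // Keep both "Updating views in ..." and "Finished updating views in ..." lines.
            if (!line.toLowerCase().contains("updating views")) {
               continue;
            }
            for (String db : targets) {
               if (line.contains(db)) {
                  System.out.println(line);
                  break;
               }
            }
         }
      }
   }
}

The output is exactly the kind of condensed listing shown above, from which the length of the Update cycle can be read off directly.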
If the cycle is an hour, that means the Update task is running full tilt for that hour, and as soon as it stops, it realizes that it's overdue and kicks off again. It's possible that if you examine each line in the log, you'll find that certain databases are taking the bulk of the time, in which case it might be worth examining the design of those databases. But it might be that every database seems to take a long time, which is more indicative of a general server slowdown. In any case, we haven't solved the problem, but at least we know that the problem is probably server-wide. More complex applications, and newer applications, tend to reflect server-performance problems more readily, but that doesn't necessarily mean they carry more responsibility for the problem; in a sense, they are the "canary in the coal mine".

If you suspect the problem is confined to one database (or a few), then you can increase the logging detail by setting Log_Update=2. This will give you the start time for every view in every database that the Update task indexes. If you see particular views taking a long time, then you can examine the design of those views. If no databases stand out, then you might want to see whether the constant indexing occurs around the clock or just during business hours. If it's around the clock, this might point to large quantities of data changing in your databases; for example, you may be programmatically synchronizing many gigabytes of data throughout the day, not realizing the cost this brings in terms of indexing. If slow indexing occurs only during business hours, then perhaps the user/data load has not been planned out well for this server: as the community of users ramps up in the morning, the server starts falling behind, and never catches up until evening. There are server statistics that can help you determine whether or not this is the case. (These server statistics go beyond the scope of this book, but you can begin your investigation by searching the various Notes/Domino forums for "server AND performance AND statistics".)

As may be obvious at this point, troubleshooting can be quite time-consuming. The key is to make sure that you think through each step so that it either eliminates something important or gives you a forward path. Otherwise, you can find yourself still gathering information weeks and months later, with users and management feeling very frustrated.

Before moving on from this section, let's take a quick look at agent logging. Agent Manager can run multiple agents in different databases, as determined by settings in your server document. Typically, production servers only allow two or three concurrent agents to run during business hours, and these are marked in the log as Executive '1', Executive '2', and so on. If your server is often busy with agent execution, you can track Executive '1' and see how many different agents it runs, and for how long. If there are big gaps between when one agent starts and when the next one does (for Executive '1'), this might raise the suspicion that the first agent took that whole time to execute. To verify this, turn up the logging by setting the Notes.ini variable debug_amgr=*. (This will output a fair amount of information into your log, so it's best not to leave it on for too long, but normally one day is not a problem.) Doing this will give you a very important piece of information: the number of "ticks" it took for the agent to run.
One second equals 100 ticks, so if the agent takes 246,379 ticks, this equals 2,463 seconds (about 41 minutes). As a general rule, you want scheduled agents to run in seconds, not minutes; so any agent that is taking this long will require some examination. In the next section, we will talk about some other ways you can identify problematic agents.

Domino Domain Monitoring (DDM)

Every once in a while, a killer feature is introduced: one so good, so important, so helpful, that after using it, we just shake our heads and wonder how we ever managed without it for so long. Domino Domain Monitoring (DDM) is just such a feature. DDM is too large to be completely covered in this one section, so we will confine our overview to what it can do in terms of troubleshooting applications. For a more thorough explanation of DDM and all its features, see the book, Upgrading to Lotus Notes and Domino (www.packtpub.com/upgrading_lotus/book).

In the events4.nsf database, you will find a new group of documents you can create for tracking agent or application performance. On Domino 7 servers, a new database is created automatically with the filename ddm.nsf. This stores the DDM output you will examine. For application troubleshooting, some of the most helpful areas to track using DDM are the following:

- Full-text index needs to be built. If you have agents that are creating a full-text index on the fly because the database has no full-text index built, DDM can track that potential problem for you. Especially useful is the fact that DDM compiles the frequency per database, so (for instance) you can see whether it happens once per month or once per hour. Creating full-text indexes on the fly can result in a significant demand on server resources, so having this notification is very useful. We discuss an example of this later in this section.
- Agent security warnings. You can manually examine the log to try to find errors about agents not being able to execute due to insufficient access. However, DDM will do this for you, making it much easier to find (and therefore fix) such problems.
- Resource utilization. You can track memory, CPU, and time utilization of your agents as run by Agent Manager or by the HTTP task. This means that at any time you can open the ddm.nsf database and spot the worst offenders in these categories, over your entire server/domain. We will discuss an example of CPU usage later in this section.

The following illustration shows the new set of DDM views in the events4.nsf (Monitoring configuration) database:

The following screenshot displays the By Probe Server view after we've made a few document edits:

Notice that there are many probes included out-of-the-box (identified by the property "author = Lotus Notes Template Development") but set to disabled. In this view, there are three that have been enabled (the ones with checkmarks), which were created by one of the authors of this book. If you edit the probe document highlighted above, Default Application Code/Agents Evaluated By CPU Usage (Agent Manager), you will see that the document consists of three sections. The first section is where you choose the type of probe (in this case Application Code) and the subtype (in this case Agents Evaluated By CPU Usage). The second section allows you to choose the servers to run against, and whether you want this probe to run against agents/code executed by Agent Manager or by the HTTP task (as shown in the following screenshot). This is an important distinction.
For one thing, they are different tasks, and therefore one can hit a limit while the other still has room to "breathe". But perhaps more significantly, if you choose a subtype of Agents Evaluated By Memory Usage, then the algorithms used to evaluate whether or not an agent is using too much memory are very different. Agents run by the HTTP task will be judged much more harshly than those run by the Agent Manager task. This is because with the HTTP task, it is possible to run the same agent with up to hundreds of thousands of concurrent executions. But with Agent Manager, you are effectively limited to ten concurrent instances, and none within the same database.

The third section allows you to set your threshold for when DDM should report the activity:

You can select up to four levels of warning: Fatal, Failure, Warning (High), and Warning (Low). Note that you do not have the ability to change the severity labels (which appear as icons in the view). Unless you change the database design of ddm.nsf, the icons displayed in the view and documents are non-configurable. Experiment with these settings until you find the approach that is most useful for your corporation. Typically, customers start by overwhelming themselves with information, and then fine-tune the probes so that much less information is reported. In this example, only two statuses are enabled: one for six seconds, with a label of Warning (High), and one for 60 seconds, with a label of Failure.

Here is a screenshot of the DDM database:

Notice that there are two Application Code results, one with a status of Failure (because that agent ran for more than 60 seconds), and one with a status of Warning (High) (because that agent ran for more than six seconds but less than 60 seconds). These are the parameters set in the Probe document shown previously, which can easily be changed by editing that Probe document. If you want these labels to be different, you must enable different rows in the Probe document.

If you open one of these documents, there are three sections. The top section gives header information about this event, such as the server name, the database and agent name, and so on. The second section includes the following table, with a tab for the most recent infraction and a tab for previous infractions. This allows you to see how often the problem is occurring, and with what severity. The third section provides some possible solutions, and (if applicable) automation. For example, in our case, you might want to "profile" your agent. (We will profile one of our agents in the final section of this article.)

DDM can capture full-text operations against a database that is not full-text indexed. It tracks the number of times this happens, so you can decide whether to full-text index the database, change the agent, or neither. For a more complete list of the errors and problems that DDM can help resolve, check the Domino 7 online help or the product documentation (www.lotus.com).

Agent Profiler

If any of the troubleshooting tips or techniques we've discussed in this article causes you to look at an agent and think, "I wonder what makes this agent so slow", then the Agent Profiler should be the next tool to consider. Agent Profiler is another new feature introduced in Notes/Domino 7. It gives you a breakdown of many methods/properties in your LotusScript agent, telling you how often each one was executed and how long it took to execute.
In Notes/Domino 7, the second (security) tab of the Agent properties now includes a checkbox labeled Profile this agent. You can select this option if you want an agent to be profiled. The next time the agent runs, a profile document is created in the database and filled with the information from that execution. This document is then updated every time the agent runs. You can view these results from the agent view by highlighting your agent and selecting Agent | View Profile Results. The following is a profile for an agent that performed slow mail searches:

Although this doesn't completely measure (and certainly does not completely troubleshoot) your agents, it is an important step forward in troubleshooting code. Imagine the alternative: dozens of print statements, and then hours of collating results!

Summary

In closing, we hope that this article has opened your eyes to new possibilities in troubleshooting, both in terms of techniques and new Notes/Domino 7 features. Every environment has applications that users wish ran faster, but with a bit of care, you can troubleshoot your performance problems and find resolutions. After you have your servers running Notes/Domino 7, you can use DDM and Agent Profiler (both exceptionally easy to use) to help nail down poorly performing code in your applications. These tools really open a window on what had previously been a room full of mysterious behavior. Full-text indexing on the fly, code that uses too much memory, and long-running agents are all quickly identified by Domino Domain Monitoring (DDM). Try it!

Interacting with Databases through the Java Persistence API

We will look into:

- Creating our first JPA entity
- Interacting with JPA entities with the entity manager
- Generating forms in JSF pages from JPA entities
- Generating JPA entities from an existing database schema
- JPA named queries and JPQL
- Entity relationships
- Generating complete JSF applications from JPA entities

Creating Our First JPA Entity

JPA entities are Java classes whose fields are persisted to a database by the JPA API. JPA entities are Plain Old Java Objects (POJOs); as such, they don't need to extend any specific parent class or implement any specific interface. A Java class is designated as a JPA entity by decorating it with the @Entity annotation.

In order to create and test our first JPA entity, we will be creating a new web application using the JavaServer Faces framework. In this example we will name our application jpaweb. As with all of our examples, we will be using the bundled GlassFish application server.

To create a new JPA entity, we need to right-click on the project and select New | Entity Class. After doing so, NetBeans presents the New Entity Class wizard. At this point, we should specify the values for the Class Name and Package fields (Customer and com.ensode.jpaweb in our example), then click on the Create Persistence Unit... button.

The Persistence Unit Name field is used to identify the persistence unit that will be generated by the wizard; it will be defined in a JPA configuration file named persistence.xml that NetBeans will automatically generate from the Create Persistence Unit wizard. The Create Persistence Unit wizard will suggest a name for our persistence unit; in most cases the default can be safely accepted.

JPA is a specification for which several implementations exist. NetBeans supports several JPA implementations, including Toplink, Hibernate, KODO, and OpenJPA. Since the bundled GlassFish application server includes Toplink as its default JPA implementation, it makes sense to take this default value for the Persistence Provider field when deploying our application to GlassFish.

Before we can interact with a database from any Java EE 5 application, a database connection pool and data source need to be created in the application server. A database connection pool contains connection information that allows us to connect to our database, such as the server name, port, and credentials. The advantage of using a connection pool instead of directly opening a JDBC connection to a database is that database connections in a connection pool are never closed; they are simply allocated to applications as they need them. This results in performance improvements, since the operations of opening and closing database connections are expensive in terms of performance.

Data sources allow us to obtain a connection from a connection pool by obtaining an instance of javax.sql.DataSource via JNDI, then invoking its getConnection() method to obtain a database connection from the pool. When dealing with JPA, we don't need to obtain a reference to a data source directly; it is all done automatically by the JPA API, but we still need to indicate the data source to use in the application's persistence unit.

NetBeans comes with a few data sources and connection pools pre-configured. We could use one of these pre-configured resources for our application; however, NetBeans also allows creating these resources "on the fly", which is what we will be doing in our example. To create a new data source we need to select the New Data Source... item from the Data Source combo box.
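The JPA code we are about to write never touches this machinery directly, but it is worth seeing what a data source gives us in code. As a minimal sketch (the jdbc/jpaintro JNDI name is an assumption for illustration, not something the wizard has created for us), obtaining a pooled connection outside JPA looks like this:

import java.sql.Connection;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class DataSourceExample {

    public void useConnection() throws Exception {
        // Look up the data source by its JNDI name (assumed to be jdbc/jpaintro)
        InitialContext ctx = new InitialContext();
        DataSource ds = (DataSource) ctx.lookup("jdbc/jpaintro");
        // Borrow a connection from the pool
        Connection conn = ds.getConnection();
        try {
            // ... use the connection ...
        } finally {
            // Returns the connection to the pool rather than physically closing it
            conn.close();
        }
    }
}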
A data source needs to interact with a database connection pool. NetBeans comes pre-configured with a few connection pools out of the box, but just as with data sources, it allows us to create a new connection pool "on demand". In order to do this, we need to select the New Database Connection... item from the Database Connection combo box.

NetBeans includes JDBC drivers for a few Relational Database Management Systems (RDBMS) such as JavaDB, MySQL, and PostgreSQL "out of the box". JavaDB is bundled with both GlassFish and NetBeans, therefore we picked JavaDB for our example. This way we avoid having to install an external RDBMS. For RDBMS systems that are not supported out of the box, we need to obtain a JDBC driver and let NetBeans know of its location by selecting New Driver from the Name combo box. We then need to navigate to the location of a JAR file containing the JDBC driver. Consult your RDBMS documentation for details.

JavaDB is installed on our workstation, therefore the server name to use is localhost. By default, JavaDB listens on port 1527, therefore that is the port we specify in the URL. We wish to connect to a database called jpaintro, therefore we specify it as the database name. Since the jpaintro database does not exist yet, we pass the attribute create=true to JavaDB; this attribute is used to create the database if it doesn't exist yet. Every JavaDB database contains a schema named APP, and each user by default uses a schema named after his/her own login name, so the easiest way to get going is to create a user named "APP" and select a password for this user.

Clicking on the Show JDBC URL checkbox reveals the JDBC URL for the connection we are setting up. The New Database Connection wizard warns us of potential security risks when choosing to let NetBeans remember the password for the database connection. Database passwords are scrambled (but not encrypted) and stored in an XML file under the .netbeans/[netbeans version]/config/Databases/Connections directory. If we follow common security practices, such as locking our workstation when we walk away from it, the risks of having NetBeans remember database passwords will be minimal.

Once we have created our new data source and connection pool, we can continue configuring our persistence unit. It is a good idea to leave the Use Java Transaction APIs checkbox checked. This will instruct our JPA implementation to use the Java Transaction API (JTA) to allow the application server to manage transactions. If we uncheck this box, we will need to write code to manage transactions manually.

Most JPA implementations allow us to define a table generation strategy. We can instruct our JPA implementation to create tables for our entities when we deploy our application, to drop the tables and then regenerate them when our application is deployed, or not to create any tables at all. NetBeans allows us to specify the table generation strategy for our application by clicking the appropriate value in the Table Generation Strategy radio button group. When working with a new application, it is a good idea to select the Drop and Create table generation strategy. This will allow us to add, remove, and rename fields in our JPA entity at will without having to make the same changes in the database schema. When selecting this table generation strategy, tables in the database schema will be dropped and recreated, therefore any data previously persisted will be lost.
Once we have created our new data source, database connection, and persistence unit, we are ready to create our new JPA entity. We can do so by simply clicking on the Finish button. At this point NetBeans generates the source for our JPA entity.

JPA allows the primary key field of a JPA entity to map to any column type (VARCHAR, NUMBER). It is best practice to have a numeric surrogate primary key, that is, a primary key that serves only as an identifier and has no business meaning in the application. Selecting the default primary key type of Long will allow a wide range of values to be available for the primary keys of our entities.

package com.ensode.jpaweb;

import java.io.Serializable;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;

@Entity
public class Customer implements Serializable {

    private static final long serialVersionUID = 1L;
    private Long id;

    public void setId(Long id) {
        this.id = id;
    }

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    public Long getId() {
        return id;
    }

    //Other generated methods (hashCode(), equals() and
    //toString()) omitted for brevity.
}

As we can see, a JPA entity is a standard Java object. There is no need to extend any special class or implement any special interface. What differentiates a JPA entity from other Java objects are a few JPA-specific annotations.

The @Entity annotation is used to indicate that our class is a JPA entity. Any object we want to persist to a database via JPA must be annotated with this annotation. The @Id annotation is used to indicate which field in our JPA entity is its primary key. The primary key is a unique identifier for our entity; no two entities may have the same value for their primary key field. This annotation can be placed just above the getter method for the primary key, which is the strategy that the NetBeans wizard follows. It is also correct to specify the annotation right above the field declaration. The @Entity and @Id annotations are the bare minimum two annotations that a class needs in order to be considered a JPA entity.

JPA allows primary keys to be automatically generated. In order to take advantage of this functionality, the @GeneratedValue annotation can be used. As we can see, the NetBeans-generated JPA entity uses this annotation. This annotation is used to indicate the strategy to use to generate primary keys. All possible primary key generation strategies are listed below:

- GenerationType.AUTO: Indicates that the persistence provider will automatically select a primary key generation strategy. Used by default if no primary key generation strategy is specified.
- GenerationType.IDENTITY: Indicates that an identity column in the database table the JPA entity maps to must be used to generate the primary key value.
- GenerationType.SEQUENCE: Indicates that a database sequence should be used to generate the entity's primary key value.
- GenerationType.TABLE: Indicates that a database table should be used to generate the entity's primary key value.

In most cases, the GenerationType.AUTO strategy works properly, therefore it is almost always used. For this reason the New Entity Class wizard uses this strategy. When using the sequence or table generation strategies, we might have to indicate the sequence or table used to generate the primary keys.
These can be specified by using the @SequenceGenerator and @TableGenerator annotations, respectively. Consult the Java EE 5 JavaDoc at http://java.sun.com/javaee/5/docs/api/ for details. For further knowledge on primary key generation strategies you can refer to EJB 3 Developer Guide by Michael Sikora, which is another book by Packt Publishing (http://www.packtpub.com/developer-guide-for-ejb3/book).

Adding Persistent Fields to Our Entity

At this point, our JPA entity contains a single field, its primary key. Admittedly this is not very useful; we need to add a few fields to be persisted to the database.

package com.ensode.jpaweb;

import java.io.Serializable;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;

@Entity
public class Customer implements Serializable {

    private static final long serialVersionUID = 1L;
    private Long id;
    private String firstName;
    private String lastName;

    public void setId(Long id) {
        this.id = id;
    }

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    public Long getId() {
        return id;
    }

    public String getFirstName() {
        return firstName;
    }

    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }

    public String getLastName() {
        return lastName;
    }

    public void setLastName(String lastName) {
        this.lastName = lastName;
    }

    //Additional methods omitted for brevity
}

In this modified version of our JPA entity, we added two fields to be persisted to the database: firstName will be used to store the user's first name, and lastName will be used to store the user's last name.

JPA entities need to follow standard JavaBean coding conventions. This means that they must have a public constructor that takes no arguments (one is automatically generated by the Java compiler if we don't specify any other constructors), and all fields must be private and accessed through getter and setter methods.

Automatically Generating Getters and Setters

In NetBeans, getter and setter methods can be generated automatically. Simply declare new fields as usual, then use the "insert code" keyboard shortcut (the default is Alt+Insert), select Getter and Setter from the resulting pop-up window, click on the checkbox next to the class name to select all fields, and click on the Generate button.

Before we can use JPA to persist our entity's fields into our database, we need to write some additional code.

Creating a Data Access Object (DAO)

It is a good idea to follow the DAO design pattern whenever we write code that interacts with a database. The DAO design pattern keeps all database access functionality in DAO classes. This has the benefit of creating a clear separation of concerns, leaving other layers in our application, such as the user interface logic and the business logic, free of any persistence logic.

There is no special procedure in NetBeans to create a DAO. We simply follow the standard procedure to create a new class by selecting File | New, selecting Java as the category and Java Class as the file type, then entering a name and a package for the class. In our example, we will name our class CustomerDAO and place it in the com.ensode.jpaweb package. At this point, NetBeans creates a very simple class containing only the package and class declarations.

To take complete advantage of Java EE features such as dependency injection, we need to make our DAO a JSF managed bean.
This can be accomplished by simply opening faces-config.xml, clicking its XML tab, then right-clicking on it and selecting JavaServer Faces | Add Managed Bean. We get the Add Managed Bean dialog as seen here:

We need to enter a name, fully qualified name, and scope for our managed bean (which, in our case, is our DAO), then click on the Add button. This action results in our DAO being declared as a managed bean in our application's faces-config.xml configuration file.

<managed-bean>
    <managed-bean-name>CustomerDAO</managed-bean-name>
    <managed-bean-class>
        com.ensode.jpaweb.CustomerDAO
    </managed-bean-class>
    <managed-bean-scope>session</managed-bean-scope>
</managed-bean>

We could at this point start writing our JPA code manually, but with NetBeans there is no need to do so. We can simply right-click on our code and select Persistence | Use Entity Manager, and most of the work is automatically done for us. Here is how our code looks after this trivial procedure:

package com.ensode.jpaweb;

import javax.annotation.Resource;
import javax.naming.Context;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@PersistenceContext(name = "persistence/LogicalName", unitName = "jpawebPU")
public class CustomerDAO {

    @Resource
    private javax.transaction.UserTransaction utx;

    protected void persist(Object object) {
        try {
            Context ctx = (Context) new javax.naming.InitialContext()
                    .lookup("java:comp/env");
            utx.begin();
            EntityManager em = (EntityManager) ctx.lookup("persistence/LogicalName");
            em.persist(object);
            utx.commit();
        } catch (Exception e) {
            java.util.logging.Logger.getLogger(getClass().getName()).log(
                    java.util.logging.Level.SEVERE, "exception caught", e);
            throw new RuntimeException(e);
        }
    }
}

All highlighted code is automatically generated by NetBeans. The main thing NetBeans does here is add a method that will automatically insert a new row in the database, effectively persisting our entity's properties. As we can see, NetBeans automatically generates all the necessary import statements.

Additionally, our new class is automatically decorated with the @PersistenceContext annotation. This annotation allows us to declare that our class depends on an EntityManager (we'll discuss EntityManager in more detail shortly). The value of its name attribute is a logical name we can use when doing a JNDI lookup for our EntityManager. NetBeans by default uses persistence/LogicalName as the value for this attribute.

The Java Naming and Directory Interface (JNDI) is an API we can use to obtain resources, such as database connections and JMS queues, from a directory service.

The value of the unitName attribute of the @PersistenceContext annotation refers to the name we gave our application's persistence unit.

NetBeans also creates a new instance variable of type javax.transaction.UserTransaction. This variable is needed since all JPA code must be executed in a transaction. UserTransaction is part of the Java Transaction API (JTA). This API allows us to write code that is transactional in nature. Notice that the UserTransaction instance variable is decorated with the @Resource annotation. This annotation is used for dependency injection: in this case, an instance of a class of type javax.transaction.UserTransaction will be instantiated automatically at run time, without having to do a JNDI lookup or explicitly instantiate the class.

Dependency injection is a feature of Java EE 5 not present in previous versions of J2EE, but one that was available and made popular in the Spring framework.
With standard J2EE code, it was necessary to write boilerplate JNDI lookup code very frequently in order to obtain resources. To alleviate this situation, Java EE 5 made dependency injection part of the standard.

The next thing we see is that NetBeans added a persist method that will persist a JPA entity, automatically inserting a new row containing our entity's fields into the database. As we can see, this method takes an instance of java.lang.Object as its single parameter. The reason for this is that the method can be used to persist any JPA entity (although in our example, we will use it to persist only instances of our Customer entity).

The first thing the generated method does is obtain an instance of javax.naming.InitialContext by doing a JNDI lookup on java:comp/env. This JNDI name is the root context for all Java EE 5 components. The next thing the method does is initiate a transaction by invoking utx.begin(). Notice that since the value of the utx instance variable was injected via dependency injection (by simply decorating its declaration with the @Resource annotation), there is no need to initialize this variable.

Next, the method does a JNDI lookup to obtain an instance of javax.persistence.EntityManager. This class contains a number of methods to interact with the database. Notice that the JNDI name used to obtain an EntityManager matches the value of the name attribute of the @PersistenceContext annotation. Once an instance of EntityManager is obtained from the JNDI lookup, we persist our entity's properties by simply invoking the persist() method on it, passing the entity as a parameter to this method. At this point, the data in our JPA entity is inserted into the database.

In order for our database insert to take effect, we must commit our transaction, which is done by invoking utx.commit().

It is always a good idea to look for exceptions when dealing with JPA code. The generated method does this, and if an exception is caught, it is logged and a RuntimeException is thrown. Throwing a RuntimeException has the effect of rolling back our transaction automatically, while letting the invoking code know that something went wrong in our method. The UserTransaction class also has a rollback() method that we can use to roll back our transaction without having to throw a RuntimeException.

At this point we have all the code we need to persist our entity's properties in the database. Now we need to write some additional code for the user interface part of our application. NetBeans can generate a rudimentary JSF page that will help us with this task.
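Before NetBeans generates that page, it is worth sketching roughly how the UI layer might invoke the DAO. The following backing bean is an assumption for illustration only; the CustomerController class, its save() action, and the "success" navigation outcome are not generated by NetBeans:

package com.ensode.jpaweb;

public class CustomerController {

    //Assumed to be injected via a managed-property entry in faces-config.xml
    private CustomerDAO customerDAO;

    private String firstName;
    private String lastName;

    //A JSF action method: builds an entity from the form fields and persists it
    public String save() {
        Customer customer = new Customer();
        customer.setFirstName(firstName);
        customer.setLastName(lastName);
        customerDAO.persist(customer);
        return "success"; //hypothetical navigation outcome
    }

    public void setCustomerDAO(CustomerDAO customerDAO) {
        this.customerDAO = customerDAO;
    }

    public String getFirstName() { return firstName; }
    public void setFirstName(String firstName) { this.firstName = firstName; }
    public String getLastName() { return lastName; }
    public void setLastName(String lastName) { this.lastName = lastName; }
}

Because CustomerController lives in the same package as CustomerDAO, it can call the protected persist() method directly.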

EJB 3 Security

Authentication and Authorization in Java EE Container Security

There are two aspects covered by Java EE container security: authentication and authorization. Authentication is the process of verifying that users are who they claim to be. Typically this is performed by the user providing credentials such as a password. Authorization, or access control, is the process of restricting operations to specific users or categories of users. The EJB specification provides two kinds of authorization: declarative and programmatic, as we shall see later in the article.

The Java EE security model introduces a few concepts common to both authentication and authorization. A principal is an entity that we wish to authenticate. The format of a principal is application-specific, but an example is a username. A role is a logical grouping of principals. For example, we can have administrator, manager, and employee roles. The scope over which a common security policy applies is known as a security domain, or realm.

Authentication

For authentication, every Java EE compliant application server provides the Java Authentication and Authorization Service (JAAS) API. JAAS supports any underlying security system, so we have a common API regardless of whether authentication is username/password verification against a database or, for example, iris or fingerprint recognition. The JAAS API is fairly low-level, and most application servers provide authentication mechanisms at a higher level of abstraction. These authentication mechanisms are application-server specific, however. We will not cover JAAS any further here, but will look at authentication as provided by the GlassFish application server.

GlassFish Authentication

There are three actors we need to define on the GlassFish application server for authentication purposes: users, groups, and realms. A user is an entity that we wish to authenticate; a user is synonymous with a principal. A group is a logical grouping of users and is not the same as a role. A group's scope is global to the application server, whereas a role is a logical grouping of users whose scope is limited to a specific application. Of course, for some applications we may decide that roles are identical to groups. For other applications we need some mechanism for mapping the roles onto groups. We shall see how this is done later.

A realm, as we have seen, is the scope over which a common security policy applies. GlassFish provides three kinds of realms: file, certificate, and admin-realm. The file realm stores user, group, and realm credentials in a file named keyfile. This file is stored within the application server file system. A file realm is used by web clients using HTTP, or by EJB application clients. The certificate realm stores a digital certificate and is used for authenticating web clients using HTTPS. The admin-realm is similar to the file realm and is used for storing administrator credentials.

GlassFish comes pre-configured with a default file realm named file. We can add, edit, and delete users, groups, and realms using the GlassFish administrator console. We can also use the create-file-user option of the asadmin command line utility.
To add a user named scott to a group named bankemployee in the file realm, we would use the following command (wrapped here in an Ant target):

<target name="create-file-user">
    <exec executable="${glassfish.home}/bin/asadmin"
          failonerror="true" vmlauncher="false">
        <arg line="create-file-user --user admin --passwordfile userpassword
                   --groups bankemployee scott"/>
    </exec>
</target>

--user specifies the GlassFish administrator username, admin in our example.

--passwordfile specifies the name of the file containing password entries. In our example this file is userpassword. Passwords for users other than GlassFish administrators are identified by AS_ADMIN_USERPASSWORD. In our example the content of the userpassword file is:

AS_ADMIN_USERPASSWORD=xyz

This indicates that the user's password is xyz.

--groups specifies the groups associated with this user (there may be more than one group). In our example there is just one group, named bankemployee. Multiple groups are colon-delineated. For example, if the user belongs to both the bankemployee and bankcustomer groups, we would specify:

--groups bankemployee:bankcustomer

The final entry is the operand, which specifies the name of the user to be created. In our example this is scott. There is a corresponding asadmin delete-file-user option to remove a user from the file realm.

Mapping Roles to Groups

The Java EE specification states that there must be a mechanism for mapping local, application-specific roles to global roles on the application server. Local roles are used by an EJB for authorization purposes. The actual mapping mechanism is application-server specific. As we have seen in the case of GlassFish, the global application server roles are called groups. In GlassFish, local roles are referred to simply as roles.

Suppose we want to map an employee role to the bankemployee group. We would need to create a GlassFish-specific deployment descriptor, sun-ejb-jar.xml, with the following element:

<security-role-mapping>
    <role-name>employee</role-name>
    <group-name>bankemployee</group-name>
</security-role-mapping>

We also need to access the configuration-security screen in the administrator console and disable the Default Principal To Role Mapping flag. If the flag is enabled, then the default is to map a group onto a role with the same name, so the bankemployee group will be mapped to the bankemployee role.

We can leave the default values for the other properties on the configuration-security screen. Many of these features are for advanced use, where third-party security products can be plugged in or security properties customized. Consequently, we will give only a brief description of these properties here:

- Security Manager: This refers to the JVM security manager, which performs code-based security checks. If the security manager is disabled, GlassFish will have better performance. However, even if the security manager is disabled, GlassFish still enforces standard Java EE authentication/authorization.
- Audit Logging: If this is enabled, GlassFish will provide an audit trail of all authentication and authorization decisions through audit modules. Audit modules provide information on incoming requests, outgoing responses, and whether authorization was granted or denied. Audit logging applies to web-tier and EJB-tier authentication and authorization. A default audit module is provided, but custom audit modules can also be created.
- Default Realm: This is the default realm used for authentication. Applications use this realm unless they specify a different realm in their deployment descriptor. The default value is file; other possible values are admin-realm and certificate. We discussed GlassFish realms in the previous section.
- Default Principal: This is the user name used by GlassFish at run time if no principal is provided. Normally this is not required, so the property can be left blank.
- Default Principal Password: This is the password of the default principal.
- JACC: This is the class name of a JACC (Java Authorization Contract for Containers) provider. This enables the GlassFish administrator to set up third-party plug-in modules conforming to the JACC standard to perform authorization.
- Audit Modules: If we have created custom modules to perform audit logging, we would select from this list.
- Mapped Principal Class: This is only applicable when Default Principal To Role Mapping is enabled. The mapped principal class is used to customize the java.security.Principal implementation class used in the default principal-to-role mapping. If no value is entered, the com.sun.enterprise.deployment.Group implementation of java.security.Principal is used.

Authenticating an EJB Application Client

Suppose we want to invoke an EJB, BankServiceBean, from an application client, and we want the application client container to authenticate the client. There are a number of steps we first need to take which are application-server specific. We will assume that all roles have the same name as the corresponding application server groups. In the case of GlassFish, we need to use the administrator console and enable Default Principal To Role Mapping. Next, we need to define a group named bankemployee with one or more associated users.

An EJB application client needs to use IOR (Interoperable Object Reference) authentication. The IOR protocol was originally created for CORBA (Common Object Request Broker Architecture), but all Java EE compliant containers support IOR. An EJB deployed on one Java EE compliant vendor may be invoked by a client deployed on another Java EE compliant vendor; security interoperability between these vendors is achieved using the IOR protocol. In our case, the client and target EJB both happen to be deployed on the same vendor, but we still use IOR for propagating security details from the application client container to the EJB container.

IORs are configured in vendor-specific XML files rather than the standard ejb-jar.xml file. In the case of GlassFish, this is done within the <ior-security-config> element in the sun-ejb-jar.xml deployment descriptor file. We also need to specify the invoked EJB, BankServiceBean, in the deployment descriptor. An example of the sun-ejb-jar.xml deployment descriptor is shown below:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE sun-ejb-jar PUBLIC "-//Sun Microsystems, Inc.//DTD Application Server 9.0 EJB 3.0//EN"
    "http://www.sun.com/software/appserver/dtds/sun-ejb-jar_3_0-0.dtd">
<sun-ejb-jar>
    <enterprise-beans>
        <ejb>
            <ejb-name>BankServiceBean</ejb-name>
            <ior-security-config>
                <as-context>
                    <auth-method>USERNAME_PASSWORD</auth-method>
                    <realm>default</realm>
                    <required>true</required>
                </as-context>
            </ior-security-config>
        </ejb>
    </enterprise-beans>
</sun-ejb-jar>

The as in <as-context> stands for the IOR authentication service. This element specifies authentication mechanism details. The <auth-method> element specifies the authentication method. This is set to USERNAME_PASSWORD, which is the only value for an application client.
The <realm> element specifies the realm in which the client is authenticated. The <required> element specifies whether the above authentication method is required to be used for client authentication.

When creating the corresponding EJB JAR file, the sun-ejb-jar.xml file should be included in the META-INF directory, as follows:

<target name="package-ejb" depends="compile">
    <jar jarfile="${build.dir}/BankService.jar">
        <fileset dir="${build.dir}">
            <include name="ejb30/session/**" />
            <include name="ejb30/entity/**" />
        </fileset>
        <metainf dir="${config.dir}">
            <include name="persistence.xml" />
            <include name="sun-ejb-jar.xml" />
        </metainf>
    </jar>
</target>

As soon as we run the application client, GlassFish will prompt us with a username and password form, as follows:

If we reply with the username scott and password xyz, the program will run. If we run the application with an invalid username or password, we will get the following error message:

javax.ejb.EJBException: nested exception is: java.rmi.AccessException: CORBA NO_PERMISSION 9998 .....

EJB Authorization

Authorization, or access control, is the process of restricting operations to specific roles. In contrast with authentication, EJB authorization is completely application-server independent. The EJB specification provides two kinds of authorization: declarative and programmatic. With declarative authorization, all security checks are performed by the container; an EJB's security requirements are declared using annotations or deployment descriptors. With programmatic authorization, security checks are hard-coded in the EJB's code using API calls. However, even with programmatic authorization, the container is still responsible for authentication and for assigning roles to principals.

Declarative Authorization

As an example, consider the BankServiceBean stateless session bean with methods findCustomer(), addCustomer(), and updateCustomer():

package ejb30.session;

import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import ejb30.entity.Customer;
import javax.persistence.PersistenceContext;
import javax.annotation.security.RolesAllowed;
import javax.annotation.security.PermitAll;
import java.util.*;

@Stateless
@RolesAllowed("bankemployee")
public class BankServiceBean implements BankService {

    @PersistenceContext(unitName="BankService")
    private EntityManager em;
    private Customer cust;

    @PermitAll
    public Customer findCustomer(int custId) {
        return ((Customer) em.find(Customer.class, custId));
    }

    public void addCustomer(int custId, String firstName, String lastName) {
        cust = new Customer();
        cust.setId(custId);
        cust.setFirstName(firstName);
        cust.setLastName(lastName);
        em.persist(cust);
    }

    public void updateCustomer(Customer cust) {
        Customer mergedCust = em.merge(cust);
    }
}

We have prefixed the bean class with the annotation:

@RolesAllowed("bankemployee")

This specifies the roles allowed to access any of the bean's methods. So only users belonging to the bankemployee role may access the addCustomer() and updateCustomer() methods. More than one role can be specified by means of a brace-delineated list, as follows:

@RolesAllowed({"bankemployee", "bankcustomer"})

We can also prefix a method with @RolesAllowed, in which case the method annotation will override the class annotation. The @PermitAll annotation allows unrestricted access to a method, overriding any class-level @RolesAllowed annotation.
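To illustrate these overrides together, here is a hypothetical bean (not part of the book's banking example; the ReportService interface is assumed) showing how method-level annotations take precedence over the class-level default:

package ejb30.session;

import javax.annotation.security.PermitAll;
import javax.annotation.security.RolesAllowed;
import javax.ejb.Stateless;

//The class-level annotation sets the default; each method-level
//annotation overrides it for that method alone.
@Stateless
@RolesAllowed("bankemployee")
public class ReportServiceBean implements ReportService {

    //Inherits the class default: only bankemployee may call this
    public String monthlyReport() {
        return "monthly report";
    }

    //Overrides the class default: bankcustomer may also call this
    @RolesAllowed({"bankemployee", "bankcustomer"})
    public String accountSummary() {
        return "account summary";
    }

    //Overrides the class default: any caller may invoke this method
    @PermitAll
    public String openingHours() {
        return "9am-5pm";
    }
}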
As with EJB 3 in general, we can use deployment descriptors as alternatives to the @RolesAllowed and @PermitAll annotations.

Denying Authorization

Suppose we want to deny all users access to the BankServiceBean.updateCustomer() method. We can do this using the @DenyAll annotation:

@DenyAll
public void updateCustomer(Customer cust) {
    Customer mergedCust = em.merge(cust);
}

Of course, if you have access to the source code, you could simply delete the method in question rather than using @DenyAll. However, suppose you do not have access to the source code and have received the EJB from a third party. If you in turn do not want your clients accessing a given method, then you would need to use the <exclude-list> element in the ejb-jar.xml deployment descriptor:

<?xml version="1.0" encoding="UTF-8"?>
<ejb-jar version="3.0" xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
             http://java.sun.com/xml/ns/javaee/ejb-jar_3_0.xsd">
    <enterprise-beans>
        <session>
            <ejb-name>BankServiceBean</ejb-name>
        </session>
    </enterprise-beans>
    <assembly-descriptor>
        <exclude-list>
            <method>
                <ejb-name>BankServiceBean</ejb-name>
                <method-name>updateCustomer</method-name>
            </method>
        </exclude-list>
    </assembly-descriptor>
</ejb-jar>

EJB Security Propagation

Suppose a client with an associated role invokes, for example, EJB A, and EJB A in turn invokes EJB B. By default, the client's role is propagated to EJB B. However, you can specify with the @RunAs annotation that all methods of an EJB execute under a specific role. For example, suppose the addCustomer() method in the BankServiceBean EJB invokes the addAuditMessage() method of the AuditServiceBean EJB:

@Stateless
@RolesAllowed("bankemployee")
public class BankServiceBean implements BankService {

    private @EJB AuditService audit;
    ....

    public void addCustomer(int custId, String firstName, String lastName) {
        cust = new Customer();
        cust.setId(custId);
        cust.setFirstName(firstName);
        cust.setLastName(lastName);
        em.persist(cust);
        audit.addAuditMessage(1, "customer add attempt");
    }
    ...
}

Note that only a client with an associated role of bankemployee can invoke addCustomer(). If we prefix the AuditServiceBean class declaration with @RunAs("bankauditor"), then the container will run any method in AuditServiceBean as the bankauditor role, regardless of the role which invokes the method. Note that the @RunAs annotation is applied only at the class level; @RunAs cannot be applied at the method level.

@Stateless
@RunAs("bankauditor")
public class AuditServiceBean implements AuditService {

    @PersistenceContext(unitName="BankService")
    private EntityManager em;

    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void addAuditMessage(int auditId, String message) {
        Audit audit = new Audit();
        audit.setId(auditId);
        audit.setMessage(message);
        em.persist(audit);
    }
}

Programmatic Authorization

With programmatic authorization, the bean rather than the container controls authorization. The javax.ejb.SessionContext object provides two methods which support programmatic authorization: getCallerPrincipal() and isCallerInRole(). The getCallerPrincipal() method returns a java.security.Principal object. This object represents the caller, or principal, invoking the EJB. We can then use the Principal.getName() method to obtain the name of the principal.
We have done this in the addAccount() method of the BankServiceBean as follows:

Principal cp = ctx.getCallerPrincipal();
System.out.println("getname:" + cp.getName());

The isCallerInRole() method checks whether the principal belongs to a given role. For example, the code fragment below checks if the principal belongs to the bankcustomer role. If the principal does not belong to the bankcustomer role, we only persist the account if the balance is less than 99.

if (ctx.isCallerInRole("bankcustomer")) {
    em.persist(ac);
} else if (balance < 99) {
    em.persist(ac);
}

When using the isCallerInRole() method, we need to declare all the security role names used in the EJB code using the class-level @DeclareRoles annotation:

@DeclareRoles({"bankemployee", "bankcustomer"})

The code below shows the BankServiceBean EJB with all the programmatic authorization code described in this section:

package ejb30.session;

import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import ejb30.entity.Account;
import javax.persistence.PersistenceContext;
import javax.annotation.security.RolesAllowed;
import java.security.Principal;
import javax.annotation.Resource;
import javax.ejb.SessionContext;
import javax.annotation.security.DeclareRoles;
import java.util.*;

@Stateless
@DeclareRoles({"bankemployee", "bankcustomer"})
public class BankServiceBean implements BankService {

    @PersistenceContext(unitName="BankService")
    private EntityManager em;
    private Account ac;
    @Resource SessionContext ctx;

    @RolesAllowed({"bankemployee", "bankcustomer"})
    public void addAccount(int accountId, double balance, String accountType) {
        ac = new Account();
        ac.setId(accountId);
        ac.setBalance(balance);
        ac.setAccountType(accountType);
        Principal cp = ctx.getCallerPrincipal();
        System.out.println("getname:" + cp.getName());
        if (ctx.isCallerInRole("bankcustomer")) {
            em.persist(ac);
        } else if (balance < 99) {
            em.persist(ac);
        }
    }
    .....
}

Where we have a choice, declarative authorization is preferable to programmatic authorization. Declarative authorization avoids having to mix business code with security management code. We can change a bean's security policy by simply changing an annotation or deployment descriptor instead of modifying the logic of a business method. However, some security rules, such as the example above of only persisting an account within a balance limit, can only be handled by programmatic authorization. Declarative security is based only on the principal and the method being invoked, whereas programmatic security can take state into consideration.

Because an EJB is typically invoked from the web-tier by a servlet, JSP page, or JSF component, we will briefly mention Java EE web container security. The web-tier and EJB tier share the same security model, so the web-tier security model is based on the same concepts of principals, roles, and realms.
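As a small illustration of that shared model, here is a hypothetical servlet (not part of the book's example; it assumes the URL is protected by the web container so the caller is already authenticated) showing how the same principal and roles are visible in both tiers:

import java.io.IOException;
import javax.ejb.EJB;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class BankServlet extends HttpServlet {

    //The EJB reference is injected by the web container
    @EJB
    private BankService bankService;

    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        //The web-tier view of the authenticated caller
        String user = request.getUserPrincipal().getName();
        if (request.isUserInRole("bankemployee")) {
            //The caller's identity propagates to the EJB container with this call
            bankService.addCustomer(1, "John", "Doe");
            response.getWriter().println("Customer added by " + user);
        } else {
            response.sendError(HttpServletResponse.SC_FORBIDDEN);
        }
    }
}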

Tapestry 5 Advanced Components

Following are some of the components we'll examine in Tapestry 5:

- The Grid component allows us to display data in a fairly sophisticated table. We are going to use it to display our collection of celebrities.
- The BeanEditForm component greatly simplifies creating forms for accepting user input. We shall use it for adding a new celebrity to our collection.
- The DateField component provides an easy and attractive way to enter or edit a date.
- The FCKEditor component is a rich text editor, and it is as easy to incorporate into a Tapestry 5 web application as a basic TextField. This is a third-party component, and the main point here is to show that using a library of custom components in a Tapestry 5 application requires no extra effort. It is likely that a similar core component will appear in a future version of the framework.

Grid Component

It is possible to display our collection of celebrities with the help of the Loop component. It isn't difficult, and in many cases that will be exactly the solution you need for the task at hand. But as the number of displayed items grows (our collection grows), different problems may arise. We might not want to display the whole collection on one page, so we'll need some kind of pagination mechanism and some controls to enable navigation from page to page. Also, it would be convenient to be able to sort celebrities by first name, last name, occupation, and so on.

All this can be achieved by adding more controls and more code to finally achieve the result that we want, but a table with pagination and sorted columns is a very common part of a user interface, and recreating it each time wouldn't be efficient. Thankfully, the Grid component brings with it plenty of ready-to-use functionality, and it is very easy to deal with.

Open the ShowAll.tml template in an IDE of your choice and remove the Loop component and all its content, together with the surrounding table:

<table width="100%">
    <tr t:type="loop" t:source="allCelebrities" t:value="celebrity">
        <td>
            <a href="#" t:type="PageLink" t:page="Details" t:context="celebrity.id">
                ${celebrity.lastName}
            </a>
        </td>
        <td>${celebrity.firstName}</td>
        <td>
            <t:output t:format="dateFormat" t:value="celebrity.dateOfBirth"/>
        </td>
        <td>${celebrity.occupation}</td>
    </tr>
</table>

In place of this code, add the following line:

<t:grid t:source="allCelebrities"/>

Run the application, log in to be able to view the collection, and you should see the following result:

Quite an impressive result for a single short line of code, isn't it? Not only are our celebrities now displayed in a neatly formatted table, but we can also sort the collection by clicking on the columns' headers. Also note that occupation now has only the first character capitalized: much better than the fully capitalized version we had before.

Here we see the results of some clever guesses on Tapestry's side. The only required parameter of the Grid component is source, the same as the required parameter of the Loop component. Through this parameter, Grid receives a number of objects of the same class. It takes the first object of this collection and finds out its properties. It tries to create a column for each property, transforming the property's name for the column's header (for example, the lastName property name gives the Last Name column header), and makes some additional sensible adjustments, like changing the case of the occupation property values in our example.
All this is quite impressive, but the table, as it is displayed now, has a number of deficiencies:

- All celebrities are displayed on one page, while we wanted to see how pagination works. This is because the default number of records per page for the Grid component is 25, more than we have in our collection at the moment.
- The last name of the celebrities does not provide a link to the Details page anymore.
- It doesn't make sense to show the Id column.
- The order of the columns is wrong. It would be more sensible to have the Last Name in the first column, then First Name, and finally the Date of Birth. By default, Tapestry will use the order in which getter methods are defined in the displayed class to determine the display order of the columns. In the Celebrity class, the getFirstName method is the first of the getters, and so the First Name column will go first, and so on.

There are also some other issues we might want to take care of, but let's first deal with these four.

Tweaking the Grid

First of all, let's change the number of records per page. Just add the following parameter to the component's declaration:

<t:grid t:source="allCelebrities" rowsPerPage="5"/>

Run the application, and here is what you should see:

You can now easily page through the records using the attractive pager control that appeared at the bottom of the table. If you would rather have the pager at the top, add another parameter to the Grid declaration:

<t:grid t:source="allCelebrities" rowsPerPage="5" pagerPosition="top"/>

You can even have two pagers, at the top and at the bottom, by specifying pagerPosition="both", or no pagers at all (pagerPosition="none"). In the latter case, however, you will have to provide some custom way of paging through records.

The next enhancement will be a link surrounding the celebrity's last name and linking to the Details page. We'll be adding a PageLink and will need to know which celebrity to link to, so we have the Grid store the object for the current row in the celebrity property, using the row parameter. This is how the Grid declaration will look:

<t:grid t:source="allCelebrities" rowsPerPage="5" row="celebrity"/>

As for the page class, we already have the celebrity property in it; it should have been left over from our experiments with the Loop component. It will also be used in exactly the same way as with Loop: while iterating through the objects provided by its source parameter, Grid will assign the object that is used to display the current row to the celebrity property.

The next thing to do is to tell Tapestry that, when it comes to the contents of the Last Name column, we do not want Grid to display it in the default way. Instead, we shall provide our own way of displaying the cells of the table that contain the last name. Here is how we do this:

<t:grid t:source="allCelebrities" rowsPerPage="5" row="celebrity">
    <t:parameter name="lastNameCell">
        <t:pagelink t:page="details" t:context="celebrity.id">
            ${celebrity.lastName}
        </t:pagelink>
    </t:parameter>
</t:grid>

Here, the Grid component contains a special Tapestry element, <t:parameter>, similar to the one that we used in the previous chapter inside the If component. As before, it serves to provide alternative content to display: in this case, the content which will fill in the cells of the Last Name column. How does Tapestry know this? By the name of the element, lastNameCell. The first part of this name, lastName, is the name of one of the properties of the displayed objects.
The last part, Cell, tells Tapestry that it is about the content of the table cells displaying the specified property. Finally, inside <t:parameter>, you can see an expansion displaying the last name of the current celebrity, surrounded by a PageLink component that has the ID of the current celebrity as its context.

Run the application, and you should see that we have achieved what we wanted:

Click on the last name of a celebrity, and you should see the Details page with the appropriate details on it.

All that is left now is to remove the unwanted Id column and to change the order of the remaining columns. For this, we'll use two parameters of the Grid: remove and reorder. Modify the component's definition in the page template to look like this:

<t:grid t:source="allCelebrities" rowsPerPage="5" row="celebrity"
        remove="id" reorder="lastName,firstName,occupation,dateOfBirth">
    <t:parameter name="lastNameCell">
        <t:pagelink t:page="details" t:context="celebrity.id">
            ${celebrity.lastName}
        </t:pagelink>
    </t:parameter>
</t:grid>

Please note that re-ordering doesn't delete columns. If you omit some columns while specifying their order, they will simply end up last in the table. Now, if you run the application, you should see that the table with the collection of celebrities is displayed exactly as we wanted:

Changing the Column Titles

Column titles are currently generated by Tapestry automatically. What if we want different titles? Say we want the title Birth Date instead of Date Of Birth. The easiest and most efficient way to do this is to use the message catalog, the same one that we used while working with the Select component in the previous chapter. Add the following line to the app.properties file:

dateOfBirth-label=Birth Date

Run the application, and you will see that the column title has changed appropriately. In this way, by appending -label to the name of the property displayed by a column, you can create the key for a message catalog entry, and thus change the title of any column.

Now you should be able to adjust the Grid component to most possible requirements and to display many different kinds of objects with its help. However, one scenario can still raise a problem. Add an output statement to the getAllCelebrities method in the ShowAll page class, like this:

public List<Celebrity> getAllCelebrities() {
    System.out.println("Getting all celebrities...");
    return dataSource.getAllCelebrities();
}

The purpose of this is simply to be aware of when the method is called. Run the application, log in, and as soon as the table with celebrities is shown, you will see the output, as follows:

Getting all celebrities...

The Grid component has the allCelebrities property defined as its source, so it invokes the getAllCelebrities method to obtain the content to display. Note, however, that Grid, after invoking this method, receives a list containing all 15 celebrities in the collection, but displays only the first five. Click on the pager to view the second page, and the same output will appear again. Grid requested the whole collection again, and this time displayed only the second portion of five celebrities from it. Whenever we view another page, the whole collection is requested from the data source, but only one page of data is displayed. This is not too efficient, but it works for our purpose. Imagine, however, that our collection contains as many as 10,000 celebrities, and it's stored in a remote database.
Requesting the whole collection would put a lot of strain on our resources, especially if we are going to have 2,000 pages. We need the ability to request the celebrities page-by-page: only the first five for the first page, only the second five for the second page, and so on. This ability is supported by Tapestry. All we need to do is provide an implementation of the GridDataSource interface. Here is a somewhat simplified example of such an implementation.
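For illustration only, a minimal sketch of such an implementation could look like the following. It assumes a hypothetical CelebrityDataSource service that can report the total number of celebrities and fetch a given range of them; that service and its method names are illustrative, not part of Tapestry, and the exact signature of prepare() may vary slightly between Tapestry 5 releases:

import java.util.List;
import org.apache.tapestry5.grid.GridDataSource;
import org.apache.tapestry5.grid.SortConstraint;

public class CelebrityGridDataSource implements GridDataSource
{
    private final CelebrityDataSource dataSource; // hypothetical backing service
    private List<Celebrity> currentPage;
    private int startIndex;

    public CelebrityGridDataSource(CelebrityDataSource dataSource)
    {
        this.dataSource = dataSource;
    }

    public int getAvailableRows()
    {
        // Ask for a count instead of loading the whole collection
        return dataSource.getCelebrityCount();
    }

    public void prepare(int startIndex, int endIndex, List<SortConstraint> sortConstraints)
    {
        // Fetch only the rows needed for the page being displayed
        this.startIndex = startIndex;
        currentPage = dataSource.getCelebrityRange(startIndex, endIndex);
    }

    public Object getRowValue(int index)
    {
        // The Grid asks for absolute indexes; translate into the fetched page
        return currentPage.get(index - startIndex);
    }

    public Class getRowType()
    {
        return Celebrity.class;
    }
}

Since the Grid's source parameter accepts a GridDataSource directly, a page property returning an instance of a class like this can replace the allCelebrities list, and only one page of records is fetched at a time.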
Need for Java Business Integration and Service Engines in NetBeans

Packt
23 Oct 2009
6 min read
In this article, we will discuss the following topics:

- Need for Java Business Integration (JBI)
- Enterprise Service Bus
- Normalized Message Router
- Service Engines in NetBeans

Need for Java Business Integration (JBI)

To have a good understanding of Service Engines (a specific type of JBI component), we need to first understand the reason for Java Business Integration. In the business world, not all systems talk the same language. They use different protocols and different forms of communication. Legacy systems in particular can use proprietary protocols for external communication. The advent and acceptance of XML has been greatly beneficial in allowing systems to be easily integrated, but XML itself is not the complete solution.

When some systems were first developed, they were not envisioned to be able to communicate with many other systems; they were developed with closed interfaces using closed protocols. This, of course, is fine for the system developer, but makes system integration very difficult. This closed and proprietary nature of enterprise systems makes integration between enterprise applications very difficult. To allow enterprise systems to effectively communicate with each other, system integrators would use vendor-supplied APIs and data formats, or agree on common exchange mechanisms between their systems. This is fine for small, short-term integration, but quickly becomes unproductive as the number of enterprise applications to integrate gets larger.

The following figure shows the problems with traditional integration. As we can see in the figure, each third-party system that we want to integrate with uses a different protocol. As a system integrator, we potentially have to learn new technologies and new APIs for each system we wish to integrate with. If there are only two or three systems to integrate with, this is not really too much of a problem. However, the more systems we wish to integrate with, the more proprietary code we have to learn, and integration with other systems quickly becomes a large problem.

To try and overcome these problems, the Enterprise Application Integration (EAI) server was introduced. This concept has an integration server acting as a central hub. The EAI server traditionally has proprietary links to third-party systems, so the application integrator only has to learn one API (the EAI server vendor's). With this architecture, however, there are still several drawbacks. The central hub can quickly become a bottleneck, and because of the hub-and-spoke architecture, any problems at the hub are rapidly manifested at all the clients.

Enterprise Service Bus

To help solve this problem, leading companies in the integration community (led by Sun Microsystems) proposed the Java Business Integration Specification Request (JSR 208); full details of the JSR can be found at http://jcp.org/en/jsr/detail?id=208. JSR 208 proposed a standard framework for business integration by providing a standard set of service provider interfaces (SPIs) to help alleviate the problems experienced with Enterprise Application Integration. The standard framework described in JSR 208 allows pluggable components to be added into a standard architecture, and provides a standard common mechanism for each of these components to communicate with each other based upon WSDL. The pluggable nature of the framework described by JSR 208 is depicted in the following figure.
It shows us the concept of an Enterprise Service Bus and introduces us to the Service Engine (SE) component. JSR 208 describes a service engine as a component which provides business logic and transformation services to other components, as well as consuming such services. SEs can integrate Java-based applications (and other resources), or applications with available Java APIs.

A Service Engine is a component which provides (and consumes) business logic and transformation services to other components. There are various Service Engines available, such as the BPEL service engine for orchestrating business processes, or the Java EE service engine for consuming Java EE web services.

The Normalized Message Router

As we can see from the previous figure, SEs don't communicate directly with each other or with clients; instead, they communicate via the NMR. This is one of the key concepts of JBI, in that it promotes loose coupling of services. So, what is the NMR and what is its purpose?

The NMR is responsible for taking messages from clients and routing them to the appropriate Service Engines for processing. (This is not strictly true, as there is another standard JBI component, the Binding Component, responsible for receiving client messages. Again, this further enhances the support for loose coupling within JBI, as Service Engines are decoupled from their transport infrastructure.) The NMR is responsible for passing normalized (that is, WSDL-based) messages between JBI components. Messages typically consist of a payload and a message header, which contains any other message data required for the Service Engine to understand and process the message (for example, security information). Again, we can see that this provides a loosely coupled model in which Service Engines have no prior knowledge of other Service Engines. This therefore allows the JBI architecture to be flexible, and allows different component vendors to develop standards-based components.

The Normalized Message Router is the enabling technology for passing messages between loosely coupled services such as Service Engines.

The figure below gives an overview of the message routing between a client application and two service engines, in this case the Java EE and SQL service engines. In this figure, a request is made from the client to the JBI container. This request is passed via the NMR to the Java EE Service Engine. The Java EE Service Engine then makes a request to the SQL Service Engine via the NMR. The SQL Service Engine returns a message to the Java EE Service Engine, again via the NMR. Finally, the message is routed back to the client through the NMR and the JBI framework. The important concept here is that the NMR is a message routing hub, not only between clients and service engines, but also for intra-communication between different service engines.

The entire architecture we have discussed is typically referred to as an Enterprise Service Bus. An Enterprise Service Bus (ESB) is a standards-based middleware architecture that allows pluggable components to communicate with each other via a messaging subsystem.
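To make the message flow more concrete, here is a minimal sketch of how a JBI component might send a request through the NMR using the standard javax.jbi API. The service QName and the XML payload are illustrative assumptions, not part of any particular Service Engine:

import java.io.StringReader;
import javax.jbi.component.ComponentContext;
import javax.jbi.messaging.DeliveryChannel;
import javax.jbi.messaging.InOut;
import javax.jbi.messaging.MessageExchangeFactory;
import javax.jbi.messaging.NormalizedMessage;
import javax.xml.namespace.QName;
import javax.xml.transform.stream.StreamSource;

public class NmrClientSketch
{
    public void invokeService(ComponentContext context) throws Exception
    {
        // The delivery channel is the component's handle to the NMR
        DeliveryChannel channel = context.getDeliveryChannel();

        MessageExchangeFactory factory = channel.createExchangeFactory();
        InOut exchange = factory.createInOutExchange();

        // Address the target service by QName; the NMR performs the routing
        exchange.setService(new QName("http://example.org/demo", "CustomerService"));

        NormalizedMessage in = exchange.createMessage();
        in.setContent(new StreamSource(new StringReader("<getCustomer id='1'/>")));
        exchange.setInMessage(in);

        // Send the exchange and block until the 'out' message comes back
        channel.sendSync(exchange);
        NormalizedMessage out = exchange.getOutMessage();
        // ... process out.getContent() ...
    }
}

Note how the component never deals with HTTP, SOAP, or JMS here; it only addresses a service and hands a normalized message to the NMR, which is exactly the loose coupling the specification is after.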
JBoss AS plug-in and the Eclipse Web Tools Platform

Packt
23 Oct 2009
4 min read
In this article, we recommend that you use JBoss AS (version 4.2), which is a free J2EE Application Server that can be downloaded from http://www.jboss.org/jbossas/downloads/ (complete documentation can be downloaded from http://www.jboss.org/jbossas/docs/).

JBoss AS plug-in and the Eclipse WTP

The JBoss AS plug-in can be treated as an elegant method of connecting a J2EE Application Server to the Eclipse IDE. It's important to know that the JBoss AS plug-in does this by using WTP support, a project included by default in the Eclipse IDE. WTP is a major project that extends the Eclipse platform with strong support for Web and J2EE applications. In this case, WTP will sustain important operations, like starting the server in run/debug mode, stopping the server, and delegating WTP projects to their runtimes. For now, keep in mind that Eclipse supports a set of WTP servers, and for every WTP server you may have one WTP runtime. Now, we will see how to install and configure the JBoss 4.2.2 runtime and server.

Adding a WTP runtime in Eclipse

In the case of JBoss Tools, the main purpose of Server Runtimes is to point to a server installation somewhere on your machine. Through runtimes, we can use different configurations of the same server installed in different physical locations. Now, we will create a JBoss AS Runtime (you can extrapolate the steps shown below for any supported server):

1. From the Window menu, select Preferences.
2. In the Preferences window, expand the Server node and select the Runtime Environments child-node. On the right side of the window, you can see a list of currently installed runtimes, as shown in the following screenshot, where an Apache Tomcat runtime is reported (this is just an example; the Apache Tomcat runtime is not a default one).
3. To install a new runtime, click the Add button in the top-right corner. This brings up the New Server Runtime Environment window, as you can see in the following screenshot. Because we want to add a JBoss 4.2.2 runtime, we will select the JBoss 4.2 Runtime option (for other adapters, proceed accordingly). After that, click Next to set the runtime parameters.

In the runtimes list, we have runtimes provided by WTP and runtimes provided by JBoss Tools (see the section marked in red in the previous screenshot). Because this article is about JBoss Tools, we will further discuss only the runtimes from the latter category. Here, we have five types of runtimes, with the mention that the JBoss Deploy-Only Runtime type is for developers who start/stop/debug applications outside Eclipse.

4. In this step, you will configure the JBoss runtime by indicating the runtime's name (in the Name field), the runtime's home directory (in the Home Directory field), the Java Runtime Environment associated with this runtime (in the JRE field), and the configuration type (in the Configuration field). In the following screenshot, we have done all these settings for our JBoss 4.2 Runtime.

The official documentation of JBoss AS 4.2.2 recommends using JDK version 5. If you don't have this version in the JRE list, you can add it like this: display the Preferences window by clicking the JRE button; in this window, click the Add button to display the Add JRE window; continue by selecting the Standard VM option and click the Next button; on the next page, use the Browse button to navigate to the JRE 5 home directory.
Click on the Finish button, and you should see a new entry in the Installed JREs field of the Preferences window (as shown in the following screenshot). Just check the checkbox of this new entry and click OK. Now, JRE 5 should be available in the JRE list of the New Server Runtime Environment window. After this, just click on the Finish button and the new runtime will be added, as shown in the following screenshot.

From this window, you can also edit or remove a runtime by using the Edit and Remove buttons. These are automatically activated when you select a runtime from the list. As a final step, it is recommended that you restart the Eclipse IDE.
JBoss AS Perspective

Packt
23 Oct 2009
4 min read
As you know, Eclipse offers an ingenious system of perspectives that helps us switch between different technologies and keep the main screen as clean as possible. Every perspective is made up of a set of components that can be added or removed by the user. These components are known as views. The JBoss AS Perspective has a set of specific views, as follows:

- JBoss Server View
- Project Archives View
- Console View
- Properties View

For launching the JBoss AS Perspective (or any other perspective), follow these two simple steps:

1. From the Window menu, select Open Perspective | Other.
2. In the Open Perspective window, select the JBoss AS option and click on the OK button (as shown in the following screenshot).

If everything works fine, you should see the JBoss AS perspective as shown in the following screenshot. If any of these views is not available by default in your JBoss AS perspective, you can add it manually by selecting the Show View | Other option from the Window menu. In the Show View window (shown in the following screenshot), just select the desired view and click on the OK button.

JBoss Server View

This view contains a simple toolbar known as the JBoss Server View Toolbar and two panels that separate the list of servers (top part) from the list of additional information about the selected server (bottom part). Note that the quantity of additional information is directly related to the server type.

Top part of JBoss Server View

In the top part of the JBoss Server View, we can see a list of our servers, their states, and whether they are running or stopped.

Starting the JBoss AS

The simplest ways to start our JBoss AS server are:

- Select the JBoss 4.2 Server from the server list and click the Start the server button on the JBoss Server View Toolbar (as shown in the following screenshot).
- Select the JBoss 4.2 Server from the server list and right-click on it. From the context menu, select the Start option (as shown in the following screenshot).

In both cases, a detailed evolution of the startup process will be displayed in the Console View, as you can see in the following screenshot.

Stopping the JBoss AS

The simplest ways to stop the JBoss AS server are:

- Select the JBoss 4.2 Server from the server list and click the Stop the server button on the JBoss Server View Toolbar.
- Select the JBoss 4.2 Server from the server list and right-click on it. From the context menu, select the Stop option.

In both cases, a detailed evolution of the stopping process will be displayed in the Console View, as you can see in the following screenshot.

Additional operations on JBoss AS

Beside the Start and Stop operations, the JBoss Server View allows us to:

- Add a new server (the New Server option from the contextual menu)
- Remove an existing server (the Delete option from the contextual menu)
- Start the server in debug mode (first button on the JBoss Server View Toolbar)
- Start the server in profiling mode (third button on the JBoss Server View Toolbar)
- Publish to the server, or sync the publish information between the server and the workspace (the Publish option from the contextual menu, or the last button on the JBoss Server View Toolbar)
- Discard all publish state and republish from scratch (the Clean option from the contextual menu)
- Twiddle the server (the Twiddle Server option from the contextual menu)
- Edit the launch configuration (the Edit Launch Configuration option from the contextual menu, as shown in the following screenshot)
- Add/remove projects (the Add and Remove Projects option from the contextual menu)
- Double-click the server name and modify parts of that server in the Server Editor—if you have a username and a password to start the server, then you can specify those credentials here (as shown in the following screenshot)

Twiddle is a JMX library that comes with JBoss, and it is used to access (any) variables that are exposed via the JBoss JMX interfaces.

Server publish status

A server may have one of the following statuses:

- Synchronized: Allows you to see if changes are in sync (as shown in the following screenshot)
- Publishing: Allows you to see if changes are being updated
- Republish: Allows you to see if changes are waiting
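As a rough illustration of what Twiddle does under the covers, the following sketch reads a JMX attribute from a running JBoss 4.x instance through the RMIAdaptor. The JNDI name, naming factory, and MBean name shown are the usual JBoss 4.x defaults, but treat them as assumptions for your particular installation (the JBoss client libraries, such as jbossall-client.jar, must be on the classpath):

import java.util.Properties;
import javax.management.ObjectName;
import javax.naming.Context;
import javax.naming.InitialContext;
import org.jboss.jmx.adaptor.rmi.RMIAdaptor;

public class JmxAttributeReader
{
    public static void main(String[] args) throws Exception
    {
        Properties env = new Properties();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "org.jnp.interfaces.NamingContextFactory");
        env.put(Context.PROVIDER_URL, "jnp://localhost:1099");

        InitialContext ctx = new InitialContext(env);
        // The RMIAdaptor exposes the JBoss MBean server over RMI
        RMIAdaptor server = (RMIAdaptor) ctx.lookup("jmx/rmi/RMIAdaptor");

        // Read an attribute exposed by a standard JBoss MBean
        ObjectName serverInfo = new ObjectName("jboss.system:type=ServerInfo");
        Object freeMemory = server.getAttribute(serverInfo, "FreeMemory");
        System.out.println("FreeMemory: " + freeMemory);

        ctx.close();
    }
}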
10 Minute Guide to the Enterprise Service Bus and the NetBeans SOA Pack

Packt
23 Oct 2009
4 min read
Introduction

When you are integrating different systems together, it can be very easy to use your vendor's APIs and program directly against them. Using that approach, developers can easily integrate applications; supporting these applications, however, can become problematic. If we have a few systems integrated together in this approach, everything is fine, but the more systems we integrate together, the more integration code we have, and it rapidly becomes unfeasible to support this point-to-point integration.

To overcome this problem, the integration hub was developed. In this scenario, developers would write against the API of the integration vendor and only had to learn one API. This is a much better approach than the point-to-point integration method; however, it still has its limitations. There is still a proprietary API to learn (albeit only one this time), and if the integration hub goes down for any reason, the entire enterprise can become unavailable.

The Enterprise Service Bus (ESB) overcomes these problems by providing a scalable, standards-based integration architecture. The NetBeans SOA pack includes a copy of OpenESB, which follows the architecture promoted by the Java Business Integration specification, JSR 208.

Workings of an ESB

At the heart of the ESB is the Normalized Message Router (NMR), a pluggable framework that allows Java Business Integration (JBI) components to be plugged into it as required. The NMR is responsible for passing messages between all of the different JBI components that are plugged into it. The two main JBI components that are plugged into the NMR are Binding Components and Service Engines. Binding Components are responsible for handling all protocol-specific transport such as HTTP, SOAP, JMS, file system access, etc. Service Engines, on the other hand, execute business logic as BPEL processes, SQL statements, invocations of external Java EE web services, etc. There is a clear separation between Binding Components and Service Engines, with protocol-specific transactions being handled by the former and business logic being performed by the latter.

This architecture promotes loose coupling in that service engines do not communicate directly with each other. All communication between different JBI components is performed through Binding Components by use of normalized messages, as shown in the sequence chart below. In the case of OpenESB, all of these normalized messages are based upon WSDL. If, for example, a BPEL process needs to invoke a web service or send an email, it does not need to know about SOAP or SMTP; that is the responsibility of the Binding Components. For one Service Engine to invoke another Service Engine, all that is required is for a WSDL-based message to be constructed, which can then be routed via the NMR and Binding Components to the destination Service Engine. OpenESB provides many different Binding Components and Service Engines, enabling integration with many varied systems.

So, we can see that OpenESB provides us with a standards-based architecture that promotes loose coupling between components. NetBeans 6 provides tight integration with OpenESB, allowing developers to take full advantage of its facilities.

Integrating the NetBeans 6 IDE with OpenESB

Integration with NetBeans comes in two parts. First, NetBeans allows the different JBI components to be managed from within the IDE.
Binding Components and Service Engines can be installed into OpenESB from within NetBeans, and from thereon the full lifecycle of the components (start, stop, restart, uninstall) can be controlled directly from within the IDE.

Secondly, and more interestingly, the NetBeans IDE provides full editing support for developing Composite Applications: applications that bring together business logic and data from different sources. One of the main features of Composite Applications is probably the BPEL editor. This allows BPEL processes to be built up graphically, allowing interaction with different data sources via different partner links, which may be web services, different BPEL processes, or SQL statements.

Once a BPEL process or composite application has been developed, the NetBeans SOA pack provides tools to allow different bindings to be added onto the application, depending on the Binding Components installed into OpenESB. So, for example, a file binding could be added to a project that could poll the file system periodically looking for input messages to start a BPEL process, the output of which could be saved into a different file or sent directly to an FTP site.

In addition to support for developing Composite Applications, the NetBeans SOA pack provides support for some features many Java developers would find useful, namely XML and WSDL editing and validation. XML and WSDL files can be edited within the IDE as either raw text or via graphical editors. If changes are made in the raw text, the graphical editors update accordingly, and vice versa.
Enterprise JavaBeans

Packt
22 Oct 2009
10 min read
Readers familiar with previous versions of J2EE will notice that Entity Beans were not mentioned in the above paragraph. In Java EE 5, Entity Beans have been deprecated in favor of the Java Persistence API (JPA). Entity Beans are still supported for backwards compatibility; however, the preferred way of doing Object Relational Mapping with Java EE 5 is through JPA. Refer to Chapter 4 in the book Java EE 5 Development using GlassFish Application Server for a detailed discussion on JPA.

Session Beans

As we previously mentioned, session beans typically encapsulate business logic. In Java EE 5, only two artifacts need to be created in order to create a session bean: the bean itself, and a business interface. These artifacts need to be decorated with the proper annotations to let the EJB container know they are session beans. Previous versions of J2EE required application developers to create several artifacts in order to create a session bean. These artifacts included the bean itself, a local or remote interface (or both), a local home or a remote home interface (or both), and a deployment descriptor. As we shall see in this article, EJB development has been greatly simplified in Java EE 5.

Simple Session Bean

The following example illustrates a very simple session bean:

package net.ensode.glassfishbook;

import javax.ejb.Stateless;

@Stateless
public class SimpleSessionBean implements SimpleSession
{
    private String message = "If you don't see this, it didn't work!";

    public String getMessage()
    {
        return message;
    }
}

The @Stateless annotation lets the EJB container know that this class is a stateless session bean. There are two types of session beans: stateless and stateful. Before we explain the difference between these two types of session beans, we need to clarify how an instance of an EJB is provided to an EJB client application.

When EJBs (both session beans and message-driven beans) are deployed, the EJB container creates a series of instances of each EJB. This is what is typically referred to as the EJB pool. When an EJB client application obtains an instance of an EJB, one of the instances in the pool is provided to this client application.

The difference between stateful and stateless session beans is that stateful session beans maintain conversational state with the client, whereas stateless session beans do not. In simple terms, what this means is that when an EJB client application obtains an instance of a stateful session bean, the same instance of the EJB is provided for each method invocation; therefore, it is safe to modify any instance variables on a stateful session bean, as they will retain their value for the next method call. The EJB container may provide any instance of an EJB in the pool when an EJB client application requests an instance of a stateless session bean. As we are not guaranteed the same instance for every method call, values set to any instance variables in a stateless session bean may be "lost" (they are not really lost; the modification is in another instance of the EJB in the pool).

Other than being decorated with the @Stateless annotation, there is nothing special about this class. Notice that it implements an interface called SimpleSession. This interface is the bean's business interface. The SimpleSession interface is shown next:

package net.ensode.glassfishbook;

import javax.ejb.Remote;

@Remote
public interface SimpleSession
{
    public String getMessage();
}

The only peculiar thing about this interface is that it is decorated with the @Remote annotation.
This annotation indicates that this is a remote business interface. What this means is that the interface may be in a different JVM than the client application invoking it. Remote business interfaces may even be invoked across the network. Business interfaces may also be decorated with the @Local annotation. This annotation indicates that the business interface is a local business interface. Local business interface implementations must be in the same JVM as the client application invoking their methods.

As remote business interfaces can be invoked either from the same JVM or from a different JVM than the client application, at first glance we might be tempted to make all of our business interfaces remote. Before doing so, we must be aware of the fact that the flexibility provided by remote business interfaces comes with a performance penalty, because method invocations are made under the assumption that they will be made across the network. As a matter of fact, most typical Java EE applications consist of web applications acting as client applications for EJBs; in this case, the client application and the EJB are running on the same JVM, and therefore local interfaces are used a lot more frequently than remote business interfaces.

Once we have compiled the session bean and its corresponding business interface, we need to place them in a JAR file and deploy them. Just as with WAR files, the easiest way to deploy an EJB JAR file is to copy it to [glassfish installation directory]/glassfish/domains/domain1/autodeploy.

Now that we have seen the session bean and its corresponding business interface, let's take a look at a sample client application:

package net.ensode.glassfishbook;

import javax.ejb.EJB;

public class SessionBeanClient
{
    @EJB
    private static SimpleSession simpleSession;

    private void invokeSessionBeanMethods()
    {
        System.out.println(simpleSession.getMessage());
        System.out.println("\nSimpleSession is of type: "
                + simpleSession.getClass().getName());
    }

    public static void main(String[] args)
    {
        new SessionBeanClient().invokeSessionBeanMethods();
    }
}

The above code simply declares an instance variable of type net.ensode.glassfishbook.SimpleSession, which is the business interface for our session bean. The instance variable is decorated with the @EJB annotation; this annotation lets the EJB container know that this variable is a business interface for a session bean. The EJB container then injects an implementation of the business interface for the client code to use.

As our client is a stand-alone application (as opposed to a Java EE artifact such as a WAR file), in order for it to be able to access code deployed in the server, it must be placed in a JAR file and executed through the appclient utility. This utility can be found at [glassfish installation directory]/glassfish/bin/. Assuming this path is in the PATH environment variable, and assuming we placed our client code in a JAR file called simplesessionbeanclient.jar, we would execute the above client code by typing the following command in the command line:

appclient -client simplesessionbeanclient.jar

Executing the above command results in the following console output:

If you don't see this, it didn't work!
SimpleSession is of type: net.ensode.glassfishbook._SimpleSession_Wrapper

which is the output of the SessionBeanClient class. The first line of output is simply the return value of the getMessage() method we implemented in the session bean.
The second line of output displays the fully qualified class name of the class implementing the business interface. Notice that the class name is not the fully qualified name of the session bean we wrote; instead, what is actually provided is an implementation of the business interface created behind the scenes by the EJB container.

A More Realistic Example

In the previous section, we saw a very simple, "Hello world" type of example. In this section, we will show a more realistic example. Session beans are frequently used as Data Access Objects (DAOs). Sometimes they are used as a wrapper for JDBC calls; other times they are used to wrap calls to obtain or modify JPA entities. In this section, we will take the latter approach.

The following example illustrates how to implement the DAO design pattern in a session bean. Before looking at the bean implementation, let's look at the business interface corresponding to it:

package net.ensode.glassfishbook;

import javax.ejb.Remote;

@Remote
public interface CustomerDao
{
    public void saveCustomer(Customer customer);
    public Customer getCustomer(Long customerId);
    public void deleteCustomer(Customer customer);
}

As we can see, the above is a remote interface declaring three methods: the saveCustomer() method saves customer data to the database, the getCustomer() method obtains data for a customer from the database, and the deleteCustomer() method deletes customer data from the database.

Let's now take a look at the session bean implementing the above business interface. As we are about to see, there are some differences between the way JPA code is implemented in a session bean versus in a plain old Java object.

package net.ensode.glassfishbook;

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.sql.DataSource;

@Stateless
public class CustomerDaoBean implements CustomerDao
{
    @PersistenceContext
    private EntityManager entityManager;

    @Resource(name = "jdbc/__CustomerDBPool")
    private DataSource dataSource;

    public void saveCustomer(Customer customer)
    {
        if (customer.getCustomerId() == null)
        {
            saveNewCustomer(customer);
        }
        else
        {
            updateCustomer(customer);
        }
    }

    private void saveNewCustomer(Customer customer)
    {
        customer.setCustomerId(getNewCustomerId());
        entityManager.persist(customer);
    }

    private void updateCustomer(Customer customer)
    {
        entityManager.merge(customer);
    }

    public Customer getCustomer(Long customerId)
    {
        Customer customer;
        customer = entityManager.find(Customer.class, customerId);
        return customer;
    }

    public void deleteCustomer(Customer customer)
    {
        entityManager.remove(customer);
    }

    private Long getNewCustomerId()
    {
        Connection connection;
        Long newCustomerId = null;

        try
        {
            connection = dataSource.getConnection();
            PreparedStatement preparedStatement = connection
                    .prepareStatement("select max(customer_id)+1 as new_customer_id "
                            + "from customers");

            ResultSet resultSet = preparedStatement.executeQuery();

            if (resultSet != null && resultSet.next())
            {
                newCustomerId = resultSet.getLong("new_customer_id");
            }

            connection.close();
        }
        catch (SQLException e)
        {
            e.printStackTrace();
        }

        return newCustomerId;
    }
}

The first difference we should notice is that an instance of javax.persistence.EntityManager is directly injected into the session bean.
In previous JPA examples, we had to inject an instance of javax.persistence.EntityManagerFactory, then use the injected EntityManagerFactory instance to obtain an instance of EntityManager. The reason we had to do this was that our previous examples were not thread safe, meaning that potentially the same code could be executed concurrently by more than one user. As EntityManager is not designed to be used concurrently by more than one thread, we used an EntityManagerFactory instance to provide each thread with its own instance of EntityManager. Since the EJB container assigns a session bean to a single client at a time, session beans are inherently thread safe; therefore, we can inject an instance of EntityManager directly into a session bean.

The next difference between this session bean and previous JPA examples is that in previous examples, JPA calls were wrapped between calls to UserTransaction.begin() and UserTransaction.commit(). The reason we had to do this is that JPA calls are required to be wrapped in a transaction; if they are not in a transaction, most JPA calls will throw a TransactionRequiredException. The reason we don't have to explicitly wrap JPA calls in a transaction as in previous examples is that session bean methods are implicitly transactional; there is nothing we need to do to make them that way. This default behavior is what is known as Container-Managed Transactions. Container-Managed Transactions are discussed in detail later in this article.

When a JPA entity is retrieved in one transaction and updated in a different transaction, the EntityManager.merge() method needs to be invoked to update the data in the database. Invoking EntityManager.persist() in this case will result in a "Cannot persist detached object" exception.
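To see why this matters in practice, consider a minimal, hypothetical client flow against the CustomerDaoBean shown above. It assumes the Customer entity (whose fields are not shown in this excerpt) has a setFirstName() property; the key point is that the entity returned by getCustomer() is detached once the loading transaction ends, so the later update must go through merge(), which saveCustomer() does internally for entities that already have an ID:

import javax.ejb.EJB;

public class CustomerDaoClient
{
    @EJB
    private static CustomerDao customerDao;

    public static void main(String[] args)
    {
        // Transaction 1: the returned entity becomes detached when the call returns
        Customer customer = customerDao.getCustomer(1L);

        // Modify the detached instance outside any transaction
        // (setFirstName() is an assumed property of the Customer entity)
        customer.setFirstName("Updated");

        // Transaction 2: saveCustomer() sees a non-null ID and calls
        // entityManager.merge(); calling persist() on this detached
        // instance instead would raise the exception described above
        customerDao.saveCustomer(customer);
    }
}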
Building JSF/EJB3 Applications

Packt
22 Oct 2009
15 min read
Building JSF/EJB3 Applications

This practical article shows you how to create a simple data-driven application using JSF and EJB3 technologies. The article also shows you how to effectively use the NetBeans IDE when building enterprise applications.

What We Are Going to Build

The sample application we are building throughout the article is very straightforward. It offers just a few pages. When you click the Ask us a question link on the welcomeJSF.jsp page, you will be taken to a page on which you can submit a question. Once you're done with your question, you click the Submit button. As a result, the application persists your question along with your email in the database, and a confirmation page is displayed.

The web tier of the application is built using the JavaServer Faces technology, while EJB is used to implement the database-related code.

Software You Need to Follow the Article Exercise

To build the sample discussed here, you will need the following software components installed on your computer:

- Java Standard Development Kit (JDK) 5.0 or higher
- Sun Java System Application Server Platform Edition 9
- MySQL
- NetBeans IDE 5.5

Setting Up the Database

The first step in building our application is to set up the database to interact with. In fact, you could choose any database you like as the application's backend database. For the purpose of this article, though, we will discuss how to use MySQL. To keep things simple, let's create a questions table that contains just three columns, outlined in the following table:

Column      Type                                  Description
trackno     INTEGER AUTO_INCREMENT PRIMARY KEY    Stores a track number generated automatically when a row is inserted
user_email  VARCHAR(50) NOT NULL
question    VARCHAR(2000) NOT NULL

Of course, a real-world questions table would contain a few more columns, for example, a dateOfSubmission column containing the date and time of submitting the question.

To create the questions table, you first have to create a database and grant the required privileges to the user with which you are going to connect to that database. For example, you might create database my_db and user usr identified by password pswd. To do this, you should issue the following SQL commands from the MySQL Command Line Client:

CREATE DATABASE my_db;
GRANT CREATE, DROP, SELECT, INSERT, UPDATE, DELETE ON my_db.*
    TO 'usr'@'localhost' IDENTIFIED BY 'pswd';

In order to use the newly created database for subsequent statements, you should issue the following statement:

USE my_db;

Finally, create the questions table in the database as follows:

CREATE TABLE questions(
    trackno INTEGER AUTO_INCREMENT PRIMARY KEY,
    user_email VARCHAR(50) NOT NULL,
    question VARCHAR(2000) NOT NULL
) ENGINE = InnoDB;

Once you're done, you have the database with the questions table required to store incoming users' questions.

Setting Up a Data Source for Your MySQL Database

Since the application we are going to build will interact with MySQL, you need to have an appropriate MySQL driver installed on your application server. For example, you might want to install MySQL Connector/J, which is the official JDBC driver for MySQL. You can pick up this software from the downloads page of the MySQL AB website at http://mysql.org/downloads/.
Install the driver on your GlassFish application server as follows:

1. Unpack the downloaded archive containing the driver to any directory on your machine.
2. Add mysql-connector-java-xxx-bin.jar to the CLASSPATH environment variable.
3. Make sure that your GlassFish application server is up and running.
4. Launch the Application Server Admin Console by pointing your browser at http://localhost:4848/.
5. Within the Common Tasks frame, find and double-click the Resources | JDBC | Connection Pools node. On the page that opens, click the New… button.
6. On the first step of the New Connection Pool wizard, set the fields as shown in the following table:

   Setting          Value
   Name             jdbc/mysqlPool
   Resource type    javax.sql.DataSource
   Database Vendor  mysql

7. Click Next to move on to the second page of the wizard.
8. On the second page of New Connection Pool, set the properties to reflect your database settings, as shown in the following table:

   Name          Value
   databaseName  my_db
   serverName    localhost
   port          3306
   user          usr
   password      pswd

9. Once you are done with setting the properties, click Finish. The newly created jdbc/mysqlPool connection pool should appear on the list. To check it, click its link to open it in a window, and then click the Ping button. If everything is okay, you should see a message telling you: Ping succeeded.

Creating the Project

The next step is to create an application project with NetBeans. To do this, follow the steps below:

1. Choose File | New Project and then choose the Enterprise | Enterprise Application template for the project. Click Next.
2. On the Name and Location page of the wizard, specify the name for the project: JSF_EJB_App. Also make sure that Create EJB Module and Create Web Application Module are checked. Then click Finish.

As a result, NetBeans generates a new enterprise application in a standard project, actually containing two projects: an EJB module project and a Web application project. In this particular example, you will use the first project for EJBs and the second one for JSF pages.

Creating Entity Beans and the Persistence Unit

You create entity beans and the persistence unit in the EJB module project—in this example, this is the JSF_EJB_App-ejb project. In fact, the sample discussed here will contain only one entity bean: Question. You might automatically generate it and then edit it as needed. To generate it with NetBeans, follow the steps below:

1. Make sure that your Sun Java System Application Server is up and running.
2. In the Project window, right-click the JSF_EJB_App-ejb project, and then choose New | Entity Classes From Database. As a result, you'll be prompted to connect to your Sun Java System Application Server. Do so by entering the appropriate credentials.
3. In the New Entity Classes from Database window, select jdbc/mysqlPool from the Data Source combobox. If you recall from the Setting Up a Data Source for Your MySQL Database section discussed earlier in this article, jdbc/mysqlPool is a JDBC connection pool created on your application server.
4. In the Connect dialog that appears, you'll be prompted to connect to your MySQL database. Enter password pswd, set the Remember password during this session checkbox, and then click OK.
5. In the Available Tables listbox, choose questions, and click the Add button to move it to the Selected Tables listbox. After that, click Next.
6. On the next page of the New Entity Classes from Database wizard, fill in the Package field. For example, you might choose the following name: myappejb.entities. Also change the class name from Questions to Question in the Class Names box.
7. Next, click the Create Persistence Unit button. In the Create Persistence Unit window, just click the Create button, leaving the default values of the fields.
8. In the New Entity Classes from Database dialog, click Finish.

As a result, NetBeans will generate the Question entity class, which you should edit so that the resultant class looks like the following:

package myappejb.entities;

import java.io.Serializable;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table(name = "questions")
public class Question implements Serializable {

    @Id
    @Column(name = "trackno")
    private Integer trackno;

    @Column(name = "user_email", nullable = false)
    private String userEmail;

    @Column(name = "question", nullable = false)
    private String question;

    public Question() {
    }

    public Integer getTrackno() {
        return this.trackno;
    }

    public void setTrackno(Integer trackno) {
        this.trackno = trackno;
    }

    public String getUserEmail() {
        return this.userEmail;
    }

    public void setUserEmail(String userEmail) {
        this.userEmail = userEmail;
    }

    public String getQuestion() {
        return this.question;
    }

    public void setQuestion(String question) {
        this.question = question;
    }
}

Once you're done, make sure to save all the changes made by choosing File | Save All. Having the above code in hand, you might of course do without first generating the Question entity from the database, and simply create an empty Java file in the myappejb.entities package and insert the above code there. Then you could separately create the persistence unit. However, the idea behind building the Question entity with the wizard here is to show how you can quickly get a required piece of code to then edit as needed, rather than creating it from scratch.

Creating Session Beans

To finish with the JSF_EJB_App-ejb project, let's proceed to creating the session bean that will be used by the web tier. In particular, you need to create the QuestionSessionBean session bean that will be responsible for persisting the data a user enters on the askquestion page. To generate the bean's frame with a wizard, follow the steps below:

1. In the Project window, right-click the JSF_EJB_App-ejb project, and then choose New | Session Bean.
2. In the New Session Bean window, enter the EJB name: QuestionSessionBean. Then specify the package: myappejb.ejb. Make sure that the Session Type is set to Stateless and Create Interface is set to Remote. Click Finish.

As a result, NetBeans should generate two Java files: QuestionSessionBean.java and QuestionSessionRemote.java.
You should modify QuestionSessionBean.java so that it contains the following code:

package myappejb.ejb;

import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.ejb.TransactionManagement;
import javax.ejb.TransactionManagementType;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.PersistenceUnit;
import javax.transaction.UserTransaction;
import myappejb.entities.Question;

@Stateless
@TransactionManagement(TransactionManagementType.BEAN)
public class QuestionSessionBean implements myappejb.ejb.QuestionSessionRemote {

    /** Creates a new instance of QuestionSessionBean */
    public QuestionSessionBean() {
    }

    @Resource
    private UserTransaction utx;

    @PersistenceUnit(unitName = "JSF_EJB_App-ejbPU")
    private EntityManagerFactory emf;

    private EntityManager getEntityManager() {
        return emf.createEntityManager();
    }

    public void save(Question question) throws Exception {
        EntityManager em = getEntityManager();
        try {
            utx.begin();
            em.joinTransaction();
            em.persist(question);
            utx.commit();
        } catch (Exception ex) {
            try {
                utx.rollback();
                throw new Exception(ex.getLocalizedMessage());
            } catch (Exception e) {
                throw new Exception(e.getLocalizedMessage());
            }
        } finally {
            em.close();
        }
    }
}

Next, modify QuestionSessionRemote.java so that it looks like this:

package myappejb.ejb;

import javax.ejb.Remote;
import myappejb.entities.Question;

@Remote
public interface QuestionSessionRemote {
    void save(Question question) throws Exception;
}

Choose File | Save All to save the changes made. That's it; you have just finished with your EJB module project.

Adding the JSF Framework to the Project

Now that you have the entity and session beans created, let's switch to the JSF_EJB_App-war project, where you're building the web tier for the application. Before you can proceed to building JSF pages, you need to add the JavaServer Faces framework to the JSF_EJB_App-war project. To do this, follow the steps below:

1. In the Project window, right-click the JSF_EJB_App-war project, and then choose Properties.
2. In the Project Properties window, select Frameworks from Categories, and click the Add button. As a result, the Add a Framework dialog should appear.
3. In the Add a Framework dialog, choose JavaServer Faces and click OK.
4. Then click OK in the Project Properties dialog.

As a result, NetBeans adds the JavaServer Faces framework to the JSF_EJB_App-war project. Now, if you expand the Configuration Files folder under the JSF_EJB_App-war project node in the Project window, you should see, among other configuration files, faces-config.xml. Also notice the appearance of the welcomeJSF.jsp page in the Web Pages folder.

Creating JSF Managed Beans

The next step is to create managed beans whose methods will be called from within the JSF pages. In this particular example, you need to create only one such bean: let's call it QuestionController. This can be achieved by following the steps below:

1. In the Project window, right-click the JSF_EJB_App-war project, and then choose New | Empty Java File.
2. In the New Empty Java File window, enter QuestionController as the class name and enter myappjsf.jsf in the Package field.
3. Then click Finish.
4. In the generated empty Java file, insert the following code:

package myappjsf.jsf;

import javax.ejb.EJB;
import javax.faces.application.FacesMessage;
import javax.faces.context.FacesContext;
import myappejb.entities.Question;
import myappejb.ejb.QuestionSessionBean;

public class QuestionController {

    @EJB
    private QuestionSessionBean sbean;

    private Question question;

    public QuestionController() {
    }

    public Question getQuestion() {
        return question;
    }

    public void setQuestion(Question question) {
        this.question = question;
    }

    public String createSetup() {
        this.question = new Question();
        this.question.setTrackno(null);
        return "question_create";
    }

    public String create() {
        try {
            sbean.save(question);
            addSuccessMessage("Your question was successfully submitted.");
        } catch (Exception ex) {
            addErrorMessage(ex.getLocalizedMessage());
        }
        return "created";
    }

    public static void addErrorMessage(String msg) {
        FacesMessage facesMsg =
                new FacesMessage(FacesMessage.SEVERITY_ERROR, msg, msg);
        FacesContext fc = FacesContext.getCurrentInstance();
        fc.addMessage(null, facesMsg);
    }

    public static void addSuccessMessage(String msg) {
        FacesMessage facesMsg =
                new FacesMessage(FacesMessage.SEVERITY_INFO, msg, msg);
        FacesContext fc = FacesContext.getCurrentInstance();
        fc.addMessage("successInfo", facesMsg);
    }
}

Next, you need to add information about the newly created JSF managed bean to the faces-config.xml configuration file that was automatically generated when adding the JSF framework to the project. Find this file in the JSF_EJB_App-war | Web Pages | WEB-INF folder in the Project window, and then insert the following tag between the <faces-config> and </faces-config> tags:

<managed-bean>
    <managed-bean-name>questionJSFBean</managed-bean-name>
    <managed-bean-class>myappjsf.jsf.QuestionController</managed-bean-class>
    <managed-bean-scope>session</managed-bean-scope>
</managed-bean>

Finally, make sure to choose File | Save All to save the changes made in faces-config.xml as well as in QuestionController.java.

Creating JSF Pages

To keep things simple, you create just one more JSF page: askquestion.jsp, where a user can submit a question. First, though, let's modify the welcomeJSF.jsp page so that you can use it to move on to askquestion.jsp, and then return to it once a question has been submitted. To achieve this, modify welcomeJSF.jsp as follows:

<%@page contentType="text/html"%>
<%@page pageEncoding="UTF-8"%>
<%@taglib prefix="f" uri="http://java.sun.com/jsf/core"%>
<%@taglib prefix="h" uri="http://java.sun.com/jsf/html"%>
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
    "http://www.w3.org/TR/html4/loose.dtd">
<html>
    <head>
        <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
        <title>JSP Page</title>
    </head>
    <body>
        <f:view>
            <h:messages errorStyle="color: red" infoStyle="color: green" layout="table"/>
            <h:form>
                <h1><h:outputText value="Ask us a question" /></h1>
                <h:commandLink action="#{questionJSFBean.createSetup}" value="New question"/>
                <br>
            </h:form>
        </f:view>
    </body>
</html>

Now you can move on and create askquestion.jsp.
To do this, follow the steps below:

1. In the Project window, right-click the JSF_EJB_App-war project, and then choose New | JSP.
2. In the New JSP File window, enter askquestion as the name for the page, and click Finish.
3. Modify the newly created askquestion.jsp so that it finally looks like this:

<%@page contentType="text/html"%>
<%@page pageEncoding="UTF-8"%>
<%@taglib uri="http://java.sun.com/jsf/core" prefix="f" %>
<%@taglib uri="http://java.sun.com/jsf/html" prefix="h" %>
<html>
    <head>
        <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
        <title>New Question</title>
    </head>
    <body>
        <f:view>
            <h:messages errorStyle="color: red" infoStyle="color: green" layout="table"/>
            <h1>Your question, please!</h1>
            <h:form>
                <h:panelGrid columns="2">
                    <h:outputText value="Your Email:"/>
                    <h:inputText id="userEmail"
                        value="#{questionJSFBean.question.userEmail}"
                        title="Your Email" />
                    <h:outputText value="Your Question:"/>
                    <h:inputTextarea id="question"
                        value="#{questionJSFBean.question.question}"
                        title="Your Question" rows="5" cols="35" />
                </h:panelGrid>
                <h:commandLink action="#{questionJSFBean.create}" value="Create"/>
                <br>
                <a href="/JSF_EJB_App-war/index.jsp">Back to index</a>
            </h:form>
        </f:view>
    </body>
</html>

The next step is to set up page navigation. Turning back to the faces-config.xml configuration file, insert the following code there:

<navigation-rule>
    <navigation-case>
        <from-outcome>question_create</from-outcome>
        <to-view-id>/askquestion.jsp</to-view-id>
    </navigation-case>
</navigation-rule>
<navigation-rule>
    <navigation-case>
        <from-outcome>created</from-outcome>
        <to-view-id>/welcomeJSF.jsp</to-view-id>
    </navigation-case>
</navigation-rule>

Make sure that the above tags are within the <faces-config> and </faces-config> root tags.

Check It

You are now ready to check the application you just created. To do this, right-click the JSF_EJB_App-ejb project in the Project window and choose Deploy Project. After the JSF_EJB_App-ejb project is successfully deployed, right-click the JSF_EJB_App-war project and choose Run Project. As a result, the newly created application will run in a browser. As mentioned earlier, the application contains very few pages, actually three. For testing purposes, you can submit a question, and then check the questions database table to make sure that everything went as planned.

Summary

Both JSF and EJB 3 are popular technologies when it comes to building enterprise applications. This simple example illustrates how you can use these technologies together in a complementary way.
Consuming the Adapter from outside BizTalk Server

Packt
22 Oct 2009
3 min read
In addition to infrastructure-related updates such as the aforementioned platform modernization, Windows Server 2008 Hyper-V virtualization support, and additional options for failover clustering, BizTalk Server also includes new core functionality. You will find better EDI and AS2 capabilities for B2B situations and a new platform for mobile development of RFID solutions.

One of the benefits of the new WCF SQL Server Adapter that I mentioned earlier was the capability to use this adapter outside of a BizTalk Server solution. Let's take a brief look at three options for using this adapter by itself, without BizTalk as a client or service.

Called directly via WCF service reference

If your service resides on a machine where the WCF SQL Server Adapter (and thus the sqlBinding) is installed, then you may add a reference directly to the adapter endpoint. I have a command-line application which serves as my service client. If we right-click this application, and have the WCF LOB Adapter SDK installed, then Add Adapter Service Reference appears as an option. Choosing this option opens our now-beloved wizard for browsing adapter metadata. As before, we add the necessary connection string details, browse the BatchMaster table, and opt for the Select operation. Unlike the version of this wizard that opens for BizTalk Server projects, notice the Advanced options button at the bottom. This button opens a property window that lets us select a variety of options, such as asynchronous messaging support and suppression of an accompanying configuration file.

After the wizard is closed, we end up with a new endpoint and binding in our existing configuration file, and a .NET class containing the data and service contracts necessary to consume the service. We can now call this service as if we were calling any typical WCF service. Because the auto-generated namespace for the data type definition is a bit long, I first added an alias to that namespace. Next, I have a routine which builds up the query message, executes the service, and prints a subset of the response.

using DirectReference = schemas.microsoft.com.Sql._2008._05.Types.Tables.dbo;
...

private static void CallReferencedSqlAdapterService()
{
    Console.WriteLine("Calling referenced adapter service");
    TableOp_dbo_BatchMasterClient client =
        new TableOp_dbo_BatchMasterClient("SqlAdapterBinding_TableOp_dbo_BatchMaster");

    try
    {
        string columnString = "*";
        string queryString = "WHERE BatchID = 1";
        DirectReference.BatchMaster[] batchResult =
            client.Select(columnString, queryString);

        Console.WriteLine("Batch results ...");
        Console.WriteLine("Batch ID: " + batchResult[0].BatchID.ToString());
        Console.WriteLine("Product: " + batchResult[0].ProductName);
        Console.WriteLine("Manufacturing Stage: " + batchResult[0].ManufStage);

        client.Close();
        Console.ReadLine();
    }
    catch (System.ServiceModel.CommunicationException)
    {
        client.Abort();
    }
    catch (System.TimeoutException)
    {
        client.Abort();
    }
    catch (System.Exception)
    {
        client.Abort();
        throw;
    }
}

Once this quick block of code is executed, I can confirm that my database is accessed and my expected result set returned.
Oracle Web RowSet - Part1

Packt
22 Oct 2009
6 min read
The ResultSet interface requires a persistent connection with a database to invoke the insert, update, and delete row operations on database table data. The RowSet interface extends the ResultSet interface and is a container for tabular data that may operate without being connected to the data source. Thus, the RowSet interface reduces the overhead of a persistent connection with the database.

In J2SE 5.0, five new implementations of RowSet—JdbcRowSet, CachedRowSet, WebRowSet, FilteredRowSet, and JoinRowSet—were introduced. The WebRowSet interface extends the RowSet interface and is the XML document representation of a RowSet object. A WebRowSet object represents a set of fetched database table rows, which may be modified without being connected to the database.

Support for Oracle Web RowSet is a new feature in the Oracle Database 10g driver. Oracle Web RowSet precludes the requirement for a persistent connection with the database. A connection is required only for retrieving data from the database with a SELECT query, and for updating data in the database after all the required row operations on the retrieved data have been performed. Oracle Web RowSet is used for queries and modifications on the data retrieved from the database. Oracle Web RowSet, as an XML document representation of a RowSet, facilitates the transfer of data.

In the Oracle Database 10g and 11g JDBC drivers, Oracle Web RowSet is implemented in the oracle.jdbc.rowset package. The OracleWebRowSet class represents an Oracle Web RowSet. The data in the Web RowSet may be modified without connecting to the database. The database table may be updated with the OracleWebRowSet class after the modifications to the Web RowSet have been made. A database JDBC connection is required only for retrieving data from the database and for updating the database. An XML document representation of the data in a Web RowSet may be obtained for data exchange.

In this article, the Web RowSet feature in the Oracle 10g database JDBC driver is implemented in JDeveloper 10g. An example Web RowSet will be created from a database. The Web RowSet will be modified and stored in the database table. In this article, we will learn the following:

- Creating an Oracle Web RowSet object
- Adding a row to an Oracle Web RowSet
- Modifying the database table with a Web RowSet

In the second half of the article, we will cover the following:

- Reading a row from an Oracle Web RowSet
- Updating a row in an Oracle Web RowSet
- Deleting a row from an Oracle Web RowSet
- Updating the database table with a modified Oracle Web RowSet

Setting the Environment

We will use an Oracle database to generate an updatable OracleWebRowSet object. Therefore, install Oracle Database 10g, including the sample schemas. Connect to the database with the OE schema:

SQL> CONNECT OE/<password>

Create an example database table, Catalog, with the following SQL script:

CREATE TABLE OE.Catalog(Journal VARCHAR(25), Publisher VARCHAR(25),
    Edition VARCHAR(25), Title VARCHAR(45), Author VARCHAR(25));
INSERT INTO OE.Catalog VALUES('Oracle Magazine', 'Oracle Publishing',
    'July-August 2005', 'Tuning Undo Tablespace', 'Kimberly Floss');
INSERT INTO OE.Catalog VALUES('Oracle Magazine', 'Oracle Publishing',
    'March-April 2005', 'Starting with Oracle ADF', 'Steve Muench');

Configure JDeveloper 10g for the Web RowSet implementation. Create a project in JDeveloper: select File | New | General | Application. In the Create Application window, specify an Application Name and click Next. In the Create Project window, specify a Project Name and click Next.
Configure JDeveloper 10g for the Web RowSet implementation. Create a project in JDeveloper: select File | New | General | Application. In the Create Application window, specify an Application Name and click on Next. In the Create Project window, specify a Project Name and click on Next. A project is added in the Applications Navigator.

Next, we will set the project libraries. Select Tools | Project Properties and, in the Project Properties window, select Libraries | Add Library to add a library. Add the Oracle JDBC library to the project libraries. If an Oracle JDBC drivers version prior to the Oracle database 10g (R2) JDBC drivers is used, create a library from the Oracle Web RowSet implementation classes JAR file: C:\JDeveloper10.1.3\jdbc\lib\ocrs12.jar. The ocrs12.jar is required only for JDBC drivers prior to the Oracle database 10g (R2) JDBC drivers. In the Oracle database 10g (R2) JDBC drivers, the Oracle RowSet implementation classes are packaged in ojdbc14.jar; in the Oracle database 11g JDBC drivers, they are packaged in ojdbc5.jar and ojdbc6.jar.

In the Add Library window, select the User node and click on New. In the Create Library window, specify a Library Name, select the Class Path node, and click on Add Entry. Add an entry for ocrs12.jar. As Web RowSet was introduced in J2SE 5.0, if J2SE 1.4 is being used we also need to add an entry for the RowSet implementations JAR file, rowset.jar. Download the JDBC RowSet Implementations 1.0.1 zip file, jdbc_rowset_tiger-1_0_1-mrel-ri.zip, from http://java.sun.com/products/jdbc/download.html#rowset1_0_1 and extract the zip file to a directory. Click on OK in the Create Library window, and click on OK in the Add Library window. A library for the Web RowSet application is added.

Now configure an OC4J data source. Select Tools | Embedded OC4J Server Preferences. A data source may be configured globally or for the current workspace. If a global data source is created using Global | Data Sources, the data source is configured in the C:\JDeveloper10.1.3\jdev\system\oracle.j2ee.10.1.3.36.73\embedded-oc4j\config\data-sources.xml file. If a data source is configured for the current workspace using Current Workspace | Data Sources, the data source is configured in the application's data-sources.xml file; for example, the data source file for the WebRowSetApp application is WebRowSetApp-data-sources.xml. In the Embedded OC4J Server Preferences window, configure either a global data source or a data source in the current workspace. A global data source definition is available to all applications deployed in the OC4J server instance. A managed-data-source element is added to the data-sources.xml file:

<managed-data-source name='OracleDataSource'
    connection-pool-name='Oracle Connection Pool'
    jndi-name='jdbc/OracleDataSource'/>
<connection-pool name='Oracle Connection Pool'>
  <connection-factory factory-class='oracle.jdbc.pool.OracleDataSource'
      user='OE' password='pw'
      url="jdbc:oracle:thin:@localhost:1521:ORCL">
  </connection-factory>
</connection-pool>

Add a JSP, GenerateWebRowSet.jsp, to the WebRowSet project. Select File | New | Web Tier | JSP | JSP and click on OK. Select J2EE 1.3 or J2EE 1.4 in the Web Application window and click on Next. In the JSP File window, specify a File Name and click on Next. Select the default settings in the Error Page Options page and click on Next. Select the default settings in the Tag Libraries window and click on Next. Select the default options in the HTML Options window and click on Next. Click on Finish in the Finish window.
Next, configure the web.xml deployment descriptor to include a reference to the data source resource configured in the data-sources.xml file, as shown in the following listing:

<resource-ref>
  <res-ref-name>jdbc/OracleDataSource</res-ref-name>
  <res-type>javax.sql.DataSource</res-type>
  <res-auth>Container</res-auth>
</resource-ref>
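With the data source and the resource reference in place, the web application may obtain a connection, populate an OracleWebRowSet, and synchronize disconnected changes back to the table. The following is a minimal sketch, assuming the java:comp/env JNDI prefix for the resource reference and an illustrative new catalog row; the step-by-step version of these operations is covered in the remainder of the article.

import java.sql.Connection;
import javax.naming.InitialContext;
import javax.sql.DataSource;
import oracle.jdbc.rowset.OracleWebRowSet;

public class AddCatalogRow {
    public static void addRow() throws Exception {
        InitialContext ctx = new InitialContext();
        DataSource ds = (DataSource) ctx.lookup("java:comp/env/jdbc/OracleDataSource");

        Connection conn = ds.getConnection();
        OracleWebRowSet webRowSet = new OracleWebRowSet();
        webRowSet.setCommand("SELECT * FROM OE.Catalog");
        webRowSet.execute(conn);             // fetch the rows into the Web RowSet
        conn.close();                        // modifications below are disconnected

        webRowSet.moveToInsertRow();         // add a row to the Web RowSet
        webRowSet.updateString("Journal", "Oracle Magazine");
        webRowSet.updateString("Publisher", "Oracle Publishing");
        webRowSet.updateString("Edition", "Sept-Oct 2005");
        webRowSet.updateString("Title", "Creating Search Pages");
        webRowSet.updateString("Author", "Steve Muench");
        webRowSet.insertRow();
        webRowSet.moveToCurrentRow();

        Connection conn2 = ds.getConnection();   // reconnect only to synchronize
        webRowSet.acceptChanges(conn2);          // propagate the insert to OE.Catalog
        conn2.close();
    }
}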

Configuring JDBC in Oracle JDeveloper

Packt
21 Oct 2009
14 min read
Introduction

Unlike the Eclipse IDE, which requires a plug-in, JDeveloper has a built-in provision to establish a JDBC connection with a database. JDeveloper is the only Java IDE with an embedded application server, the Oracle Containers for J2EE (OC4J); thus, a database-based web application may run in JDeveloper without requiring a third-party application server. However, JDeveloper also supports third-party application servers: starting with JDeveloper 11, application developers may point the IDE to an application server instance (or OC4J instance), including third-party application servers, that they want to use for testing during development.

JDeveloper provides connection pooling for the efficient use of database connections. A database connection may be used in an ADF BC application or in a JavaEE application. A database connection in JDeveloper may be configured in the Connections Navigator. A Connections Navigator connection is available as a DataSource registered with a JNDI naming service. The database connection in JDeveloper is a reusable named connection that developers configure once and then use in as many of their projects as they want. Depending on the nature of the project and the database connection, the connection is configured in the bc4j.xcfg file or a JavaEE data source.

Here, it is necessary to distinguish between data source and DataSource. A data source is a source of data; for example, an RDBMS database is a data source. A DataSource is an interface that represents a factory for JDBC Connection objects. JDeveloper uses the term Data Source or data source to refer to a factory for connections. We will also use the term Data Source or data source to refer to a factory for connections, which in the javax.sql package is represented by the DataSource interface. A DataSource object may be created from a data source registered with the JNDI (Java Naming and Directory Interface) naming service using a JNDI lookup, and a JDBC Connection object may be obtained from a DataSource object using the getConnection method. As an alternative to configuring a connection in the Connections Navigator, a data source may also be specified directly in the data source configuration file, data-sources.xml.

In this article we will discuss the procedure to configure a JDBC connection and a JDBC data source in the JDeveloper 10g IDE. We will use the MySQL 5.0 database server and the MySQL Connector/J 5.1 JDBC driver, which supports the JDBC 4.0 specification. In this article you will learn the following:

- Creating a database connection in the JDeveloper Connections Navigator
- Configuring the Data Source and Connection Pool associated with the connection configured in the Connections Navigator
- The common JDBC connection errors

Before we create a JDBC connection and a data source, we will discuss connection pooling and DataSource.

Connection Pooling and DataSource

The javax.sql package provides the API for server-side database access. The main interfaces in the javax.sql package are DataSource, ConnectionPoolDataSource, and PooledConnection. The DataSource interface represents a factory for connections to a database and is the preferred method of obtaining a JDBC connection. An object that implements the DataSource interface is typically registered with a Java Naming and Directory API-based naming service. The DataSource interface implementation is driver-vendor specific.
The DataSource interface has three types of implementations:

- Basic implementation: There is a 1:1 correspondence between a client's Connection object and the connection with the database; for every Connection object, there is a connection with the database. With the basic implementation, the overhead of opening, initiating, and closing a connection is incurred for each client session.
- Connection pooling implementation: A pool of Connection objects is available, from which connections are assigned to the different client sessions. A connection pooling manager implements the connection pooling. When a client session does not require a connection, the connection is returned to the connection pool and becomes available to other clients. Thus, the overheads of opening, initiating, and closing connections are reduced.
- Distributed transaction implementation: Produces a Connection object that is mostly used for distributed transactions and is always connection pooled. A transaction manager implements the distributed transactions.

An advantage of using a data source is that code accessing a data source does not have to be modified when an application is migrated to a different application server; only the data source properties need to be modified. A JDBC driver that is accessed with a DataSource does not register itself with a DriverManager. A DataSource object is created using a JNDI lookup, and subsequently a Connection object is created from the DataSource object. For example, if a data source JNDI name is jdbc/OracleDS, a DataSource object may be created using a JNDI lookup. First, create an InitialContext object; subsequently, create a DataSource object using the InitialContext lookup method (note the cast, as lookup returns an Object), and from the DataSource object create a Connection object using the getConnection() method:

InitialContext ctx = new InitialContext();
DataSource ds = (DataSource) ctx.lookup("jdbc/OracleDS");
Connection conn = ds.getConnection();

The JNDI naming service, which we used to create a DataSource object, is provided by J2EE application servers such as the Oracle Application Server Containers for J2EE (OC4J) embedded in the JDeveloper IDE.

A connection in a pool of connections is represented by the PooledConnection interface, not the Connection interface. The connection pool manager, typically the application server, maintains a pool of PooledConnection objects. When an application requests a connection using the DataSource.getConnection() method, as we did in the jdbc/OracleDS data source example, the connection pool manager returns a Connection object, which is actually a handle to an object that implements the PooledConnection interface.

A ConnectionPoolDataSource object, which is typically registered with a JNDI naming service, represents a collection of PooledConnection objects. The JDBC driver provides an implementation of ConnectionPoolDataSource, which is used by the application server to build and manage a connection pool. When an application requests a connection, if a suitable PooledConnection object is available in the connection pool, the connection pool manager returns a handle to the PooledConnection object as a Connection object. If a suitable PooledConnection object is not available, the connection pool manager invokes the getPooledConnection() method of the ConnectionPoolDataSource to create a new PooledConnection object.
For example, if connectionPoolDataSource is a ConnectionPoolDataSource object, a new PooledConnection gets created as follows:

PooledConnection pooledConnection = connectionPoolDataSource.getPooledConnection();

The application does not have to invoke the getPooledConnection() method though; the connection pool manager invokes the getPooledConnection() method, and the JDBC driver implementing the ConnectionPoolDataSource creates a new PooledConnection and returns a handle to it. The connection pool manager returns a Connection object, which is a handle to a PooledConnection object, to the application requesting a connection. When an application closes a Connection object using the close() method, as follows, the connection does not actually get closed:

conn.close();

Instead, the connection handle gets deactivated; the connection pool manager does the deactivation. When an application closes a Connection object with the close() method, any client info properties that were set using the setClientInfo method are cleared. The connection pool manager is registered with a PooledConnection object using the addConnectionEventListener() method. When a connection is closed, the connection pool manager is notified, deactivates the handle to the PooledConnection object, and returns the PooledConnection object to the connection pool to be used by another application. The connection pool manager is also notified if a connection has an error. A PooledConnection object is not closed until the connection pool is being reinitialized, the server is shut down, or a connection becomes unusable.

In addition to connections being pooled, PreparedStatement objects are also pooled by default if the database supports statement pooling. Whether a database supports statement pooling can be discovered using the supportsStatementPooling() method of the DatabaseMetaData interface. The PreparedStatement pooling is also managed by the connection pool manager. To be notified of PreparedStatement events, such as a PreparedStatement getting closed or becoming unusable, a connection pool manager is registered with a PooledConnection object using the addStatementEventListener() method, and deregistered using the removeStatementEventListener() method. The addStatementEventListener and removeStatementEventListener methods are new in the PooledConnection interface in JDBC 4.0.

Pooling of Statement objects is another new feature in JDBC 4.0. The Statement interface has two new methods in JDBC 4.0 for Statement pooling: isPoolable() and setPoolable(). The isPoolable method checks if a Statement object is poolable and the setPoolable method sets the Statement object to poolable. When an application closes a PreparedStatement object using the close() method, the PreparedStatement object is not actually closed; it is returned to the pool of PreparedStatements. When the connection pool manager closes a PooledConnection object by invoking the close() method of PooledConnection, all the associated statements also get closed. Pooling of PreparedStatements provides significant optimization, but if a large number of statements are left open, it may not be an optimal use of resources.
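These statement-pooling hooks can be exercised directly from application code. The following is a minimal sketch, assuming conn is a Connection obtained from a pooled DataSource and a hypothetical Catalog table; supportsStatementPooling(), isPoolable(), and setPoolable() are the methods described above:

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class StatementPoolingCheck {
    public static void useStatementPooling(Connection conn) throws SQLException {
        DatabaseMetaData metaData = conn.getMetaData();
        if (metaData.supportsStatementPooling()) {
            PreparedStatement ps = conn.prepareStatement(
                    "SELECT Title FROM Catalog WHERE CatalogId = ?");
            ps.setPoolable(true);   // request that the statement be kept in the pool
            System.out.println("Statement is poolable: " + ps.isPoolable());
            ps.close();             // returned to the statement pool, not closed
        }
    }
}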
Thus, the following procedure is followed to obtain a connection in an application server using a data source:

1. Create a data source with a JNDI name binding to the JNDI naming service.
2. Create an InitialContext object and look up the JNDI name of the data source using the lookup method to create a DataSource object. If the JDBC driver implements the DataSource as a connection pool, a connection pool becomes available.
3. Request a connection from the connection pool. The connection pool manager checks if a suitable PooledConnection object is available. If a suitable PooledConnection object is available, the connection pool manager returns a handle to the PooledConnection object as a Connection object to the application requesting a connection.
4. If a PooledConnection object is not available, the connection pool manager invokes the getPooledConnection() method of the ConnectionPoolDataSource, which is implemented by the JDBC driver. The JDBC driver creates a PooledConnection object and returns a handle to it. The connection pool manager returns a handle to the PooledConnection object as a Connection object to the application requesting a connection.
5. When an application closes a connection, the connection pool manager deactivates the handle to the PooledConnection object and returns the PooledConnection object to the connection pool.

ConnectionPoolDataSource provides some configuration properties to configure a connection pool. The configuration pool properties are not set by the JDBC client, but are implemented or augmented by the connection pool. The properties can be set in a data source configuration; therefore, it is not for the application itself to change the settings, but for the administrator of the pool, who also happens to be the developer sometimes, to do so. The connection pool properties supported by ConnectionPoolDataSource, all of type int, are as follows:

maxStatements: The maximum number of statements the pool should keep open. 0 (zero) indicates that statement caching is not enabled.
initialPoolSize: The initial number of connections the pool should have at the time of creation.
minPoolSize: The minimum number of connections in the pool. 0 (zero) indicates that connections are created as required.
maxPoolSize: The maximum number of connections in the connection pool. 0 (zero) indicates that there is no maximum limit.
maxIdleTime: The maximum duration (in seconds) a connection can be kept open without being used before the connection is closed. 0 (zero) indicates that there is no limit.
propertyCycle: The interval in seconds the pool should wait before enforcing the current policy defined by the connection pool properties.
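How these standard properties surface depends on the driver and the pool manager. With the Oracle JDBC driver, for example, comparable settings are exposed as implicit connection cache properties of oracle.jdbc.pool.OracleDataSource. The following is a minimal sketch under that assumption; the cache property keys are Oracle-specific names, and the URL and credentials are illustrative:

import java.sql.Connection;
import java.sql.SQLException;
import java.util.Properties;
import oracle.jdbc.pool.OracleDataSource;

public class PoolProperties {
    public static Connection getPooledConnection() throws SQLException {
        OracleDataSource ods = new OracleDataSource();
        ods.setURL("jdbc:oracle:thin:@localhost:1521:ORCL");
        ods.setUser("OE");
        ods.setPassword("pw");
        ods.setConnectionCachingEnabled(true);  // enable the implicit connection cache

        Properties cacheProps = new Properties();
        cacheProps.setProperty("InitialLimit", "3");            // initialPoolSize
        cacheProps.setProperty("MinLimit", "2");                // minPoolSize
        cacheProps.setProperty("MaxLimit", "10");               // maxPoolSize
        cacheProps.setProperty("InactivityTimeout", "60");      // maxIdleTime, in seconds
        cacheProps.setProperty("MaxStatementsLimit", "20");     // maxStatements
        cacheProps.setProperty("PropertyCheckInterval", "30");  // propertyCycle
        ods.setConnectionCacheProperties(cacheProps);

        return ods.getConnection();  // served from the connection cache
    }
}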
Setting the Environment

Before getting started, we have to install the JDeveloper 10.1.3 IDE and the MySQL 5.0 database. Download JDeveloper from http://www.oracle.com/technology/software/products/jdev/index.html, and download MySQL Connector/J 5.1, the MySQL JDBC driver that supports the JDBC 4.0 specification. To install JDeveloper, extract the JDeveloper ZIP file to a directory.

Log in to the MySQL database and set the database to test. Create a database table, Catalog, which we will use in a web application. The SQL script to create the database table is listed below:

CREATE TABLE Catalog(CatalogId VARCHAR(25) PRIMARY KEY,
  Journal VARCHAR(25), Publisher VARCHAR(25),
  Edition VARCHAR(25), Title VARCHAR(45), Author VARCHAR(25));
INSERT INTO Catalog VALUES('catalog1', 'Oracle Magazine', 'Oracle Publishing',
  'Nov-Dec 2004', 'Database Resource Manager', 'Kimberly Floss');
INSERT INTO Catalog VALUES('catalog2', 'Oracle Magazine', 'Oracle Publishing',
  'Nov-Dec 2004', 'From ADF UIX to JSF', 'Jonas Jacobi');

MySQL does not support ROWID, for which support has been added in JDBC 4.0.

Having installed the JDeveloper IDE, next we will configure a JDBC connection in the Connections Navigator. Select the Connections tab and right-click on the Database node to select New Database Connection. Click on Next in the Create Database Connection Wizard. In the Create Database Connection Type window, specify a Connection Name (MySQLConnection, for example), set Connection Type to Third Party JDBC Driver, because we will be using the MySQL database, which is a third-party database for Oracle JDeveloper, and click on Next. (If a connection is to be configured with an Oracle database, select Oracle (JDBC) as the Connection Type instead.) In the Authentication window, specify Username as root (a password is not required for the root user by default) and click on Next.

In the Connection window we will specify the connection parameters, such as the driver name and connection URL. Click on New to specify a Driver Class. In the Register JDBC Driver window, specify Driver Class as com.mysql.jdbc.Driver and click on Browse to select a Library for the Driver Class. In the Select Library window, click on New to create a new library for the MySQL Connector/J 5.1 JAR file. In the Create Library window, specify Library Name as MySQL and click on Add Entry to add a JAR file entry for the MySQL library. In the Select Path Entry window, select mysql-connector-java-5.1.3-rc\mysql-connector-java-5.1.3-rc-bin.jar and click on Select. In the Create Library window, after a Class Path entry gets added to the MySQL library, click on OK. In the Select Library window, select the MySQL library and click on OK. In the Register JDBC Driver window, the MySQL library gets specified in the Library field and mysql-connector-java-5.1.3-rc\mysql-connector-java-5.1.3-rc-bin.jar gets specified in the Classpath field; now click on OK. The Driver Class, Library, and Classpath fields get specified in the Connection window. Specify URL as jdbc:mysql://localhost:3306/test and click on Next.

In the Test window, click on Test Connection to test the connection that we have configured. A connection is established and a success message gets output in the Status text area. Click on Finish in the Test window. A connection configuration, MySQLConnection, gets added to the Connections navigator, and the connection parameters are displayed in the structure view. To modify any of the connection settings, double-click on the Connection node; the Edit Database Connection window gets displayed, in which the connection Username, Password, Driver Class, and URL may be modified.
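The same connection parameters can also be verified outside the IDE with plain JDBC. The following is a minimal sketch, assuming the MySQL Connector/J JAR is on the classpath and the Catalog table created above; the empty password matches the root-user default noted earlier:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class MySQLConnectionTest {
    public static void main(String[] args) throws Exception {
        // Load the MySQL Connector/J driver (not needed with JDBC 4.0 auto-loading)
        Class.forName("com.mysql.jdbc.Driver");
        Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/test", "root", "");
        Statement stmt = conn.createStatement();
        ResultSet rs = stmt.executeQuery("SELECT CatalogId, Title FROM Catalog");
        while (rs.next()) {
            System.out.println(rs.getString("CatalogId") + ": " + rs.getString("Title"));
        }
        rs.close();
        stmt.close();
        conn.close();
    }
}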
A database connection configured in the Connections navigator has a JNDI name binding in the JNDI naming service provided by OC4J. Using the JNDI name binding, a DataSource object may be created in a J2EE application. To view or modify the configuration settings of the JDBC connection, select Tools | Embedded OC4J Server Preferences in JDeveloper. In the window displayed, select the Global | Data Sources node and, to update the data-sources.xml file with the connection defined in the Connections navigator, click on the Refresh Now button. Checkboxes may be selected to Create data-source elements where not defined and to Update existing data-source elements. The connection pool and data source associated with the connection configured in the Connections navigator get listed. Select the jdev-connection-pool-MySQLConnection node to list the connection pool properties as Property Set A and Property Set B. The tuning properties of the JDBC connection pool may be set in the Connection Pool window.