
How-To Tutorials - Programming


Application, Session, and Request Scope in ColdFusion 9

Packt
27 Jul 2010
8 min read
(For more resources on ColdFusion, see here.)

The start methods

Let us look at the start methods and make some observations. Each method has its own set of arguments. All Application.cfc methods return a Boolean value of true or false to declare whether or not they completed correctly. Any code you place inside a method executes when the corresponding start event occurs; the events match the names of the methods. We will also include some basic code that will help you build a reusable application core, and discuss what those features provide.

Application start method—onApplicationStart()

The following is the code structure of the application start method. You could place these methods in any order in the CFC, as the order does not matter: code that uses CFCs only requires the methods to exist, and if they exist, it will call them. We order them in our code so that the structure is easier to read and understand from a human perspective.

<cffunction name="onApplicationStart" output="false">
  <cfscript>
    // create default stat structure and pre-request values
    application._stat = structNew();
    application._stat.started = now();
    application._stat.thisHit = now();
    application._stat.hits = 0;
    application._stat.sessions = 0;
  </cfscript>
</cffunction>

There are no arguments for the onApplicationStart() method. We have included some extra code to show an example of what can be done in this function. Please note that if we change the code in this method, it will only run the very first time the application is hit. To make it run again, we need to either change the application name or restart the ColdFusion server. The Application variables section explained previously shows how to change the application's name.

From the start methods, we can see that we can access the variable scopes that allow persistence of key information. To understand the power of this object, we will create some statistics that can be used in most situations: for debugging, logging, or any other appropriate use case. Again, be aware that this method only runs on the first request made to the ColdFusion server for that application. We will update many of our statistics in the request methods, and one of our variables in the session end method.

Session start method—onSessionStart()

The session start method is called only when a request is made for a new session; ColdFusion keeps track of this for us. The following example code keeps a record of session-based statistics similar to the application-based statistics:

<cffunction name="onSessionStart" output="false">
  <cfscript>
    // create default session stat structure and pre-request values
    session._stat = structNew();
    session._stat.started = now();
    session._stat.thisHit = now();
    session._stat.hits = 0;
    // at start of each session update count for application stat
    application._stat.sessions += 1;
  </cfscript>
</cffunction>

You might have noticed that the previous code uses +=. In ColdFusion prior to version 8, you had to write that particular line in a different way. The following two examples are functionally the same (example one works in all versions; example two works only in version 8 and higher):

Example 1: myTotal = myTotal + 3
Example 2: myTotal += 3

This syntax, common in JavaScript, ActionScript, and many other languages, was added in ColdFusion version 8.

We update the application-scoped count here because sessions are hidden from one another and cannot see each other; the application scope is the shared place to count each new session as it starts.
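For readers coming from Java web development, these start events are loosely analogous to the servlet listener interfaces. The sketch below is an analogy only, not how ColdFusion is implemented internally; it keeps the same kind of hit and session counters using the standard javax.servlet API, and the class name is a hypothetical stand-in:

```java
import java.util.Date;
import java.util.concurrent.atomic.AtomicInteger;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.http.HttpSessionEvent;
import javax.servlet.http.HttpSessionListener;

// Analogy to Application.cfc: context events play the role of
// onApplicationStart(), session events the role of onSessionStart()/onSessionEnd().
public class AppStatListener implements ServletContextListener, HttpSessionListener {

    // Analogy to onApplicationStart(): runs once when the web application starts.
    @Override
    public void contextInitialized(ServletContextEvent sce) {
        sce.getServletContext().setAttribute("stat.started", new Date());
        sce.getServletContext().setAttribute("stat.hits", new AtomicInteger(0));
        sce.getServletContext().setAttribute("stat.sessions", new AtomicInteger(0));
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // Analogy to onApplicationEnd(): last chance for logging.
    }

    // Analogy to onSessionStart(): bump the shared session counter.
    @Override
    public void sessionCreated(HttpSessionEvent se) {
        AtomicInteger sessions = (AtomicInteger)
                se.getSession().getServletContext().getAttribute("stat.sessions");
        sessions.incrementAndGet();
    }

    // Analogy to onSessionEnd(): decrement it again.
    @Override
    public void sessionDestroyed(HttpSessionEvent se) {
        AtomicInteger sessions = (AtomicInteger)
                se.getSession().getServletContext().getAttribute("stat.sessions");
        sessions.decrementAndGet();
    }
}
```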
Request start method—onRequestStart()

This is one of the longest methods in the article. The first thing you will notice is that ColdFusion passes the name of the requested script to the onRequestStart() method. In this example, we instruct ColdFusion to block execution of any script whose name begins with an underscore when it is called remotely. This means that a .cfm or .cfc file starting with an underscore cannot be requested directly from outside the server, yet it can still run when called from pages inside the server. In effect, all such files become locally accessible only:

<cffunction name="onRequestStart" output="false">
  <cfargument name="thePage" type="string" required="true">
  <cfscript>
    var myReturn = true;
    // fancy code to block pages that start with underscore
    if(left(listLast(arguments.thePage,"/"),1) EQ "_")
    {
      myReturn = false;
    }
    // update application stat on each request
    application._stat.lastHit = application._stat.thisHit;
    application._stat.thisHit = now();
    application._stat.hits += 1;
    // update session stat on each request
    session._stat.lastHit = session._stat.thisHit;
    session._stat.thisHit = now();
    session._stat.hits += 1;
  </cfscript>
  <cfreturn myReturn>
</cffunction>

The remainder of the method updates all the application and session statistics variables that need to change with each request. Notice that we also record the last time the application or session was requested.
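Continuing the servlet analogy from the previous section (again an illustration with a hypothetical class name, not ColdFusion internals), the same underscore-blocking rule could be expressed as a servlet filter:

```java
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Analogy to the onRequestStart() check: reject any remote request whose
// file name begins with an underscore.
public class UnderscoreBlockFilter implements Filter {

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        String path = ((HttpServletRequest) req).getServletPath();
        String fileName = path.substring(path.lastIndexOf('/') + 1);
        if (fileName.startsWith("_")) {
            // Roughly equivalent to returning false from onRequestStart().
            ((HttpServletResponse) res).sendError(HttpServletResponse.SC_NOT_FOUND);
            return;
        }
        chain.doFilter(req, res); // let every other request through
    }

    @Override
    public void init(FilterConfig cfg) { }

    @Override
    public void destroy() { }
}
```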
The end methods

Some of the behavior of this object was impossible to achieve in earlier versions of ColdFusion. It was possible to code an end-of-request function, but only a few programmers made use of it; with this object, many more people take advantage of these features. The newly added methods can run code specifically when a session ends and when an application ends. This lets us do things we could not do previously. For example, we can keep a record of how long a user is online without having to access the database with each request: store the start time in the session scope when the session starts and, when the session ends, write all of that information to a session log table if logging is desired on your site.

Request end method—onRequestEnd()

We are not going to use every method that is available to us; as we already have the concept from the other sections, that would be redundant. This method is very similar to onRequestStart(), except that it runs after the requested page has been processed. If you create content in this method and set the output attribute to true, that content is sent back to the browser. Here you can place code that logs information about our requests:

<cffunction name="onRequestEnd" returnType="void" output="false">
  <cfargument name="thePage" type="string" required="true">
</cffunction>

Session end method—onSessionEnd()

In the session end method, we can perform logging for analytical statistics specific to the end of a session, if your site needs it. You must use the arguments scope to read both the application and session variables, and if you are changing application variables, as in our example code, you must go through the arguments scope for that as well:

<cffunction name="onSessionEnd" returnType="void" output="false">
  <cfargument name="SessionScope" type="struct" required="true">
  <cfargument name="ApplicationScope" type="struct" required="false">
  <cfscript>
    // NOTE: You must use the arguments scope below to access the
    // application structure inside this method.
    arguments.ApplicationScope._stat.sessions -= 1;
  </cfscript>
</cffunction>

Application end method—onApplicationEnd()

This is our end method for applications, and another place where you can perform logging. As in the session end method, you must use the arguments scope in order to read the application variables. It is also good to note that at this point you can no longer access the session scope.

<cffunction name="onApplicationEnd" returnType="void" output="false">
  <cfargument name="applicationScope" required="true">
</cffunction>

On Error method—onError()

The following code demonstrates how flexibly we can manage errors sent to this method. If the error comes from an Application.cfc event, then the name of the method that had the issue is contained in the arguments.eventname variable; otherwise, that argument is an empty string. In our code, we change the label on our dump statement so that it is more obvious where the error was generated:

<cffunction name="onError" returnType="void" output="true">
  <cfargument name="exception" required="true">
  <cfargument name="eventname" type="string" required="true">
  <cfif arguments.eventName NEQ "">
    <cfdump var="#arguments.exception#" label="Application core exception">
  <cfelse>
    <cfdump var="#arguments.exception#" label="Application exception">
  </cfif>
</cffunction>

Introduction to the Application.cfc Object and Application Variables in ColdFusion 9

Packt
26 Jul 2010
9 min read
(For more resources on ColdFusion, see here.)

Life span

Each of our shared scopes has a life span: it comes into existence at a given point and ceases to exist at a predictable point. Learning to recognize these points is very important; it is also the first aspect of "scope".

The request scope is created when a request is made to your ColdFusion server from any source. This could be a web browser, or any type of web application that can make an HTTP request to your server. Any variable placed into the request structure persists until the request processing is complete. Variable persistence is the property of data remaining available for a set period of time. Without persistence, we would have to make information last by passing it all from one web page to another, in every form and every link. You may have heard people say that web pages are stateless; if you passed all state back and forth through the browser this way, the pages would be closer to stateful, but they would be difficult to manage. In this article, we will learn how to create a stateful web application.

Here is a chart of the "life spans" of the key scopes:

Request
- Begins: when the server receives a request from any source; created before any session or application is created.
- Ends: when the processing for this request is complete. Its ending has nothing to do with the end of applications or sessions.

Application
- Begins: before a session but after the request; begins only when an Application.cfc file is first run with the current unique application name, or when the <cfapplication> tag is called in older CF applications.
- Ends: when the time since the last request exceeds the expiration time set for the application.

Session
- Begins: after an application is created; created inside the same sources as the application.
- Ends: when the time since the last request exceeds the expiration time set for the session.

Client
- Begins: when a unique visitor first visits the server. Client variables can be stored in encrypted cookies if you want them to expire; cookies have limited storage space and are not always available.

We will discuss the scopes in more detail later in this article series. All the scopes, except the client scope, expire if you shut down your server. When we close our browser window or reboot the client machine, a session does not come to an end; only our connectivity to that particular session scope ends. The information and resources storing that session are held until the session expires. When we connect again, the server starts a new session, and we are unable to reconnect to the former one.

Introducing the Application.cfc object

The first thing we need to understand is how this application file is called. When a .cfm or .cfc file is requested, the server looks for an Application.cfc file in the directory the page is being called from; it also looks for an Application.cfm file. We will not use the .cfm version to create the application or session scopes, because the .cfc version provides many advantages and is much more powerful: it offers better encapsulation and code reuse. If the application file is found, ColdFusion runs it. If it is not found, the server moves up one directory toward the server root in order to search for an Application.cfc file. The search stops either when a file is found, or once the root directory is reached without finding one. There are several methods in the Application.cfc file.
It is worth noting that this file does not exist by default; the developer must create it. The following list gives the method names and when each method is called:

- onApplicationEnd: The application ends; the application times out.
- onApplicationStart: The application first starts: the first request for a page is processed, or the first CFC method is invoked by an event gateway instance, a web service, or Macromedia Flash Remoting CFC.
- onCFCRequest: HTTP or AMF (remote Flash) calls.
- onError: An exception occurs that is not caught by a try/catch block.
- onMissingTemplate: ColdFusion receives a request for a non-existent page.
- onRequest: The onRequestStart() method finishes (this method can filter request contents).
- onRequestEnd: All pages in the request have been processed.
- onRequestStart: A request starts.
- onSessionEnd: A session ends.
- onSessionStart: A session starts.
- onServerStart: A ColdFusion server starts.

When the Application.cfc file runs for the first time, these methods are called in the order described below. The request variable scope is available at all times; yet, to make the code flow correctly, the designers of this object fixed the order in which the server runs the code. You will also find that, for technical reasons, some issues arise when we use the onRequest() method; therefore, we will not be using it. The steps are as follows:

1. Browser Request: The browser sends a request to the server. The server passes the processing to the Application.cfc file if it exists, and skips this step if it does not. The Application.cfc methods likewise execute only if they exist.
2. Application Start: Application.cfc checks whether the request belongs to a pre-existing application, based on the unique application name. If the application has not started, it calls the onApplicationStart() method, if that method exists.
3. Session Start: On a request that begins a new session, if the onSessionStart() method exists, it is called at this point in the processing.
4. Request Start: On every request to the server, if the onRequestStart() method exists, it is called at this point in the processing.
5. OnRequest: This step normally occurs after the onRequestStart() method. If the onRequest() method is used, by default it prevents the calling of CFCs. We do not say that it is always wrong to use this method; however, we will avoid it as much as possible.
6. Requested Page: The actual page requested is called and processed.
7. Request End: After the requested page is processed, control passes back to the onRequestEnd() method, if it exists in Application.cfc.
8. Return Response to Browser: At this point ColdFusion has completed its processing and responds to the browser request: HTML, a redirect, or any other response.
9. Session End: The onSessionEnd() method is called, if it exists, when the time since the user last made a request to the server exceeds the session timeout.
10. Application End: The onApplicationEnd() method is called, if it exists, when the time since the last request received by the server exceeds the application timeout.

The application and session scopes have already been created on the server and do not need to be reinitialized. Once the application is created and other requests are made to the server, the following methods are called with each request: onRequestStart(), onRequest(), and onRequestEnd().

In previous versions of ColdFusion, when the onRequest() method of Application.cfc was called, it blocked CFCs from operating correctly. You may see some fancy code in older frameworks that checks whether the current request is calling a CFC and, if so, deletes the onRequest() method for that request. There is now a new method called onCFCRequest(); if you need backward compatibility with previous versions of ColdFusion, you would still delete the onRequest() method. You can use either approach, depending on whether the code must run on prior versions of ColdFusion. The onCFCRequest() method executes at the same point as the onRequest() method in the previous examples. You can add this code or not, depending on your preference; the previous example still operates as expected if you leave the method out.

The OnRequestEnd.cfm page used with Application.cfm-based page calls does not execute if the page runs a <cflocation> tag before OnRequestEnd.cfm is reached. It is not part of Application.cfc-based applications; it was intended for use with Application.cfm in older versions of ColdFusion.

Here is a less granular view of the complete process. The application behaves just as described above; we simply do not go into explicit detail about every method that is called internally. The requested page can call additional code segments: a CFC, a custom tag, or any included page. Those pages can also include other pages, creating a proper hierarchy of functionality. Always try to make sure that functionality is layered; the separation of layers provides a structured and simpler approach to the creation, maintenance, and long-term support of your web applications.

The variables set in Application.cfc can be modified before the requested page is called, and even later. Say, for example, you want to record the time at which the session was last hit. You could set a variable such as <cfset session._stat.lasthit = now()>, either at the beginning of a request or at the end. The question is where to put this code. At first, you might think of adding it to the onSessionStart() or onSessionEnd() method. That would cause an issue, because those two methods are called only when a session has begun or when it has come to an end. To work properly, the code belongs in the onRequestStart() or onRequestEnd() section of the Application.cfc file, because the request methods are called with every server request. We will have a complete Application.cfc to use as a model for creating our own variations in future projects. Remember to place the code in the right place, and test your changes using cfdump or some other means to make sure they work.

Connecting to Microsoft SQL Server Compact 3.5 with Visual Studio

Packt
13 Jul 2010
3 min read
SQL Server Compact Edition 3.5 can be used to create applications for a number of business uses, such as portable applications, occasionally connected clients, and embedded applications and devices. SQL Server Compact differs from other SQL Server editions in that the database is just one file, which can be password protected and features 128-bit file-level encryption. It enforces referential integrity, supports multiple connections, and has transaction support with rich data types.

In this tutorial by Jayaram Krishnaswamy, various scenarios where you may need to connect to SQL Server Compact using the Visual Studio IDE (both 2008 and 2010) are described in detail. Connecting to SQL Server Compact 3.5 using Visual Studio 2010 Express (the free version of Visual Studio) is also described. The connection is the starting point for any database-related program, and mastering the connection task is therefore crucial to working with SQL Server Compact.

(For more resources on Microsoft, see here.)

If you are familiar with SQL Server, you already know much of SQL Server Compact. It can be administered from SSMS, and using SQL syntax and ADO.NET technology you can be immediately productive with it. It is free to download (and free to deploy and redistribute), and comes in the form of just one code-free file. Its small footprint makes it easily deployable to a variety of device sizes, and it requires no administration. It also supports a subset of T-SQL and a rich set of data types. It can be used in creating desktop/web applications using Visual Studio 2008 and Visual Studio 2010, and it comes with a sample Northwind database.

Download details

Microsoft SQL Server Compact 3.5 may be downloaded from the Microsoft site, where the detailed features of the program are also described. Several bugs have been fixed in the program, as detailed in the two service packs; by applying the latest service pack, SP2, the installed version on the machine is upgraded to the latest version.

Connecting to SQL Server Compact from Windows and Web projects

You can use the Server Explorer in Visual Studio to drag and drop objects from SQL Server Compact, provided you add a connection to SQL Server Compact. In fact, in the Visual Studio 2008 IDE you can configure a data connection from the View menu without even starting a project. Clicking Add Connection... brings up the Add Connection dialog. Click Change... to choose the correct data source for SQL Server Compact; the default is the SQL Server client. The Change Data Source window is displayed. Highlight Microsoft SQL Server Compact 3.5 and click OK. You are returned to Add Connection, where you can browse for or create a database, or choose one on an ActiveSync-connected device such as a smartphone with SQL Server Compact for devices installed. For now, connect to one on the computer (the default My Computer option): the sample database Northwind. Click Browse.... The Select SQL Server Compact 3.5 Database File dialog opens, with the sample database Northwind displayed. Click Open. The database file is entered in the Add Connection dialog. You may test the connection; you should get a "Test connection succeeded" message from Microsoft Visual Studio. Click OK. The Northwind.sdf file is displayed as a tree with Tables and Views. Right-click Northwind.sdf in the Server Explorer and choose the Properties menu item.
You will see the connection string for this connection.
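For orientation, a SQL Server Compact connection string is typically just the path to the .sdf file; the exact path below is an assumption based on the default sample installation, and the Password clause is needed only if the file is password protected:

```text
Data Source=C:\Program Files\Microsoft SQL Server Compact Edition\v3.5\Samples\Northwind.sdf;Password=<yourPassword>
```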

Building Your Hello Mediation Project using IBM WebSphere Process Server 7 and Enterprise Service Bus 7

Packt
13 Jul 2010
7 min read
This article will help you build your first HelloWorld WebSphere Enterprise Service Bus (WESB) mediation application and WESB mediation flow-based application. It will then give an overview of Service Message Object (SMO) standards and how they relate to building applications using an SOA approach. At the completion of this article, you will understand how to begin building, testing, and deploying WESB mediations, including:

- An overview of WESB programming fundamentals, including WS-* standards and Service Message Objects (SMOs)
- Building the first mediation module project in WID
- Using mediation flows
- Deploying the module on a server and testing the project
- Logging, debugging, and troubleshooting basics
- Exporting projects from WID

WS standards

Before we get down to discussing mediation flows, it is essential to take a moment and acquaint ourselves with some of the Web Service (WS) standards that WebSphere Integration Developer (WID) complies with. By using WID's user-friendly visual interfaces and drag-and-drop paradigms, developers automatically build process flows that are globally standardized and compliant. This becomes critical in an ever-changing business environment that demands flexible integration with business partners. Here are some of the key specifications you should be aware of, as defined by the World Wide Web Consortium (W3C):

- WS-Security: One of the standards used to secure Web Services at the message level, independent of the transport protocol. To learn more about WID and WS-Security, refer to http://publib.boulder.ibm.com/infocenter/dmndhelp/v7r0mx/topic/com.ibm.wbit.help.runtime.doc/deploy/topics/cwssecurity.html.
- WS-Policy: A framework that helps define characteristics of Web Services through policies. To learn more about WS-Policy, refer to http://www.w3.org/TR/ws-policy/.
- WS-Addressing: A specification that aids interoperability between Web Services by standardizing ways to address them and by providing addressing information in messages. In version 7.0, WID was enhanced to provide Web Services support for WS-Addressing and attachments. For more details, refer to http://www.w3.org/Submission/ws-addressing/.

What are mediation flows?

In SOA, service consumers and service providers use an ESB as a communication vehicle. When services are loosely coupled through an ESB, the overall system has more flexibility and can easily be modified and enhanced to meet changing business requirements. An ESB by itself is an enabler of many patterns: it enables protocol transformations and provides mediation services, which can inspect, modify, augment, and transform a message as it flows from requestor to provider.

In WebSphere Enterprise Service Bus, mediation modules provide the ESB functionality. The heart of a mediation module is the mediation flow component, which provides the mediation logic applied to a message as it flows from a service consumer to a provider. The mediation flow component is a type of SCA component that is typically, but not exclusively, used in a mediation module. A mediation flow component contains a source interface and target references, similar to other SCA components. The source interface is described using WSDL and must match the WSDL definition of the export to which it is wired. The target references are described using WSDL and must match the WSDL definitions of the imports or Java components to which they are wired.
The mediation flow component handles most of the ESB functions, including:

- Message filtering: the capability to filter messages based on the content of the incoming message.
- Dynamic routing and selection of service provider: the capability to route incoming messages to the appropriate target at runtime, based on predefined policies and rules.
- Message transformation: the capability to transform messages between source and target interfaces. The transformation can be defined using XSL stylesheets or business object maps.
- Message manipulation/enrichment: the capability to manipulate or enrich incoming message fields before they are sent to the target, including database lookups as needed.
- Custom mediation behavior, defined in Java, if the previous capabilities do not fit your requirements.

Architecturally, a mediation flow sits between a single service requester (the source interface) on one side and multiple service providers (the target references) on the other. The mediation flow is the set of logical processing steps that efficiently routes messages from the service requestor to the most appropriate service provider, and back to the service requestor, for that particular transaction and business environment.

A mediation flow can be a request flow, or a request and response flow. In a request flow, the sequence of processing steps is defined only from the source to the target; no message is returned to the source. In a request and response flow, the sequence of processing steps is defined from the single source to the multiple targets, and back from the multiple targets to the single source. In the next section, we take a deeper look into message objects and how they are handled in mediation flows.

Mediation primitives

What are the various mediation primitives available in WESB? Mediation primitives are the core building blocks used to process the request and response messages in a mediation flow. There are built-in primitives, which perform a predefined function that is configurable through properties, and custom mediation primitives, which allow you to implement the function in Java. Mediation primitives have input and output terminals, through which the message flows. Almost all primitives have only one input terminal, but multiple input terminals are possible for custom mediation primitives. Primitives can have zero, one, or more output terminals. There is also a special terminal, called the fail terminal, through which the message is propagated when the processing of a primitive results in an exception.
The different types of mediation primitives available in a mediation flow, along with their purposes, are summarized below:

Service invocation
- Service Invoke: Invoke an external service; the message is modified with the result.

Routing primitives
- Message Filter: Selectively forward messages based on element values.
- Type Filter: Selectively forward messages based on element types.
- Endpoint Lookup: Find a potential endpoint from a registry query.
- Fan Out: Start an iterative or split flow for aggregation.
- Fan In: Check completion of a split or aggregated flow.
- Policy Resolution: Set policy constraints from a registry query.
- Flow Order: Execute or fire the output terminals in a defined order.
- Gateway Endpoint Lookup: Find a potential endpoint in special cases from a registry.
- SLA Check: Verify that the message complies with the SLA.
- UDDI Endpoint Lookup: Find potential endpoints from a UDDI registry query.

Transformation primitives
- XSL Transformation: Update and modify messages using XSLT.
- Business Object Map: Update and modify messages using business object maps.
- Message Element Setter: Set, update, copy, and delete message elements.
- Set Message Type: Set elements to a more specific type.
- Database Lookup: Set elements from contents within a database.
- Data Handler: Update and modify messages using a data handler.
- Custom Mediation: Read, update, and modify messages using Java code.
- SOAP Header Setter: Read, update, copy, and delete SOAP header elements.
- HTTP Header Setter: Read, update, copy, and delete HTTP header elements.
- JMS Header Setter: Read, update, copy, and delete JMS header elements.
- MQ Header Setter: Read, update, copy, and delete MQ header elements.

Tracing primitives
- Message Logger: Write a log message to a database or a custom destination.
- Event Emitter: Raise a common base event to CEI.
- Trace: Send a trace message to a file or system out for debugging.

Error handling primitives
- Stop: Stop a single path in the flow without raising an exception.
- Fail: Stop the entire flow and raise an exception.
- Message Validator: Validate a message against a schema and assertions.

Mediation subflow primitive
- Subflow: Represents a user-defined subflow.

Fine Tune the View layer of your Fusion Web Application

Packt
08 Jul 2010
5 min read
(For more resources on Oracle, see here.)

Use AJAX to boost the performance of your web pages

Asynchronous JavaScript and XML (AJAX) avoids full page refreshes and minimizes the data transferred between client and server during each round trip. ADF Faces is packaged with more than 150 AJAX-enabled components, which add AJAX capability to your applications with zero effort. Certain events on an ADF Faces component trigger Partial Page Rendering (PPR) by default. However, action components by default trigger a full page refresh, which is quite expensive and may not be required in most cases. Make sure that you set the partialSubmit attribute to true whenever possible to optimize the page lifecycle. When partialSubmit is set to true, only the components that have values for their partialTriggers attribute are processed through the lifecycle.

Avoid mixing HTML tags with ADF Faces components

Don't mix HTML tags and ADF Faces components, even though the JSF design lets you do so using the <f:verbatim> tag. Mixing raw HTML content with ADF Faces components may produce undesired output, especially when your page has a complex layout. Using <f:verbatim> to embed JavaScript or CSS is highly discouraged; instead, use <af:resource>, which adds the resource to the document element and optimizes further processing during tree rendering.

Avoid long Ids for user interface components

It is always recommended to use short Ids for your user interface (UI) components. Your JSF page ultimately boils down to HTML content, whose size determines the network bandwidth usage of your web application. Long Ids on UI components increase the size of the generated HTML content, and this becomes even worse when many UI elements have long Ids. If no Id is set for a UI component explicitly, the ADF Faces runtime auto-generates one. ADF Faces lets you control the auto-generation of component Ids by setting the context parameter oracle.adf.view.rich.SUPPRESS_IDS in the web.xml file. The <context-param> entry in the web.xml file may look like the following:

<context-param>
  <param-name>oracle.adf.view.rich.SUPPRESS_IDS</param-name>
  <param-value>auto or explicit</param-value>
</context-param>

The possible values for oracle.adf.view.rich.SUPPRESS_IDS are:

- auto: Components can suppress auto-generated Ids, but explicitly set Ids are honored.
- explicit: This is the default value. In this case, both auto-generated Ids and explicitly set Ids are suppressed.

Avoid inline JavaScript and Cascading Style Sheets (CSS) whenever possible

If you need custom JavaScript functions or CSS in your application, keep them in external files. Avoid inline JavaScript and CSS as much as possible. A better idea is to group them logically in external files and embed the required one in the candidate page using the <af:resource> tag. If you keep JavaScript and CSS in external files, they are cached by the browser, and subsequent requests for these resources are served from the cache. This in turn reduces network usage and improves performance.

Avoid mixing JSF/ADF Faces and JavaServer Pages Standard Tag Library (JSTL) tags

Stick to JSF/ADF Faces components for building your UI as much as you can. JSF components may not work properly with some JSTL tags, as they are not designed to co-exist.
Relying on JSF/ADF Faces components may also give you better extensibility and portability for your application as a bonus.

Don't generate client components unless really needed

The ADF Faces runtime generates client components only when they are really required on the client. However, you can override this behavior by setting the clientComponent attribute to true, as shown in the following code snippet:

<af:commandButton text="DoSomething" clientComponent="true">
  <af:clientListener method="doSomething" type="action"/>
</af:commandButton>

Set clientComponent to true only if you need to access the component on the client side using JavaScript. Otherwise, this may increase the Document Object Model (DOM) size on the client and affect the performance of your web page. In the runtime coordination between the client-side and server-side component trees, no client-side component is generated for a server component whose clientComponent attribute is set to false.

Prefer not rendering components over hiding them in the DOM tree

If you need to hide UI components conditionally on a page, try to achieve this with the rendered property of the component instead of the visible property. The latter creates the component instance and then hides it in the client-side DOM tree, whereas the former skips component creation on the server side altogether, so the element is never added to the client-side DOM. Consequently, setting rendered to false reduces the client content size and gives better performance as a bonus.

Java in Oracle Database

Packt
07 Jul 2010
5 min read
"The views expressed in this article are the author's own and do not necessarily reflect the views of Oracle." (For more resources on Oracle, see here.) Introduction This article is better understood by people who have some familiarity with Oracle database, SQL, PL/SQL, and of course Java (including JDBC). Beginners can also understand the article to some extent, because it does not contain many specifics/details. The article can be useful to software developers, designers and architects working with Java. Oracle database provides a Java runtime in its database server process. Because of this, it is possible not only to store Java sources and Java classes in an Oracle database, but also to run the Java classes within the database server. Such Java classes will be 'executed' by the Java Virtual Machine embedded in the database server. The Java platform provided is J2SE-compliant, and in addition to the JVM, it includes all the Java system classes. So, conceptually, whatever Java code that can be run using the JREs (like Sun's JRE) on the operating system, can be run within the Oracle database too. Java stored procedure The key unit of the Java support inside the Oracle database is the 'Java Stored Procedure' (that may be referred to as JSP, as long as it is not confused with JavaServer Pages). A Java stored procedure is an executable unit stored inside the Oracle database, and whose implementation is in Java. It is similar to PL/SQL stored procedures and functions. Creation Let us see an example of how to create a simple Java stored procedure. We will create a Java stored procedure that adds two given numbers and returns the sum. The first step is to create a Java class that looks like the following: public class Math{ public static int add(int x, int y) { return x + y; }} This is a very simple Java class that just contains one static method that returns the sum of two given numbers. Let us put this code in a file called Math.java, and compile it (say, by doing 'javac Math.java') to get Math.class file. The next step is to 'load' Math.class into the Oracle database. That is, we have to put the class file located in some directory into the database, so that the class file gets stored in the database. There are a few ways to do this, and one of them is to use the command-line tool called loadjava provided by Oracle, as follows: loadjava -v -u scott/tiger Math.class Generally, in Oracle database, things are always stored in some 'schema' (also known as 'user'). Java classes are no exception. So, while loading a Java class file into the database, we need to specify the schema where the Java class should be stored. Here, we have given 'scott' (along with the password). There are a lot of other things that can be done using loadjava, but we will not go into them here. Next, we have to create a 'PL/SQL wrapper' as follows: SQL> connect scott/tigerConnected.SQL>SQL> create or replace function addition(a IN number, b IN number) return number 2 as language java name 'Math.add(int, int) return int'; 3 /Function created.SQL> We have created the PL/SQL wrapper called 'addition', for the Java method Math.add(). The syntax is same as the one used to create a PL/SQL function/procedure, but here we have specified that the implementation of the function is in the Java method Math.add(). And that's it. We've created a Java stored procedure! Basically, what we have done is, implemented our requirement in Java, and then exposed the Java implementation via PL/SQL. 
Using JDeveloper, an IDE from Oracle, all these steps (creating the Java source, compiling it, loading it into the database, and creating the PL/SQL wrapper) can be done easily from within the IDE.

One thing to remember is that we can create Java stored procedures for static Java methods only, not for instance methods. This is not a big disadvantage, and in fact makes sense, because even the main() method, the entry point for a Java program, is also static. Here, since Math.add() is the entry point, it has to be static. We can write as many static methods in our Java code as needed and make them entry points by creating PL/SQL wrappers for them.

Invocation

We can call the Java stored procedure we have just created like any PL/SQL procedure or function, either from SQL or from PL/SQL:

SQL> select addition(10, 20) from dual;

ADDITION(10,20)
---------------
             30

SQL>
SQL> declare
  2    s number;
  3  begin
  4    s := addition(10, 20);
  5    dbms_output.put_line('SUM = ' || s);
  6  end;
  7  /
SUM = 30

PL/SQL procedure successfully completed.

SQL>

Here, the select query, as well as the PL/SQL block, invoked the PL/SQL function addition(), which in turn invoked the underlying Java method Math.add(). A main feature of Java stored procedures is that the caller (like the select query above) has no idea that the procedure is implemented in Java. Stored procedures implemented in PL/SQL and Java are thus called alike, without the caller needing to know the language of the underlying implementation. In general, whatever Java code we have can be seamlessly integrated into PL/SQL code via PL/SQL wrappers. Put another way, we now have more than one language option for implementing a stored procedure: PL/SQL and Java. If we have a project where stored procedures are to be implemented, Java is a good option, because today it is relatively easy to find a Java programmer.
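Since the wrapper is an ordinary PL/SQL function, it can also be invoked from client-side Java through JDBC. The following is a minimal sketch, assuming the Oracle JDBC thin driver is on the classpath; the URL, user, and password are placeholder assumptions for a default local database:

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Types;

public class CallAddition {
    public static void main(String[] args) throws Exception {
        // Assumed local connection details; adjust host, port, and SID as needed.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@localhost:1521:orcl", "scott", "tiger")) {
            // Standard JDBC escape syntax for calling a function with a return value.
            try (CallableStatement cs = conn.prepareCall("{? = call addition(?, ?)}")) {
                cs.registerOutParameter(1, Types.INTEGER);
                cs.setInt(2, 10);
                cs.setInt(3, 20);
                cs.execute();
                System.out.println("SUM = " + cs.getInt(1)); // prints SUM = 30
            }
        }
    }
}
```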

Creating a Catalyst Application in Catalyst 5.8

Packt
30 Jun 2010
7 min read
Creating the application skeleton

Catalyst comes with a script called catalyst.pl to make this task as simple as possible. catalyst.pl takes a single argument, the application's name, and creates an application with that specified name. The name can be any valid Perl module name such as MyApp or MyCompany::HR::Timesheets. Let's get started by creating MyApp, which is the example application for this article:

$ catalyst.pl MyApp
created "MyApp"
created "MyApp/script"
created "MyApp/lib"
created "MyApp/root"
created "MyApp/root/static"
created "MyApp/root/static/images"
created "MyApp/t"
created "MyApp/lib/MyApp"
created "MyApp/lib/MyApp/Model"
created "MyApp/lib/MyApp/View"
created "MyApp/lib/MyApp/Controller"
created "MyApp/myapp.conf"
created "MyApp/lib/MyApp.pm"
created "MyApp/lib/MyApp/Controller/Root.pm"
created "MyApp/README"
created "MyApp/Changes"
created "MyApp/t/01app.t"
created "MyApp/t/02pod.t"
created "MyApp/t/03podcoverage.t"
created "MyApp/root/static/images/catalyst_logo.png"
created "MyApp/root/static/images/btn_120x50_built.png"
created "MyApp/root/static/images/btn_120x50_built_shadow.png"
created "MyApp/root/static/images/btn_120x50_powered.png"
created "MyApp/root/static/images/btn_120x50_powered_shadow.png"
created "MyApp/root/static/images/btn_88x31_built.png"
created "MyApp/root/static/images/btn_88x31_built_shadow.png"
created "MyApp/root/static/images/btn_88x31_powered.png"
created "MyApp/root/static/images/btn_88x31_powered_shadow.png"
created "MyApp/root/favicon.ico"
created "MyApp/Makefile.PL"
created "MyApp/script/myapp_cgi.pl"
created "MyApp/script/myapp_fastcgi.pl"
created "MyApp/script/myapp_server.pl"
created "MyApp/script/myapp_test.pl"
created "MyApp/script/myapp_create.pl"
Change to application directory, and run "perl Makefile.PL" to make sure
your installation is complete.

At this point it is a good idea to check that the installation is complete by switching to the newly-created directory (cd MyApp) and running perl Makefile.PL. You should see something like the following:

$ perl Makefile.PL
include /Volumes/Home/Users/solar/Projects/CatalystBook/MyApp/inc/Module/Install.pm
include inc/Module/Install/Metadata.pm
include inc/Module/Install/Base.pm
Cannot determine perl version info from lib/MyApp.pm
include inc/Module/Install/Catalyst.pm
*** Module::Install::Catalyst
include inc/Module/Install/Makefile.pm
Please run "make catalyst_par" to create the PAR package!
*** Module::Install::Catalyst finished.
include inc/Module/Install/Scripts.pm
include inc/Module/Install/AutoInstall.pm
include inc/Module/Install/Include.pm
include inc/Module/AutoInstall.pm
*** Module::AutoInstall version 1.03
*** Checking for Perl dependencies...
[Core Features]
- Test::More ...loaded. (0.94 >= 0.88)
- Catalyst::Runtime ...loaded. (5.80021 >= 5.80021)
- Catalyst::Plugin::ConfigLoader ...loaded. (0.23)
- Catalyst::Plugin::Static::Simple ...loaded. (0.29)
- Catalyst::Action::RenderView ...loaded. (0.14)
- Moose ...loaded. (0.99)
- namespace::autoclean ...loaded. (0.09)
- Config::General ...loaded. (2.42)
*** Module::AutoInstall configuration finished.
include inc/Module/Install/WriteAll.pm
include inc/Module/Install/Win32.pm
include inc/Module/Install/Can.pm
include inc/Module/Install/Fetch.pm
Writing Makefile for MyApp
Writing META.yml

Note that it reports that all the required modules are available. If any modules are missing, you may have to install them using cpan, or by running make followed by make install.
We will discuss what each of these files does, but for now, let's just change to the newly-created MyApp directory (cd MyApp) and run the following command:

$ perl script/myapp_server.pl

This will start up the development web server. You should see some debugging information appear on the console, which is shown as follows:

[debug] Debug messages enabled
[debug] Loaded plugins:
.----------------------------------------------.
| Catalyst::Plugin::ConfigLoader  0.23         |
| Catalyst::Plugin::Static::Simple  0.21      |
'----------------------------------------------'
[debug] Loaded dispatcher "Catalyst::Dispatcher"
[debug] Loaded engine "Catalyst::Engine::HTTP"
[debug] Found home "/home/jon/projects/book/chapter2/MyApp"
[debug] Loaded Config "/home/jon/projects/book/chapter2/MyApp/myapp.conf"
[debug] Loaded components:
.-------------------------------+----------.
| Class                         | Type     |
+-------------------------------+----------+
| MyApp::Controller::Root       | instance |
'-------------------------------+----------'
[debug] Loaded Private actions:
.-----------+--------------------------+---------.
| Private   | Class                    | Method  |
+-----------+--------------------------+---------+
| /default  | MyApp::Controller::Root  | default |
| /end      | MyApp::Controller::Root  | end     |
| /index    | MyApp::Controller::Root  | index   |
'-----------+--------------------------+---------'
[debug] Loaded Path actions:
.-------+-----------.
| Path  | Private   |
+-------+-----------+
| /     | /default  |
| /     | /index    |
'-------+-----------'
[info] MyApp powered by Catalyst 5.80004
You can connect to your server at http://localhost:3000

This debugging information contains a summary of the plugins, Models, Views, and Controllers that your application uses, in addition to a map of URLs to actions. As we haven't added anything to the application yet, this isn't particularly helpful, but it will become helpful as we add features. To see what your application looks like in a browser, simply browse to http://localhost:3000. You should see the standard Catalyst welcome page.

Let's put the application aside for a moment and see the usage of all the files that were created. Before we modify MyApp, let's take a look at how a Catalyst application is structured on disk. In the root directory of your application, there are some support files. If you're familiar with CPAN modules, you'll be at home with Catalyst: a Catalyst application is structured in exactly the same way (and can be uploaded to the CPAN unmodified, if desired). This article refers to MyApp as your application's name, so if you use something else, be sure to substitute properly.

Latest helper scripts

Catalyst 5.8 was ported to Moose, and the helper scripts for Catalyst were upgraded much later. Therefore, it is necessary to check that you have the latest helper scripts. We will discuss helper scripts later. For now, catalyst.pl is a helper script, and if you're using an updated one, the lib/MyApp.pm file (or lib/whateverappname.pm) will have the following line:

use Moose;

If you don't see this line in your application package in the lib directory, you will have to update the helper scripts by executing the following command:

cpan Catalyst::Helper

Files in the MyApp directory

The MyApp directory contains the following files:

Makefile.PL: This script generates a Makefile to build, test, and install your application.
It can also contain a list of your application's CPAN dependencies and install them automatically. To run Makefile.PL and generate a Makefile, simply type perl Makefile.PL. After that, you can run make to build the application, make test to test the application (you can try this right now, as some sample tests have already been created), make install to install the application, and so on. For more details, see the Module::Install documentation. It's important that you don't delete this file; Catalyst looks for it to determine where the root of your application is.

Changes: This is simply a free-form text file where you can document changes to your application. It's not required, but it can be helpful to end users or other developers working on your application, especially if you're writing an open source application.

README: This is just a text file with information on your application. If you're not going to distribute your application, you don't need to keep it around.

myapp.conf: This is your application's main configuration file, which is loaded when you start your application. You can specify configuration directly inside your application, but this file makes it easier to tweak settings without worrying about breaking your code. myapp.conf is in Apache-style syntax, but if you rename the file to myapp.pl, you can write it in Perl (or myapp.yml for YAML format; see the Config::Any manual for a complete list). The name of this file is based on your application's name: everything is converted to lowercase, double colons are replaced with underscores, and the .conf extension is appended.

Files in the lib directory

The heart of your application lives in the lib directory. This directory contains a file called MyApp.pm, which defines the namespace and inheritance necessary to make this a Catalyst application. It also contains the list of plugins to load and application-specific configuration. Such configuration can also be defined in the myapp.conf file mentioned previously; however, if the same setting appears in both files, the one defined here takes precedence. Inside the lib directory, there are three key directories, namely MyApp/Controller, MyApp/Model, and MyApp/View, from which Catalyst loads the Controllers, Models, and Views respectively.

Modeling Relationships with GORM

Packt
09 Jun 2010
6 min read
(For more resources on Groovy DSL, see here.)

Storing and retrieving simple objects is all very well, but the real power of GORM is that it allows us to model the relationships between objects, as we will now see. The main types of relationships we want to model are associations, where one object has an associated relationship with another (for example, Customer and Account); composition relationships, where we want to build an object from sub-components; and inheritance, where we want to model similar objects by describing their common properties in a base class.

Associations

Every business system involves some sort of association between the main business objects. Relationships between objects can be one-to-one, one-to-many, or many-to-many. Relationships may also imply ownership, where one object has relevance only in relation to a parent object. If we model our domain directly in the database, we need to build and manage tables, and make associations between the tables by using foreign keys. For complex relationships, including many-to-many relationships, we may need to build special tables whose sole function is to hold the foreign keys needed to track the relationships between objects. Using GORM, we can model all of the various associations that we need to establish between objects directly within the GORM class definitions. GORM takes care of all the complex mappings to tables and foreign keys through a Hibernate persistence layer.

One-to-one

The simplest association we need to model in GORM is a one-to-one association. Suppose our customer can have a single address; we would create a new Address domain class using the grails create-domain-class command, as before:

class Address {
    String street
    String city
    static constraints = {
    }
}

To create the simplest one-to-one relationship with Customer, we just add an Address field to the Customer class:

class Customer {
    String firstName
    String lastName
    Address address
    static constraints = {
    }
}

When we rerun the Grails application, GORM will recreate a new address table. It will also recognize the address field of Customer as an association with the Address class, and create a foreign key relationship between the customer and address tables accordingly. This is a one-directional relationship: we are saying that a Customer "has an" Address, but an Address does not necessarily "have a" Customer. We can model a bi-directional association by simply adding a Customer field to the Address. This will then be reflected in the relational model by GORM adding a customer_id field to the address table:

class Address {
    String street
    String city
    Customer customer
    static constraints = {
    }
}

mysql> describe address;
+-------------+--------------+------+-----+---------+----------------+
| Field       | Type         | Null | Key | Default | Extra          |
+-------------+--------------+------+-----+---------+----------------+
| id          | bigint(20)   | NO   | PRI | NULL    | auto_increment |
| version     | bigint(20)   | NO   |     |         |                |
| city        | varchar(255) | NO   |     |         |                |
| customer_id | bigint(20)   | YES  | MUL | NULL    |                |
| street      | varchar(255) | NO   |     |         |                |
+-------------+--------------+------+-----+---------+----------------+
5 rows in set (0.01 sec)

mysql>

These basic one-to-one associations can be inferred by GORM just by interrogating the fields in each domain class via reflection and the Groovy metaclasses. To denote ownership in a relationship, GORM uses an optional static field applied to a domain class, called belongsTo.
Suppose we add an Identity class to retain the login identity of a customer in the application. We would then use:

class Customer {
    String firstName
    String lastName
    Identity ident
}

class Address {
    String street
    String city
}

class Identity {
    String email
    String password
    static belongsTo = Customer
}

Classes are first-class citizens in the Groovy language. When we declare static belongsTo = Customer, what we are actually doing is storing a static instance of a java.lang.Class object for the Customer class in the belongsTo field. Grails can interrogate this static field at load time to infer the ownership relation between Identity and Customer.

Here we have three classes: Customer, Address, and Identity. Customer has a one-to-one association with both Address and Identity through the address and ident fields. However, the ident field is "owned" by Customer, as indicated in the belongsTo setting. What this means is that saves, updates, and deletes will be cascaded to identity but not to address, as we can see below. The addr object needs to be saved and deleted independently of Customer, but id is automatically saved and deleted in sync with Customer.

def addr = new Address(street:"1 Rock Road", city:"Bedrock")
def id = new Identity(email:"email", password:"password")
def fred = new Customer(firstName:"Fred", lastName:"Flintstone",
                        address:addr, ident:id)

addr.save(flush:true)
assert Customer.list().size == 0
assert Address.list().size == 1
assert Identity.list().size == 0

fred.save(flush:true)
assert Customer.list().size == 1
assert Address.list().size == 1
assert Identity.list().size == 1

fred.delete(flush:true)
assert Customer.list().size == 0
assert Address.list().size == 1
assert Identity.list().size == 0

addr.delete(flush:true)
assert Customer.list().size == 0
assert Address.list().size == 0
assert Identity.list().size == 0

Constraints

You will have noticed that every domain class produced by the grails create-domain-class command contains an empty static closure, constraints. We can use this closure to set the constraints on any field in our model. Here we apply constraints to the email and password fields of Identity. We want the email field to be unique, not blank, and not nullable. The password field should be 6 to 200 characters long, not blank, and not nullable.

class Identity {
    String email
    String password
    static constraints = {
        email(unique: true, blank: false, nullable: false)
        password(blank: false, nullable: false, size: 6..200)
    }
}

From our knowledge of builders and the markup pattern, we can see that GORM could be using a similar strategy here to apply constraints to the domain class. It looks like a pretended method is provided for each field in the class that accepts a map as an argument. The map entries are interpreted as constraints to apply to the model field.

The Builder pattern turns out to be a good guess as to how GORM is implementing this. GORM actually implements constraints through a builder class called ConstrainedPropertyBuilder. The closure that gets assigned to constraints is in fact some markup style closure code for this builder. Before executing the constraints closure, GORM sets an instance of ConstrainedPropertyBuilder to be the delegate for the closure. We are more accustomed to seeing builder code where the Builder instance is visible.

def builder = new ConstrainedPropertyBuilder()
builder.constraints {
}

Setting the builder as a delegate of any closure allows us to execute the closure as if it was coded in the above style.
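To see roughly how this delegation trick works, here is a minimal sketch; the builder class and its constraints map are invented for illustration and are far simpler than GORM's real ConstrainedPropertyBuilder. It captures the pretended method calls through methodMissing:

class ConstraintsCapturingBuilder {
    // Collects field name -> constraints map pairs as the closure runs
    Map constraints = [:]

    // Every pretended method call lands here: the method name is the field
    // being constrained, and the Map argument holds its constraint settings
    def methodMissing(String name, args) {
        if (args && args[0] instanceof Map) {
            constraints[name] = args[0]
        }
    }
}

def constraintsClosure = {
    email(unique: true, blank: false, nullable: false)
    password(blank: false, nullable: false, size: 6..200)
}

def builder = new ConstraintsCapturingBuilder()
constraintsClosure.delegate = builder
constraintsClosure()

assert builder.constraints.email.unique == true
assert builder.constraints['password']['size'] == (6..200)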
The constraints closure can be run at any time by Grails, and as it executes the ConstrainedPropertyBuilder, it will build a HashMap of the constraints it encounters for each field. We can illustrate the same technique by using MarkupBuilder and NodeBuilder. The Markup class in the following code snippet just declares a static closure named markup. Later on we can use this closure with whatever builder we want, by setting the delegate of the markup to the builder that we would like to use.

class Markup {
    static markup = {
        customers {
            customer(id:1001) {
                name(firstName:"Fred", surname:"Flintstone")
                address(street:"1 Rock Road", city:"Bedrock")
            }
            customer(id:1002) {
                name(firstName:"Barney", surname:"Rubble")
                address(street:"2 Rock Road", city:"Bedrock")
            }
        }
    }
}

Markup.markup.setDelegate(new groovy.xml.MarkupBuilder())
Markup.markup() // Outputs xml

Markup.markup.setDelegate(new groovy.util.NodeBuilder())
def nodes = Markup.markup() // builds a node tree
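As a short follow-up (an illustrative sketch; the assertions assume the node tree produced by the Markup closure above), the nodes returned by NodeBuilder can be navigated with GPath expressions:

// The root node is the customers element; child elements and attributes
// are reachable with property-style and @attribute-style access
assert nodes.customer.size() == 2
assert nodes.customer[0].'@id' == 1001
assert nodes.customer[1].address[0].'@city' == "Bedrock"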

The Grails Object Relational Mapping (GORM)

Packt
08 Jun 2010
5 min read
(For more resources on Groovy DSL, see here.)

The Grails framework is an open source web application framework built for the Groovy language. Grails not only leverages Hibernate under the covers as its persistence layer, but also implements its own Object Relational Mapping layer for Groovy, known as GORM. With GORM, we can take a POGO class and decorate it with DSL-like settings in order to control how it is persisted.

Grails programmers use GORM classes as a mini language for describing the persistent objects in their application. In this section, we will do a whistle-stop tour of the features of Grails. This won't be a tutorial on building Grails applications, as the subject is too big to be covered here. Our main focus will be on how GORM implements its object model in the domain classes.

Grails quick start

Before we proceed, we need to install Grails and get a basic app installation up and running. The Grails download and installation instructions can be found at http://www.grails.org/Installation. Once it has been installed, and with the Grails binaries in your path, navigate to a workspace directory and issue the following command:

grails create-app GroovyDSL

This builds a Grails application tree called GroovyDSL under your current workspace directory. If we now navigate to this directory, we can launch the Grails app. By default, the app will display a welcome page at http://localhost:8080/GroovyDSL/.

cd GroovyDSL
grails run-app

The grails-app directory

The GroovyDSL application that we built earlier has a grails-app subdirectory, which is where the application source files for our application will reside. We only need to concern ourselves with the grails-app/domain directory for this discussion, but it's worth understanding a little about some of the other important directories.

grails-app/conf: This is where the Grails configuration files reside.
grails-app/controllers: Grails uses a Model View Controller (MVC) architecture. The controllers directory will contain the Groovy controller code for our UIs.
grails-app/domain: This is where Grails stores the GORM model classes of the application.
grails-app/views: This is where the Groovy Server Pages (GSPs), the Grails equivalent of JSPs, are stored.

Grails has a number of shortcut commands that allow us to quickly build out the objects for our model. As we progress through this section, we will take a look back at these directories to see what files have been generated in them for us. In this section, we will be taking a whistle-stop tour through GORM. You might like to dig deeper into both GORM and Grails yourself. You can find further online documentation for GORM at http://www.grails.org/GORM.

DataSource configuration

Out of the box, Grails is configured to use an embedded HSQL in-memory database. This is useful as a means of getting up and running quickly, and all of the example code will work perfectly well with the default configuration. Having an in-memory database is helpful for testing because we always start with a clean slate. However, for the purpose of this section, it's also useful for us to have a proper database instance to peek into, in order to see how GORM maps Groovy objects into tables. We will configure our Grails application to persist in a MySQL database instance. Grails allows us to have separate configuration environments for development, testing, and production.
We will configure our development environment to point to a MySQL instance, but we can leave the production and testing environments as they are. First of all, we need to create a database by using the mysqladmin command. This command will create a database called groovydsl, which is owned by the MySQL root user.

mysqladmin -u root create groovydsl

Database configuration in Grails is done by editing the DataSource.groovy source file in grails-app/conf. We are interested in the environments section of this file.

environments {
    development {
        dataSource {
            dbCreate = "create-drop"
            url = "jdbc:mysql://localhost/groovydsl"
            driverClassName = "com.mysql.jdbc.Driver"
            username = "root"
            password = ""
        }
    }
    test {
        dataSource {
            dbCreate = "create-drop"
            url = "jdbc:hsqldb:mem:testDb"
        }
    }
    production {
        dataSource {
            dbCreate = "update"
            url = "jdbc:hsqldb:mem:testDb"
        }
    }
}

The first interesting thing to note is that this is a mini Groovy DSL for describing data sources. In the preceding code, we have edited the development dataSource entry to point to the MySQL groovydsl database that we created. In early versions of Grails, there were three separate DataSource files that needed to be configured for each environment, for example, DevelopmentDataSource.groovy. The equivalent DevelopmentDataSource.groovy file would be as follows:

class DevelopmentDataSource {
    boolean pooling = true
    String dbCreate = "create-drop"
    String url = "jdbc:mysql://localhost/groovydsl"
    String driverClassName = "com.mysql.jdbc.Driver"
    String username = "root"
    String password = ""
}

The dbCreate field tells GORM what it should do with tables in the database on startup. Setting this to create-drop will tell GORM to drop a table if it exists already, and create a new table, each time it runs. This will keep the database tables in sync with our GORM objects. You can also set dbCreate to update or create.

DataSource.groovy is a handy little DSL for configuring the GORM database connections. Grails uses a utility class, groovy.util.ConfigSlurper, for this DSL. The ConfigSlurper class allows us to easily parse a structured configuration file and convert it into a java.util.Properties object if we wish. Alternatively, we can navigate the ConfigObject returned by using dot notation. We can use the ConfigSlurper to open and navigate DataSource.groovy as shown in the next code snippet. ConfigSlurper has a built-in ability to partition the configuration by environment. If we construct the ConfigSlurper for a particular environment, it will only load the settings appropriate to that environment.

def development = new ConfigSlurper("development").parse(new File('DataSource.groovy').toURL())
def production = new ConfigSlurper("production").parse(new File('DataSource.groovy').toURL())

assert development.dataSource.dbCreate == "create-drop"
assert production.dataSource.dbCreate == "update"

def props = development.toProperties()
assert props["dataSource.dbCreate"] == "create-drop"
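With the development environment pointing at MySQL, a quick end-to-end check is possible from a Grails console or an integration test. This is a minimal sketch, assuming a Customer domain class with firstName and lastName properties like the one used in the related GORM articles:

// Save one instance, then confirm GORM persisted it. With dbCreate set to
// "create-drop", the customer table in the groovydsl database is created
// automatically at startup, so we can inspect it in MySQL afterwards.
def fred = new Customer(firstName: "Fred", lastName: "Flintstone")
fred.save(flush: true)
assert Customer.count() == 1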

Working with JRockit Runtime Analyzer - A Sequel

Packt
03 Jun 2010
7 min read
Code

The Code tab group contains information from the code generator and the method sampler. It consists of three tabs: the Overview, Hot Methods, and Optimizations tabs.

Overview

This tab aggregates information from the code generator with sample information from the code optimizer. This allows us to see which methods the Java program spends the most time executing. Again, this information is available virtually "for free", as the code generation system needs it anyway.

For CPU-bound applications, this tab is a good place to start looking for opportunities to optimize your application code. By CPU-bound, we mean an application for which the CPU is the limiting factor; with a faster CPU, the application would have a higher throughput.

In the first section, the number of exceptions thrown per second is shown. This number depends both on the hardware and on the application; faster hardware may execute an application more quickly, and consequently throw more exceptions. However, a higher value is always worse than a lower one on identical setups. Recall that exceptions should be just that: rare corner cases. As we have explained, the JVM typically gambles that they aren't occurring too frequently. If an application throws hundreds of thousands of exceptions per second, you should investigate why. Someone may be using exceptions for control flow, or there may be a configuration error. Either way, performance will suffer.

In JRockit Mission Control 3.1, the recording will only provide information about how many exceptions were thrown. The only way to find out where the exceptions originated is unfortunately by changing the verbosity of the log.

An overview of where the JVM spends most of the time executing Java code can be found in the Hot Packages and Hot Classes sections. The only difference between them is the way the sample data from the JVM code optimizer is aggregated. In Hot Packages, hot executing code is sorted on a per-package basis and in Hot Classes on a per-class basis. For more fine-grained information, use the Hot Methods tab. As shown in the example screenshot, most of the time is spent executing code in the weblogic.servlet.internal package. There is also a fair amount of exceptions being thrown.

Hot Methods

This tab provides a detailed view of the information provided by the JVM code optimizer. If the objective is to find a good candidate method for optimizing the application, this is the place to look. If a lot of the method samples are from one particular method, and a lot of the method traces through that method share the same origin, much can potentially be gained by either manually optimizing that method or by reducing the amount of calls along that call chain.

In the following example, much of the time is spent in the method com.bea.wlrt.adapter.defaultprovider.internal.CSVPacketReceiver.parseL2Packet(). It seems likely that the best way to improve the performance of this particular application would be to optimize a method internal to the application container (WebLogic Event Server) and not the code in the application itself, running inside the container. This illustrates both the power of the JRockit Mission Control tools and a dilemma that the resulting analysis may reveal: the answers provided sometimes require solutions beyond your immediate control.

Sometimes, the information provided may cause us to reconsider the way we use data structures. In the next example, the program frequently checks if an object is in a java.util.LinkedList; the cost of that choice is illustrated by the short sketch below.
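This is a minimal, illustrative Groovy sketch (the collection size, iteration count, and timing helper are invented for this example, and are not part of the original recording) comparing membership checks on the two structures:

// contains() on a LinkedList walks the list (O(n)); on a HashSet it is an
// average-case O(1) hash lookup. Timings are indicative only.
def n = 100000
def list = new LinkedList((0..<n).toList())
def set = new HashSet(list)

def millis = { Closure work ->
    def start = System.nanoTime()
    work()
    (System.nanoTime() - start) / 1000000
}

def listTime = millis { 1000.times { list.contains(n - 1) } }
def setTime = millis { 1000.times { set.contains(n - 1) } }
println "LinkedList.contains: ${listTime} ms"
println "HashSet.contains:    ${setTime} ms"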
Checking whether a LinkedList contains an element is a rather slow operation that is proportional to the size of the list (time complexity O(n)), as it potentially involves traversing the entire list, looking for the element. Changing to another data structure, such as a HashSet, would most certainly speed up the check, making the time complexity constant (O(1)) on average, given that the hash function is good enough and the set large enough.

Optimizations

This tab shows various statistics from the JIT compiler. The information in this tab is mostly of interest when hunting down optimization-related bugs in JRockit. It shows how much time was spent doing optimizations as well as how much time was spent JIT-compiling code at the beginning and at the end of the recording. For each method optimized during the recording, native code size before and after optimization is shown, as well as how long it took to optimize the particular method.

Thread/Locks

The Thread/Locks tab group contains tabs that visualize thread- and lock-related data. There are five such tabs in JRA: the Overview, Threads, Java Locks, JVM Locks, and Thread Dumps tabs.

Overview

The Overview tab shows fundamental thread and hardware-related information, such as the number of hardware threads available on the system and the number of context switches per second.

A dual-core CPU has two hardware threads, and a hyperthreaded core also counts as two hardware threads. That is, a dual-core CPU with hyperthreading will be displayed as having four hardware threads.

A high amount of context switches per second may not be a real problem, but better synchronization behavior may lead to better total throughput in the system.

There is a CPU graph showing both the total CPU load on the system, as well as the CPU load generated by the JVM. A saturated CPU is usually a good thing: you are fully utilizing the hardware on which you spent a lot of money! As previously mentioned, in some CPU-bound applications, for example batch jobs, it is normally a good thing for the system to be completely saturated during the run. However, for a standard server-side application it is probably more beneficial if the system is able to handle some extra load in addition to the expected one.

The hardware provisioning problem is not simple, but normally server-side systems should have some spare computational power for when things get hairy. This is usually referred to as overprovisioning, and has traditionally just involved buying faster hardware. Virtualization has given us exciting new ways to handle the provisioning problem.

Threads

This tab shows a table where each row corresponds to a thread. The tab has more to offer than first meets the eye. By default, only the start time, the thread duration, and the Java thread ID are shown for each thread. More columns can be made visible by changing the table properties. This can be done either by clicking on the Table Settings icon, or by using the context menu in the table.

As can be seen in the example screenshot, information such as the thread group that the thread belongs to, allocation-related information, and the platform thread ID can also be displayed. The platform thread ID is the ID assigned to the thread by the operating system, in case we are working with native threads. This information can be useful if you are using operating system-specific tools together with JRA.

Java Locks

This tab displays information on how Java locks have been used during the recording. The information is aggregated per type (class) of monitor object.
This tab is normally empty. You need to start JRockit with the system property jrockit.lockprofiling set to true for the lock profiling information to be recorded. This is because lock profiling may cause anything from a small to a considerable overhead, especially if there is a lot of synchronization.

With recent changes to the JRockit thread and locking model, it would be possible to dynamically enable lock profiling. This is unfortunately not the case yet, not even in JRockit Flight Recorder. For R28, the system property jrockit.lockprofiling has been deprecated and replaced with the flag -XX:UseLockProfiling.

JVM Locks

This tab contains information on JVM internal native locks. This is normally useful for the JRockit JVM developers and for JRockit support. An example of a native lock would be the code buffer lock that the JVM acquires in order to emit compiled methods into a native code buffer. This is done to ensure that no other code generation threads interfere with that particular code emission.

Thread Dumps

The JRA recordings normally contain thread dumps from the beginning and the end of the recording. By changing the Thread dump interval parameter in the JRA recording wizard, more thread dumps can be made available at regular intervals throughout the recording.
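For comparison, similar thread-dump information can be captured programmatically from inside any running JVM using the standard java.lang.Thread API. This is a minimal sketch and is not part of JRA itself:

// Print a stack trace for every live thread, similar in spirit to the
// thread dumps that JRA records at intervals during a recording.
Thread.getAllStackTraces().each { thread, stack ->
    println "Thread: ${thread.name} (${thread.state})"
    stack.each { println "    at ${it}" }
}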

Working with JRockit Runtime Analyzer

Packt
03 Jun 2010
9 min read
The JRockit Runtime Analyzer, or JRA for short, is a JRockit-specific profiling tool that provides information about both the JRockit runtime and the application running in JRockit. JRA was the main profiling tool for JRockit R27 and earlier, but has been superseded in later versions by the JRockit Flight Recorder. Because of its extremely low overhead, JRA is suitable for use in production. This article is mainly targeted at R27.x/3.x versions of JRockit and Mission Control.

The need for feedback

In order to make JRockit an industry-leading JVM, there has been a great need for customer collaboration. As the focus for JRockit has consistently been on performance and scalability in server-side applications, the closest collaboration has been with customers with large server installations. An example is the financial industry.

The JRockit Runtime Analyzer was originally born from the need to gather profiling information on how well JRockit performed at customer sites. One can easily understand that customers were rather reluctant to send us, for example, their latest proprietary trading applications to play with in our labs. And, of course, allowing us to poke around in a customer's mission critical application in production was completely out of the question. Some of these applications shuffle around billions of dollars per week. We found ourselves in a situation where we needed a tool to gather as much information as possible on how JRockit, and the application running on JRockit, behaved together; both to find opportunities to improve JRockit and to find erratic behavior in the customer application.

This was a bit of a challenge, as we needed to get high quality data. If the information was not accurate, we would not know how to improve JRockit in the areas most needed by customers, or perhaps at all. At the same time, we needed to keep the overhead down to a minimum. If the profiling itself incurred significant overhead, we would no longer get a true representation of the system. Also, with anything but near-zero overhead, the customer would not let us perform recordings on their mission critical systems in production.

JRA was invented as a method of recording information in a way that the customer could feel confident with, while still providing us with the data needed to improve JRockit. The tool was eventually widely used within our support organization, both to diagnose problems and as a tuning companion for JRockit.

In the beginning, a simple XML format was used for our runtime recordings. A human-readable format made it simple to debug, and the customer could easily see what data was being recorded. Later, the format was upgraded to include data from a new recording engine for latency-related data. When the latency data came along, the data format for JRA was split into two parts: the human-readable XML and a binary file containing the latency events. The latency data was put into JRockit internal memory buffers during the recording, and to avoid introducing unnecessary latencies and performance penalties that would surely be incurred by translating the buffers to XML, it was decided that the least intrusive way was to simply dump the buffers to disk.

To summarize, recordings come in two different flavors, having either the .jra extension (recordings prior to JRockit R28/JRockit Mission Control 4.0) or the .jfr (JRockit Flight Recorder) extension (R28 or later).
Prior to the R28 version of JRockit, the recording files mainly consisted of XML without a coherent data model. As of R28, the recording files are binaries where all data adheres to an event model, making it much easier to analyze the data. To open a JRA recording, JRockit Mission Control version 3.x must be used. To open a Flight Recorder recording, JRockit Mission Control version 4.0 or later must be used.

Recording

The recording engine that starts and stops recordings can be controlled in several different ways:

By using the JRCMD command-line tool.
By using the JVM command-line parameters. For more information on this, see the -XXjra parameter in the JRockit documentation.
From within the JRA GUI in JRockit Mission Control.

The easiest way to control recordings is to use the JRA/JFR wizard from within the JRockit Mission Control GUI. Simply select the JVM on which to perform a JRA recording in the JVM Browser and click on the JRA button in the JVM Browser toolbar. You can also click on Start JRA Recording from the context menu. Usually, one of the pre-defined templates will do just fine, but under special circumstances it may be necessary to adjust them. The pre-defined templates in JRockit Mission Control 3.x are:

Full Recording: This is the standard use case. By default, it is configured to do a five minute recording that contains most data of interest.

Minimal Overhead Recording: This template can be used for very latency-sensitive applications. It will, for example, not record heap statistics, as the gathering of heap statistics will, in effect, cause an extra garbage collection at the beginning and at the end of the recording.

Real Time Recording: This template is useful when hunting latency-related problems, for instance when tuning a system that is running on JRockit Real Time. This template provides an additional text field for setting the latency threshold. The latency threshold is explained later in the article in the section on the latency analyzer. The threshold is by default lowered to 5 milliseconds for this type of recording, from the default 20 milliseconds, and the default recording time is longer.

Classic Recording: This resembles a classic JRA recording from earlier versions of Mission Control. Most notably, it will not contain any latency data. Use this template with JRockit versions prior to R27.3 or if there is no interest in recording latency data.

All recording templates can be customized by checking the Show advanced options check box. This is usually not needed, but let's go through the options and why you may want to change them:

Enable GC sampling: This option selects whether or not GC-related information should be recorded. It can be turned off if you know that you will not be interested in GC-related information. It is on by default, and it is a good idea to keep it enabled.

Enable method sampling: This option enables or disables method sampling. Method sampling is implemented by using sample data from the JRockit code optimizer. If profiling overhead is a concern (it is usually very low, but still), it is usually a good idea to use the Method sample interval option to control how much method sampling information to record.

Enable native sampling: This option determines whether or not to attempt to sample time spent executing native code as a part of the method sampling. This feature is disabled by default, as it is mostly used by JRockit developers and support. Most Java developers probably do fine without it.
Hardware method sampling: On some hardware architectures, JRockit can make use of special hardware counters in the CPU to provide higher resolution for the method sampling. This option only makes sense on such architectures.

Stack traces: Use this option to not only get sample counts but also stack traces from method samples. If this is disabled, no call traces are available for sample points in the methods that show up in the Hot Methods list.

Trace depth: This setting determines how many stack frames to retrieve for each stack trace. For JRockit Mission Control versions prior to 4.0, this defaulted to the rather limited depth of 16. For applications running in application containers or using large frameworks, this is usually way too low to generate data from which any useful conclusions can be drawn. A tip, when profiling such an application, would be to bump this to 30 or more.

Method sampling interval: This setting controls how often thread samples should be taken. JRockit will stop a subset of the threads every Method sample interval milliseconds in a round-robin fashion. Only threads executing when the sample is taken will be counted, not blocking threads. Use this to find out where the computational load in an application takes place. See the section Hot Methods for more information.

Thread dumps: When enabled, JRockit will record a thread stack dump at the beginning and the end of the recording. If the Thread dump interval setting is also specified, thread dumps will be recorded at regular intervals for the duration of the recording.

Thread dump interval: This setting controls how often, in seconds, to record the thread stack dumps mentioned earlier.

Latencies: If this setting is enabled, the JRA recording will contain latency data. For more information on latencies, please refer to the section Latency later in this article.

Latency threshold: To limit the amount of data in the recording, it is possible to set a threshold for the minimum latency (duration) required for an event to actually be recorded. This is normally set to 20 milliseconds. It is usually safe to lower this to around 1 millisecond without incurring too much profiling overhead. Less than that and there is a risk that the profiling overhead will become unacceptably high and/or that the file size of the recording becomes unmanageably large. Latency thresholds can be set as low as nanosecond values by changing the unit in the unit combo box.

Enable CPU sampling: When this setting is enabled, JRockit will record the CPU load at regular intervals.

Heap statistics: This setting causes JRockit to do a heap analysis pass at the beginning and at the end of the recording. As heap analysis involves forcing extra garbage collections at these points in order to collect information, it is disabled in the low overhead template.

Delay before starting a recording: This option can be used to schedule the recording to start at a later time. The delay is normally defined in minutes, but the unit combo box can be used to specify the time in a more appropriate unit; everything from seconds to days is supported.

Before starting the recording, a location to which the finished recording is to be downloaded must be specified. Once the JRA recording is started, an editor will open up showing the options with which the recording was started and a progress bar. When the recording is completed, it is downloaded and the editor input is changed to show the contents of the recording.
Analyzing JRA recordings

Analyzing JRA recordings may easily seem like black magic to the uninitiated, so just like we did with the Management Console, we will go through each tab of the JRA editor to explain the information in that particular tab, with examples of when it is useful. Just like in the console, there are several tabs in different tab groups.

Metaprogramming and the Groovy MOP

Packt
31 May 2010
6 min read
(For more resources on Groovy DSL, see here.)

In a nutshell, the term metaprogramming refers to writing code that can dynamically change its behavior at runtime. A Meta-Object Protocol (MOP) refers to the capabilities in a dynamic language that enable metaprogramming. In Groovy, the MOP consists of four distinct capabilities within the language: reflection, metaclasses, categories, and expandos.

The MOP is at the core of what makes Groovy so useful for defining DSLs. The MOP is what allows us to bend the language in different ways in order to meet our needs, by changing the behavior of classes on the fly. This section will guide you through the capabilities of the MOP.

Reflection

To use Java reflection, we first need to access the Class object for any Java object in which we are interested, through its getClass() method. Using the returned Class object, we can query everything from the list of methods or fields of the class to the modifiers that the class was declared with. Below, we see some of the ways that we can access a Class object in Java and the methods we can use to inspect the class at runtime.

import java.lang.reflect.Field;
import java.lang.reflect.Method;

public class Reflection {
    public static void main(String[] args) {
        String s = new String();
        Class sClazz = s.getClass();
        Package _package = sClazz.getPackage();
        System.out.println("Package for String class: ");
        System.out.println("  " + _package.getName());

        Class oClazz = Object.class;
        System.out.println("All methods of Object class:");
        Method[] methods = oClazz.getMethods();
        for (int i = 0; i < methods.length; i++)
            System.out.println("  " + methods[i].getName());

        try {
            Class iClazz = Class.forName("java.lang.Integer");
            Field[] fields = iClazz.getDeclaredFields();
            System.out.println("All fields of Integer class:");
            for (int i = 0; i < fields.length; i++)
                System.out.println("  " + fields[i].getName());
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
        }
    }
}

We can access the Class object from an instance by calling its Object.getClass() method. If we don't have an instance of the class to hand, we can get the Class object by using .class after the class name, for example, String.class. Alternatively, we can call the static Class.forName, passing to it a fully-qualified class name. Class has numerous methods, such as getPackage(), getMethods(), and getDeclaredFields(), that allow us to interrogate the Class object for details about the Java class under inspection. The preceding example will output various details about String, Object, and Integer.

Groovy Reflection shortcuts

Groovy, as we would expect by now, provides shortcuts that let us reflect classes easily. In Groovy, we can shortcut the getClass() method as a property access .class, so we can access the class object in the same way whether we are using the class name or an instance. We can treat the .class as a String, and print it directly without calling Class.getName(), as follows:

def greeting = "Hello"
println greeting.class // prints: class java.lang.String

def clazz = String
println clazz // prints: class java.lang.String

The variable greeting is declared with a dynamic type, but has the type java.lang.String after the "Hello" String is assigned to it. Classes are first-class objects in Groovy, so we can assign String to a variable. When we do this, the object that is assigned is of type java.lang.Class. However, it describes the String class itself, so printing will report java.lang.String. Groovy also provides shortcuts for accessing packages, methods, fields, and just about all other reflection details that we need from a class.
We can access these straight off the class identifier, as follows:

println "Package for String class"
println "  " + String.package
println "All methods of Object class:"
Object.methods.each { println "  " + it }
println "All fields of Integer class:"
Integer.fields.each { println "  " + it }

Incredibly, these six lines of code do all of the same work as the 30 lines in our Java example. If we look at the preceding code, it contains nothing that is more complicated than it needs to be. Referencing String.package to get the Java package of a class is as succinct as you can make it. As usual, String.methods and String.fields return Groovy collections, so we can apply a closure to each element with the each method. What's more, the Groovy version outputs a lot more useful detail about the package, methods, and fields.

When using an instance of an object, we can use the same shortcuts through the class field of the instance.

def greeting = "Hello"
assert greeting.class.package == String.package

Expandos

An Expando is a dynamic representation of a typical Groovy bean. Expandos support typical get and set style bean access, but in addition to this they will accept gets and sets to arbitrary properties. If we try to access a non-existent property, the Expando does not mind: instead of causing an exception, it will return null. If we set a non-existent property, the Expando will add that property and set the value. In order to create an Expando, we instantiate an object of class groovy.util.Expando.

def customer = new Expando()
assert customer.properties == [:]
assert customer.id == null
assert customer.properties == [:]

customer.id = 1001
customer.firstName = "Fred"
customer.surname = "Flintstone"
customer.street = "1 Rock Road"

assert customer.id == 1001
assert customer.properties == [
    id:1001, firstName:'Fred', surname:'Flintstone', street:'1 Rock Road']

customer.properties.each { println it }

The id field of customer is accessible on the Expando shown in the preceding example even when it does not exist as a property of the bean. Once a property has been set, it can be accessed by using the normal field getter: for example, customer.id. Expandos are a useful extension to normal beans where we need to be able to dump arbitrary properties into a bag and we don't want to write a custom class to do so.

A neat trick with Expandos is what happens when we store a closure in a property. As we would expect, an Expando closure property is accessible in the same way as a normal property. However, because it is a closure, we can apply function call syntax to it to invoke the closure. This has the effect of seeming to add a new method on the fly to the Expando.

customer.prettyPrint = {
    println "Customer has following properties"
    customer.properties.each {
        if (it.key != 'prettyPrint')
            println "  " + it.key + ": " + it.value
    }
}

customer.prettyPrint()

Here we appear to be able to add a prettyPrint() method to the customer object, which outputs to the console:

Customer has following properties
  surname: Flintstone
  street: 1 Rock Road
  firstName: Fred
  id: 1001
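Taking this one step further, closure properties make an Expando a handy throwaway stub when a quick, ad hoc object is needed. This is a minimal sketch; the logger object and its properties are invented for illustration:

// Expando accepts a map of initial properties; the log "method" is just a
// closure property, so it can read other properties of the same Expando.
def logger = new Expando(prefix: "DEBUG")
logger.log = { String msg -> println "${logger.prefix}: ${msg}" }

logger.log("starting up") // prints: DEBUG: starting up
logger.prefix = "TRACE"
logger.log("still here")  // prints: TRACE: still here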

3rd International SOA Symposium

Packt
28 May 2010
2 min read
With 80 speaking sessions across 16 tracks, the International SOA and Cloud Symposium will feature the top experts from around the world. The conference will take place on October 5-6, 2010. There will also be a series of SOACP post-conference certification workshops running from October 7-13, 2010, including (for the first time in Europe) the Certified Cloud Computing Specialist Workshop.

The Agenda for 2010 is now online and contains internationally recognized speakers from organizations such as Microsoft, HP, IBM, Oracle, SAP, Amazon, Red Hat, Vordel, Layer7, TIBCO, Logica, SOA Systems, US Department of Defense, and CGI. International experts including Thomas Erl, Dirk Krafzig, Stefan Tilkov, Mark Little, Brian Loesgen, John deVadoss, Nicolai Josuttis, Tony Shan, Toufic Boubez, Paul C. Brown, Clemens Utschig, Satadru Roy, David Chou, and many more will provide new and exclusive coverage of the latest SOA and Cloud Computing topics and innovations.

Themes and Topics

Exploring Modern Service Technologies and Practices:
• Service Governance & Scalability
• Service Architecture & Service Engineering Innovation
• SOA Case Studies & Strategic Planning
• REST Service Design & RESTful SOA
• Service Security & Policies
• Semantic Services & Patterns
• Service Modelling & BPM

Scaling Your Business Into the Cloud:
• The Latest Cloud Computing Technology
• Building and Working with Cloud-Based Services
• Cloud Computing Business Strategies
• Case Studies & Business Models
• Understanding SOA & Cloud Computing
• Cloud-based Infrastructure & Products
• Semantic Web and the Cloud

For further information, see http://soasymposium.com/agenda2010.php.

Aside from the opportunity to network with 500 practitioners and 80 experts, the Symposium will feature the exclusive Pattern Review Committee, multiple book launches, and galleys. Following the success of previous years' International SOA Symposia in Amsterdam and Rotterdam, this edition will be held in Berlin. Every SOA Symposium and Cloud Symposium event is dedicated to providing valuable content specifically for IT practitioners.

Register by August 31st to receive an early bird discount: 10% off, paying only €981 excl. VAT for two conference days. Register now at www.soasymposium.com or www.cloudsymposium.com.

Different types of Mapping in NHibernate 2

Packt
19 May 2010
8 min read
(Read more interesting articles on NHibernate 2 Beginner's Guide here.)

What is mapping?

Simply put, we need to tell NHibernate about the database that we created and how the fields and tables match up to the properties and classes we created. We need to tell NHibernate how we will be assigning Primary Keys, the data types that we will be using to represent data, what variables we will store them in, and so on. You could say this is one of the most important exercises we will perform in our pursuit of NHibernate. Don't worry though, it's pretty easy.

Types of mapping

There are two basic ways to map data for NHibernate: the traditional XML mapping in an hbm.xml file, or the newer "Fluent NHibernate" style, which is similar to the fluent interface pattern introduced with the .NET 3.5 framework (see http://www.martinfowler.com/bliki/FluentInterface.html). In both cases, we will create a document for each of our tables. We will map each field from our database to the property we created to display it in our class.

XML mapping

XML mapping is undoubtedly the most common method of mapping entities with NHibernate. Basically, we create an XML document that contains all of the information about our classes and how it maps to our database tables. These documents have several advantages:

They are text files, so they are small
They are very readable
They use a very small number of tags to describe the data

The two biggest complaints about XML mapping are the verbosity of the text and that it is not compiled. We can handle some of the verbosity by limiting the amount of data we put into the document. There are a number of optional parameters that do not absolutely need to be mapped, but that provide additional information about the database that can be included. We'll discuss more about that in the Properties section.

You should copy the nhibernate-mapping.xsd and nhibernate-configuration.xsd files from the NHibernate ZIP file into your Visual Studio schemas directory (that is, C:\Program Files\Microsoft Visual Studio 9.0\Common7\Packages\schemas\xml). This will give you IntelliSense and validation in the .NET XML editor when editing NHibernate mapping and configuration files.

Without compilation, when the database changes or the classes change, it's difficult to detect mismatches until the application is actually executed and NHibernate tries to reconcile the database structure with the mapping classes. While this can be an issue, there are a number of ways to mitigate it, such as careful monitoring of changes, writing tests for our persistence layer, using a Visual Studio plugin, or using a code generation tool.

Getting started

The XML mapping document begins like any XML document, with an XML declaration. No magic here, just a simple xml tag and two attributes, version and encoding.

<?xml version="1.0" encoding="utf-8" ?>

The next tag we are going to see in our document is the hibernate-mapping tag. This tag declares the mapping schema and identifies the namespace and assembly of the classes we are mapping:

<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2"
                   namespace="BasicWebApplication.Common.DataObjects"
                   assembly="BasicWebApplication">
</hibernate-mapping>

These three attributes within the hibernate-mapping tag make up the basic XML mapping document.

Classes

The next tag we need to define in our document is the class tag. This is a KEY tag, because it tells NHibernate two things: the class this mapping document is meant to represent, and which table in the database that class should map to.
The class tag has two attributes we need to be concerned with, name and table:

<class name="" table="">
</class>

The name attribute contains the fully-qualified POCO (or VB.NET) class that we want to map to, including the assembly name. While this can be specified in the standard fully-qualified dotted class name, a comma, and then the assembly name, the preferred method is to define the namespace and assembly in the <hibernate-mapping> tag, as shown in the previous code.

The table attribute specifies the table in the database that this mapping file represents. It can be as simple as the name of the table Address or as complex as needed to adequately describe the table. If you need to include the owner of the table, such as dbo.Address, then you can add the schema attribute as follows:

schema="dbo"

If we were going to map the Address class in our application to the Address table in the database, then we would use a tag as follows:

<class name="Address" table="Address">
</class>

Technically, as the table name is the same as our class name, we could leave out the table attribute.

Properties

We can map properties from our class to fields in the database using the id tag and the property tag. These tags are for the standard fields in the database, not the Foreign Key fields. We'll get to those in a minute. The id and property tags follow a standard pattern and have a number of optional parameters. They follow the basic format of defining the property on the class that they are mapping to and the data type that is used to represent that data. This will generally look as follows:

<property name="Address1" type="String">
  <column name="Address1" length="255" sql-type="varchar" not-null="true"/>
</property>

This is the fully-verbose method of mapping the properties, and the one I personally use. If something happens to your database, you can re-generate the database from this information. It's also very helpful when you are troubleshooting, because all of the information about the data is right there. Alternatively, you can map the property as follows:

<property name="Address1" />

Both methods will provide the same mapping to NHibernate, but as I stated earlier, the more verbose method gives you a lot more flexibility. One of the optional attributes that I generally use on the id and property tags is the type attribute. With this attribute I can tell NHibernate that I am using a particular type of data to store that information in my class. Adding this data type, our property tag would look as follows:

<property name="Address1" type="String" />

I also like to use the column tag, just to explicitly link the field with the property in the class, but that again is just preference. The previous code is completely adequate.

ID columns

The first property from our class that we want to map is the Id property. This tag has a number of attributes we can optionally set, but the simplest way we can map the Id property is as follows:

<id name="Id">
  <generator class="hilo"/>
</id>

This tells NHibernate that we have a property in our class named Id which maps to a field in the database called Id, and also that we use the hilo method to automatically generate a value for this field. Simple enough!

An optional attribute that I generally use on the id tag is the unsaved-value attribute. This attribute specifies what value should be returned in a new object before it is persisted (saved) to the database.
Adding this attribute, as well as the type attribute we talked about, the code would look as follows:

<id name="Id" type="Int32" unsaved-value="null">
  <generator class="hilo"/>
</id>

As long as our field is named Id in the database, we are good to go. But what if it was named id or address_id? This simply wouldn't handle it. In that case, we would have to add the optional column tag to identify it:

<id name="Id">
  <column name="address_id"/>
  <generator class="hilo"/>
</id>

Now we have mapped our address_id field from the database into a more standard Id property on our class. Some of the additional attributes that are commonly used on the column tag are as follows:

name: Defines the name of the column in the database
length: The length of the field, as defined in the database
sql-type: The database definition of the column type
not-null: Whether or not the database column allows nulls; not-null="true" specifies a required field

Again, these optional attributes simply allow you to further define how your database is created. Some people don't even define the database. They just define the hbm.xml files and use the NHibernate.Tool.hbm2ddl namespace to create a SQL script to do this work!

NHibernate 2: Mapping relationships and Fluent Mapping

Packt
19 May 2010
8 min read
(Read more interesting articles on NHibernate 2 Beginner's Guide here.)

Relationships

Remember all those great relationships we created in our database to relate our tables together? Basically, the primary types of relationships are as follows:

One-to-many (OTM)
Many-to-one (MTO)
One-to-one (OTO)
Many-to-many (MTM)

We won't focus on the OTO relationship because it is really uncommon. In most situations, if there is a need for a one-to-one relationship, it should probably be consolidated into the main table.

One-to-many relationships

The most common type of relationship we will map is a one-to-many (OTM) and the other way around, many-to-one (MTO). If you remember, these are just two different sides of the same relationship, as seen in the following screenshot:

This is a simple one-to-many (OTM) relationship where a Contact can be associated with zero to many OrderHeader records (because the relationship fields allow nulls). Notice that the Foreign Key for the relationship is stored on the "many" side: ShipToContact_Id and BillToContact_Id on the OrderHeader table.

In our mapping files, we can map this relationship from both sides. If you remember, our classes for these objects contain placeholders for each side of this relationship. On the OrderHeader side, we have a Contact object called BillToContact:

private Contact _billToContact;
public Contact BillToContact
{
    get { return _billToContact; }
    set { _billToContact = value; }
}

On the Contact side, we have the inverse relationship mapped. From this vantage point, there could be SEVERAL OrderHeader objects that this Contact object is associated with, so we needed a collection to map it:

private IList<OrderHeader> _billToOrderHeaders;
public IList<OrderHeader> BillToOrderHeaders
{
    get { return _billToOrderHeaders; }
    set { _billToOrderHeaders = value; }
}

As we have mapped this collection in two separate classes, we also need to map it in two separate mapping files. Let's start with the OrderHeader side. As this is the "many" side of the one-to-many relationship, we need to use a many-to-one type to map it. Things to note here are the name and class attributes. name, again, is the property in our class that this field maps to, and class is the "other end" of the Foreign Key relationship, or the Contact type in this case.

<many-to-one name="BillToContact" class="Contact">
  <column name="BillToContact_Id" length="4" sql-type="int" not-null="false"/>
</many-to-one>

Just like before, when we mapped our non-relational fields, the length, sql-type, and not-null attributes are optional. Now that we have the "one" side mapped, we need to map the "many" side. In the contact mapping file, we need to create a bag element to hold all of these OrderHeaders. A bag is the NHibernate way to say that it is an unordered collection allowing duplicated items. We have a name element to reference the class property just like all of our other mapping elements, and a key child element to tell NHibernate which database column this field is meant to represent.

<bag name="BillToOrderHeaders" inverse="true" cascade="all-delete-orphan">
  <key column="BillToContact_Id"/>
  <one-to-many class="BasicWebApplication.Common.DataObjects.OrderHeader, BasicWebApplication"/>
</bag>

If you look at the previous XML code, you will see that the one-to-many tag looks very similar to the many-to-one tag we just created for the other side. That's because this is the inverse side of the relationship.
We even tell NHibernate that the inverse relationship exists by using the inverse attribute on the bag element. The class attribute on this tag is just the name of the class that represents the other side of the relationship. The cascade attribute tells NHibernate how to handle objects when we delete them.

Another attribute we can add to the bag tag is the lazy attribute. This tells NHibernate to use "lazy loading", which means that the record won't be pulled from the database or loaded into memory until you actually use it. This is a huge performance gain because you only get data when you need it, without having to do anything. When I say "get Contact record with Id 14", NHibernate will go get the Contact record, but it won't retrieve the associated BillToOrderHeaders (OrderHeader records) until I reference Contact.BillToOrderHeaders to display or act on those objects in my code. By default, "lazy loading" is turned on, so we only need to specify this tag if we want to turn "lazy loading" off by using lazy="false".

Many-to-many relationships

The other relationship that is used quite often is the many-to-many (MTM) relationship. In the following example, the Contact_Phone table is used to join the Contact and Phone tables. NHibernate is smart enough to manage these MTM relationships for us, and we can "optimize out" the join table from our classes and just let NHibernate take care of it. Just like the one-to-many relationship, we represent the phones on the Contact class with a collection of Phone objects as follows:

private IList<Phone> _phones;
public IList<Phone> Phones
{
    get { return _phones; }
    set { _phones = value; }
}

Mapping the MTM is very similar to the OTM, just a little more complex. We still use a bag and we still have a key. We need to add the table attribute to the bag element to let NHibernate know which table we are really storing the relationship data in. Instead of a one-to-many and a many-to-one attribute, both sides use a many-to-many element (makes sense, it is an MTM relationship, right?). The many-to-many element structure is the same as the one-to-many element, with a class attribute and a column child element to describe the relationship.

<bag name="Phones" table="Contact_Phone" inverse="false" lazy="true" cascade="none">
  <key>
    <column name="Contact_Id" length="4" sql-type="int" not-null="true"/>
  </key>
  <many-to-many class="Phone">
    <column name="Phone_Id" length="4" sql-type="int" not-null="true"/>
  </many-to-many>
</bag>

From the Phone side, it looks remarkably similar, as it's just the opposite view of the same relationship:

<bag name="Contacts" table="Contact_Phone" inverse="false" lazy="true" cascade="none">
  <key>
    <column name="Phone_Id" length="4" sql-type="int" not-null="true"/>
  </key>
  <many-to-many class="Contact">
    <column name="Contact_Id" length="4" sql-type="int" not-null="true"/>
  </many-to-many>
</bag>

Getting started

This should be enough information to get us rolling on the path to becoming NHibernate superstars! Now that we have all of the primary fields mapped, let's map the Foreign Key fields.

Time for action – mapping relationships

If you look at the following database diagram, you will see that there are two relationships that need to be mapped, BillToContact and ShipToContact (represented by BillToContact_Id and ShipToContact_Id in the following screenshot). Let's map these two properties into our hbm.xml files.
Open the OrderHeader.hbm.xml file, which should look something like the following:

<?xml version="1.0" encoding="utf-8" ?>
<hibernate-mapping namespace="Ordering.Data" assembly="Ordering.Data">
  <class name="OrderHeader" table="OrderHeader">
    <id name="Id">
      <column name="Id"/>
      <generator class="hilo"/>
    </id>
    <property name="Number" type="String"/>
    <property name="OrderDate" type="DateTime"/>
    <property name="ItemQty" type="Int32"/>
    <property name="Total" type="Decimal"/>
  </class>
</hibernate-mapping>

After the Total property, just before the end of the class tag (</class>), add a many-to-one element to map the BillToContact to the Contact class.

<many-to-one name="BillToContact" class="Ordering.Data.Contact, Ordering.Data">
  <column name="BillToContact_Id" />
</many-to-one>

Next, open the Contact.hbm.xml file, which should look as follows:

<?xml version="1.0" encoding="utf-8" ?>
<hibernate-mapping namespace="Ordering.Data" assembly="Ordering.Data">
  <class name="Contact" table="Contact">
    <id name="Id">
      <column name="Id"/>
      <generator class="hilo"/>
    </id>
    <property name="FirstName" type="String"/>
    <property name="LastName" type="String"/>
    <property name="Email" type="String"/>
  </class>
</hibernate-mapping>

After the Email property, just before the end of the class tag (</class>), add a one-to-many element to map the BillToOrderHeaders to the OrderHeader class.

<bag name="BillToOrderHeaders" inverse="true" lazy="true" cascade="all-delete-orphan">
  <key column="BillToContact_Id"/>
  <one-to-many class="OrderHeader"/>
</bag>

That's it! You just mapped your first one-to-many property! Your finished Contact.hbm.xml class should look as shown in the following screenshot:

What just happened?

By adding a many-to-one element to OrderHeader and a bag with a one-to-many child element to Contact, we were able to map the relationships to the Contact object, allowing us to use dotted notation to access child properties of our objects within our code. Like the great cartographers before us, we have the knowledge and experience to go forth and map the world!

Have a go hero – fleshing out the rest of our map

Now that you have some experience mapping fields and Foreign Keys from the database, why not have a go at the rest of our database! Start off with the Contact-to-Phone MTM table, and map the rest of the tables to the classes we created earlier, so that we will be ready to actually connect to the database.