
How-To Tutorials - Programming

1081 Articles
Getting Started with Bookshelf Project in Apache Felix

Packt
08 Nov 2010
4 min read
OSGi and Apache Felix 3.0 Beginner's Guide: Build your very own OSGi applications using the flexible and powerful Felix Framework

- Build a completely operational real-life application composed of multiple bundles and a web front end using Felix
- Get acquainted with the OSGi concepts in an easy-to-follow, progressive manner
- Learn everything you need about the Felix Framework and get familiar with Gogo, its command-line shell, to start developing your OSGi applications
- Simplify your OSGi development experience by learning about Felix iPOJO
- A relentlessly practical beginner's guide that walks you through building real-life OSGi applications while showing you the development tools (Maven, Eclipse, and so on) that make the journey more enjoyable

A simple Bookshelf project

The case study we will construct here is a three-tiered, web-based bookshelf application. Each tier covers a functional area of the application. The first tier is the data inventory tier, which is responsible for storing the books and providing management functionality. The second tier, the main bookshelf service, holds the business logic around the bookshelf functionality. The third tier is the user interaction tier. It provides user access to the application through a command-line interface at first, then through a simple servlet, and later through a JSP web application.

This split between the user interface, business logic, and inventory is good practice. It adds flexibility to the design by allowing the upgrade or replacement of each layer's implementation without impacting the others, thus reducing regression testing. Let's look at each of those layers in more detail.

The data inventory tier

For our case study, we need a data inventory layer for storing, searching, and retrieving books. The Book interface defines the read-only book bean and gives the user access to the bean attributes. This interface is used when the Book entry does not require any updates. The MutableBook interface exposes the attribute-setting methods for the book bean. It is used when the caller needs to update the bean attributes. This separation between Book and MutableBook is especially useful when developing a multi-threaded, multi-session implementation of the data inventory repository. It allows us to keep track of changes by monitoring the beans as they change and to notify components of those changes when needed.

We will define a BookInventory interface that abstracts over the repository implementation specifics. In addition to the CRUD functionality, the book inventory interface also offers a factory method for creating new book entries. This factory method gives the caller a mutable book.

What's CRUD? CRUD is short for Create-Retrieve-Update-Delete, the typical functionality set expected from an inventory service:

- Create: Add a new book to the inventory. This operation typically checks the repository for an item with the same identifier (unique reference) and throws an exception on an attempt to add an item that already exists.
- Retrieve: Load a book based on its unique reference, or get a list of references to items that match a set of filter criteria.
- Update: Modify an existing book's properties, based on its unique reference.
- Delete: Remove an existing book from the inventory based on its unique reference.

We'll separate the inventory API definition from its implementation, packaging each of them in its own bundle.
It is recommended that, once you're more comfortable with the process, you write another implementation of those interfaces, one based on permanent storage. The separation between the API and the implementation will allow you to swap one implementation for another when it is ready. We will focus on the mock implementation (outside the scope of this article) and leave it to you to implement other potential flavors of it.
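As a rough illustration of the shape of this API, here is a hypothetical sketch in Java; the method names are assumptions based on the description above, not the book's actual code:

    import java.util.Map;
    import java.util.Set;

    // Hypothetical sketch: read-only view, mutable view, and the inventory with CRUD plus a factory method.
    public interface Book {
        String getIsbn();
        String getTitle();
        String getAuthor();
    }

    interface MutableBook extends Book {
        void setTitle(String title);
        void setAuthor(String author);
    }

    interface BookInventory {
        MutableBook createBook(String isbn);                  // factory method handing back a mutable book
        void storeBook(MutableBook book);                     // Create or Update an entry
        Book loadBook(String isbn);                           // Retrieve by unique reference
        Set<String> findBooks(Map<String, String> criteria);  // Retrieve references matching filter criteria
        void removeBook(String isbn);                         // Delete by unique reference
    }

Keeping these interfaces in an API bundle of their own is what makes the mock and permanent-storage implementations interchangeable.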

Apache OFBiz Entity Engine

Packt
08 Nov 2010
8 min read
Apache OFBiz Cookbook: Over 60 simple but incredibly effective recipes for taking control of OFBiz

- Optimize your OFBiz experience and save hours of frustration with this timesaving collection of practical recipes covering a wide range of OFBiz topics
- Get answers to the most commonly asked OFBiz questions in an easy-to-digest reference style of presentation
- Discover insights into OFBiz design, implementation, and best practices by exploring real-life solutions
- Each recipe describes not only "how" to accomplish a specific task, but also "why" the technique works, to ensure you get the most out of your OFBiz implementation

Introduction

Secure and reliable data storage is the key business driver behind any data management strategy. That OFBiz takes data management seriously, and does not leave all the tedious and error-prone data management tasks to the application developer or the integrator, is evident from the visionary design and implementation of the Entity Engine.

The Entity Engine is a database-agnostic application development and deployment framework seamlessly integrated into the OFBiz project code. It handles all the day-to-day data management tasks necessary to securely and reliably operate an enterprise. These tasks include, but are not limited to, support for:

- Simultaneously connecting to an unlimited number of databases
- Managing an unlimited number of database connection pools
- Overseeing database transactions
- Handling database error conditions

The true power of the Entity Engine is that it provides OFBiz applications with all the tools, utilities, and an Application Programming Interface (API) necessary to easily read and write data to all configured data sources in a consistent and predictable manner, without concern for database connections, the physical location of the data, or the underlying data type.

To best understand how to use the Entity Engine effectively to meet all your data storage needs, a quick review of Relational Database Management Systems (RDBMS) is in order:

- Tables are the basic organizational structure of a relational database. An OFBiz entity is a model of a database table. As a model, an entity describes a table's structure, content format, and any associations the table may have with other tables.
- Database tables are further broken down into one or more columns. Table columns have data type and format characteristics constrained by the underlying RDBMS and assigned to them as part of a table's definition. The entity model describes a mapping of table columns to entity fields.
- Physically, data is stored in tables as one or more rows. A record is a unique instance of the content within a table's row. Users access table records by reading and writing one or more rows as mapped by an entity's model. In OFBiz, records are called entity values.
- Keys are a special type of field. Although there are several types of keys, OFBiz is primarily concerned with primary keys and foreign keys. A table's primary key is a column or group of columns that uniquely identifies a row within a table; its value uniquely identifies that row throughout the entire database. A foreign key is a key used in one table to represent the value of a primary key in a related table. Foreign keys are used to establish unique and referentially correct relationships between one or more tables.
- Relationships are any associations that tables may have with one another.
- Views are "virtual" tables composed of columns from one or more tables in the database. OFBiz has a similar construct, the view-entity, although it differs from the traditional RDBMS definition of a "view".

Note: while this discussion has focused on RDBMS, there is nothing to preclude you from using the Entity Engine in conjunction with any other type of data source. The Entity Engine provides all the tools and utilities necessary to effectively and securely access an unlimited number of databases, regardless of the physical location of the data source.

Changing the default database

Out of the box, OFBiz is integrated with the Apache Derby database system (http://db.apache.org/derby). While Derby is sufficient to handle OFBiz during software development, evaluation, and functional testing, it is not recommended for environments that experience high transaction volumes. In particular, it is not recommended for use in production environments.

Getting ready

Before configuring an external database, ensure the following:

1. Before changing the OFBiz Entity Engine configuration to use a remote data source, you must first create the remote database; the remote database must exist. Note: if you are not going to install the OFBiz schema and/or seed data on the remote database, but rather intend to use it as is, you will not need to create a database. You will, however, need to define entities for each remote database table you wish to access, and assign those entities to one or more entity groups.
2. Add a user/owner for the remote database. OFBiz will access the database as this user. Make sure the user has all the privileges necessary to create and remove database tables.
3. Add a user/owner password (if desired or necessary) to the remote database.
4. Ensure that the IP port the database listens on for remote connections is open and clear of any firewall obstructions (for example, by default, PostgreSQL listens for connections on port 5432).
5. Add the appropriate database driver to the ~framework/entity/lib/jdbc directory. For example, if you are using PostgreSQL version 8.3, download the postgresql-8.3-605.jdbc2.jar driver from the PostgreSQL website (http://jdbc.postgresql.org/download.html).

How to do it...

To configure another external database, follow these steps:

1. Open the Entity Engine's configuration file, located at ~framework/entity/config/entityengine.xml.
2. Within the entityengine.xml file, configure the remote database's usage settings. A suggested method is to take an existing datasource element entry and modify it to reflect the necessary settings for the remote database.
There are examples provided for most of the commonly used databases. For example, to configure a remote PostgreSQL database named myofbiz_db, with a username of ofbiz and a password of ofbiz, edit the localpostnew configuration entry as shown here:

    <datasource name="localpostnew" helper-class="org.ofbiz.entity.datasource.GenericHelperDAO"
        schema-name="public"
        field-type-name="postnew"
        check-on-start="true"
        add-missing-on-start="true"
        use-fk-initially-deferred="false"
        alias-view-columns="false"
        join-style="ansi"
        result-fetch-size="50"
        use-binary-type-for-blob="true">
      <read-data reader-name="seed"/>
      <read-data reader-name="seed-initial"/>
      <read-data reader-name="demo"/>
      <read-data reader-name="ext"/>
      <inline-jdbc jdbc-driver="org.postgresql.Driver"
          jdbc-uri="jdbc:postgresql://127.0.0.1/myofbiz_db"
          jdbc-username="ofbiz"
          jdbc-password="ofbiz"
          isolation-level="ReadCommitted"
          pool-minsize="2"
          pool-maxsize="250"/>
    </datasource>

3. Configure the default delegator for this data source:

    <delegator name="default" entity-model-reader="main" entity-group-reader="main"
        entity-eca-reader="main" distributed-cache-clear-enabled="false">
      <group-map group-name="org.ofbiz" datasource-name="localpostnew"/>
      <group-map group-name="org.ofbiz.olap" datasource-name="localderbyolap"/>
    </delegator>

4. Save and close the entityengine.xml file.
5. From the OFBiz install directory, rebuild OFBiz by running the ant run-install command.
6. Start OFBiz.
7. Test by observing that the database was created and populated. You may use the WebTools entity reference page (https://localhost:8443/webtools/control/entityref) to search for your newly created entities, or use a third-party tool designed to work with your specific database.

How it works...

The Entity Engine is configured using the entityengine.xml file. Whenever OFBiz is restarted, the Entity Engine initializes itself by first referencing this file, and then building and testing all the designated database connections. In this way, an unlimited number of data source connections, database types, and even low-level driver combinations may be applied at runtime without affecting the higher-level database access logic.

By abstracting the connection using one or more delegators, OFBiz further offloads low-level database connection management from the developer, and handles all connection maintenance, data mappings, and the default transaction configuration for an unlimited number of target databases. To configure one or more database connections, add datasource element declarations with the appropriate settings. To specify that the Entity Engine should connect to a database using a JDBC driver, and to configure the specific connection parameters to pass, set the attributes on the inline-jdbc element.

Connecting to a remote database

A "remote" database is any data source that is not the default Derby database. A remote database may be network connected and/or installed on the local server. The Entity Engine supports simultaneous connections to an unlimited number of remote databases in addition to, or as a replacement for, the default instance of Derby. Each remote database connection requires a datasource element entry in the entityengine.xml file. Adding and removing database connections may be performed at any time; however, changes to the entityengine.xml file only take effect when OFBiz is restarted.
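To show what the "read and write data" side of the Entity Engine looks like from application code, here is a small hypothetical sketch; the ExampleItem entity and its fields are invented for illustration, and the method names follow the OFBiz 9/10-era delegator API, so check them against your release:

    import org.ofbiz.base.util.UtilMisc;
    import org.ofbiz.entity.GenericDelegator;
    import org.ofbiz.entity.GenericValue;

    public class EntityEngineSketch {
        public static void main(String[] args) throws Exception {
            // Obtain the "default" delegator configured in entityengine.xml.
            GenericDelegator delegator = GenericDelegator.getGenericDelegator("default");

            // Create a record (an "entity value"); ExampleItem and its fields are hypothetical.
            GenericValue item = delegator.makeValue("ExampleItem",
                    UtilMisc.toMap("exampleItemId", "1000", "description", "A sample row"));
            delegator.create(item);

            // Retrieve it again by primary key, without caring which datasource backs it.
            GenericValue loaded = delegator.findByPrimaryKey("ExampleItem",
                    UtilMisc.toMap("exampleItemId", "1000"));
            System.out.println(loaded.getString("description"));
        }
    }

The point of the sketch is that the same calls work unchanged whether the group-map for the entity points at Derby, PostgreSQL, or any other configured datasource.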

Using the OSGi Bundle Repository in OSGi and Apache Felix 3.0

Packt
03 Nov 2010
5 min read
OSGi and Apache Felix 3.0 Beginner's Guide

Introduction

The OSGi Bundle Repository (OBR) is a draft specification from the OSGi Alliance for a service that provides access to a set of remote bundle repositories. Each remote repository, potentially a front for a federation of repositories, provides a list of bundles available for download, along with some additional information about them. Access to an OBR repository can be through a defined API to a remote service, or as a direct connection to an XML repository file. The bundles declared in an OBR repository can then be downloaded and installed into an OSGi framework like Felix. We will go through this install process a bit later. The OSGi specification for OBRs is currently in the draft state, which means that it may change before it is released.

The OBR bundle exposes a service that is registered with the framework. This interface can be used by other components on the framework to inspect repositories, download bundles, and install them. The Gogo command bundle also registers commands that interact with the OBR service to achieve the same purpose. Later in this article, we will cover those commands. API-based interaction with the service is not covered, as it is beyond the scope of this article.

The OBR service currently implements remote XML repositories only. However, the Repository interface defined by the OBR service can be implemented for other potential types of repositories, as well as for a direct API integration. There are a few OSGi repositories out there; here are some examples:

- Apache Felix: http://felix.apache.org/obr/releases.xml
- Apache Sling: http://sling.apache.org/obr/sling.xml
- Paremus: http://sigil.codecauldron.org/spring-external.obr and http://sigil.codecauldron.org/spring-release.obr

Those may be of use later, as a source for the dependencies of your project.

The repository XML descriptor

We already have an OBR repository available to us: our releases repository. Typically, you'll rarely need to look into the repository XML file. However, it's a good validation step when investigating issues with the deploy/install process. Let's inspect some of its contents:

    <repository lastmodified='20100905070524.031'>

Not included in the automatically created repository file above is the optional repository name attribute. The repository contains a list of resources that it makes available for download. Here, we're inspecting the entry for the bundle com.packtpub.felix.bookshelf-inventory-api:

    <resource id='com.packtpub.felix.bookshelf-inventory-api/1.4.0'
        symbolicname='com.packtpub.felix.bookshelf-inventory-api'
        presentationname='Bookshelf Inventory API'
        uri='file:/C:/projects/felixbook/releases/com/packtpub/felix/com.packtpub.felix.bookshelf-inventory-api/1.4.0/com.packtpub.felix.bookshelf-inventory-api-1.4.0.jar'
        version='1.4.0'>
      <description>Defines the API for the Bookshelf inventory.</description>
      <size>7781</size>
      <category id='sample'/>
      <capability name='bundle'>
        <p n='symbolicname' v='com.packtpub.felix.bookshelf-inventory-api'/>
        <p n='presentationname' v='Bookshelf Inventory API'/>
        <p n='version' t='version' v='1.4.0'/>
        <p n='manifestversion' v='2'/>
      </capability>
      <capability name='package'>
        <p n='package' v='com.packtpub.felix.bookshelf.inventory.api'/>
        <p n='version' t='version' v='0.0.0'/>
      </capability>
      <require name='package'
          filter='(&amp;(package=com.packtpub.felix.bookshelf.inventory.api))'
          extend='false' multiple='false' optional='false'>
        Import package com.packtpub.felix.bookshelf.inventory.api
      </require>
    </resource>

Notice the bundle location (the uri attribute), which points to where the bundle can be downloaded, relative to the base repository location. The presentationname is used when listing the bundles, and the uri is used to get the bundle when a request to install it is issued.

Inside the main resource entry tag are further bundle characteristics: a description, its capabilities, its requirements, and so on. Although the same information is included in the bundle manifest, it is also included in the repository XML for quick access during validation of the environment, before the actual bundle is downloaded. For example, the package capability elements describe the packages that this bundle exports:

    <capability name="package">
      <p n="package" v="com.packtpub.felix.bookshelf.inventory.api"/>
      <p n="version" t="version" v="0.0.0"/>
    </capability>

The require elements describe the bundle's requirements from the target platform:

    <require extend="false"
        filter="(&amp;(package=com.packtpub.felix.bookshelf.inventory.api)(version&gt;=0.0.0))"
        multiple="false" name="package" optional="false">
      Import package com.packtpub.felix.bookshelf.inventory.api
    </require>
    </resource>
    <!-- ... -->
    </repository>

The preceding excerpts correspond, respectively, to the Export-Package and Import-Package manifest headers. Each bundle may have more than one entry in the repository XML: one entry for every deployed version.

Updating the OBR repository

The Felix Maven Bundle Plugin attaches to the deploy phase to automate the bundle deployment and the update of the repository.xml file.

Using the OBR scope commands

The Gogo command bundle registers a set of commands for interacting with the OBR service. Those commands allow registering repositories, listing their bundles, and requesting their download and installation. Let's look at those commands in detail.
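As a taste of what follows, here is a hypothetical Gogo session using those OBR commands; the repository URL is just an example, and the command set reflects the Felix OBR bundle of that era (run help obr in your own shell to confirm):

    g! obr:repos add file:/C:/projects/felixbook/releases/repository.xml
    g! obr:repos list
    g! obr:list
    g! obr:deploy -s "Bookshelf Inventory API"

The first two commands register a repository XML file and show the repositories currently known to the service, obr:list shows the bundles they advertise, and obr:deploy -s downloads a bundle (plus its requirements), installs it, and starts it.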

ASP.Net Site Performance: Reducing Page Load Time

Packt
28 Oct 2010
10 min read
The JavaScript code for a page falls into two groups: code required to render the page, and code required to handle user interface events, such as button clicks. The rendering code makes the page look better and attaches event handlers to, for example, buttons.

Although the rendering code needs to be loaded and executed along with the page itself, the user interface code can be loaded later, in response to a user interface event such as a button click. That reduces the amount of code to be loaded upfront, and therefore the time that rendering of the page is blocked. It also reduces your bandwidth costs, because the user interface code is loaded only when it's actually needed. On the other hand, it does require separating the user interface code from the rendering code. You then need to invoke code that potentially hasn't loaded yet, tell the visitor that the code is loading, and finally invoke the code after it has loaded. Let's see how to make this all happen.

Separating user interface code from render code

Depending on how your JavaScript code is structured, this could be your biggest challenge in implementing on-demand loading. Make sure the time you're likely to spend on this, and on the subsequent testing and debugging, is worth the performance improvement you're likely to gain. A very handy tool that identifies which code is used while loading the page is Page Speed, an add-on for Firefox. Besides identifying code that doesn't need to be loaded upfront, it reports many speed-related issues on your web page. Information on Page Speed is available at http://code.google.com/speed/page-speed/.

OnDemandLoader library

Assuming your user interface code is separated from your render code, it is time to look at implementing actual on-demand loading. To keep it simple, we'll use OnDemandLoader, a simple low-footprint object. You'll find it in the downloaded code bundle in the folder OnDemandLoad, in the file OnDemandLoader.js. OnDemandLoader has the following features:

- It allows you to specify, for each event-handler function, the script in which it is defined.
- It allows you to specify that a particular script depends on some other script; for example, Button1Code.js depends on library code in UILibrary1.js. A script file can depend on multiple other script files, and those script files can in turn depend on yet other script files.
- It exposes the function runf, which takes the name of a function, arguments to call it with, and the this pointer to use while it's being executed. If the function is already defined, runf calls it right away. Otherwise, it loads all the necessary script files and then calls the function.
- It exposes the function loadScript, which loads a given script file and all the script files it depends on. Function runf uses this function to load script files.
- While script files are being loaded in response to a user interface event, a "Loading..." box appears on top of the affected control. That way, the visitor knows that the page is working to execute their action.
- If a script file has already been loaded, or if it is already loading, it won't be loaded again.
- If the visitor repeats the same action while the associated code is loading, such as clicking the same button, that event is handled only once. If the visitor clicks a second button or takes some other action while the code for the first button is still loading, both events are handled.

A drawback of OnDemandLoader is that it always loads all the required scripts in parallel.
If one script automatically executes a function that is defined in another script, there will be a JavaScript error if the other script hasn't loaded yet. However, if your library script files only define functions and other objects, OnDemandLoader will work well.

Initializing OnDemandLoader

OnDemandLoading.aspx in the folder OnDemandLoad in the downloaded code bundle is a worked-out example of a page using on-demand loading. It delays the loading of JavaScript files by five seconds, to simulate slowly loading files. Only OnDemandLoader.js loads at normal speed. If you open OnDemandLoading.aspx, you'll find that it defines two arrays: the script map array and the script dependencies array. These are needed to construct the loader object that will take care of the on-demand loading.

The script map array lists, for each function, the script file in which it is defined:

    var scriptMap = [
        { fname: 'btn1a_click', src: 'js/Button1Code.js' },
        { fname: 'btn1b_click', src: 'js/Button1Code.js' },
        { fname: 'btn2_click', src: 'js/Button2Code.js' }
    ];

Here, the functions btn1a_click and btn1b_click live in the script file js/Button1Code.js, while the function btn2_click lives in the script file js/Button2Code.js. The second array defines, for each script file, which other script files it needs in order to run:

    var scriptDependencies = [
        { src: '/js/Button1Code.js', testSymbol: 'btn1a_click', dependentOn: ['/js/UILibrary1.js', '/js/UILibrary2.js'] },
        { src: '/js/Button2Code.js', testSymbol: 'btn2_click', dependentOn: ['/js/UILibrary2.js'] },
        { src: '/js/UILibrary2.js', testSymbol: 'uifunction2', dependentOn: [] },
        { src: '/js/UILibrary1.js', testSymbol: 'uifunction1', dependentOn: ['/js/UILibrary2.js'] }
    ];

This says that Button1Code.js depends on UILibrary1.js and UILibrary2.js, and that Button2Code.js depends on UILibrary2.js. Further, UILibrary1.js relies on UILibrary2.js, while UILibrary2.js doesn't require any other script files. The testSymbol field holds the name of a function defined in the script. Any function will do, as long as it is defined in the script. This way, the on-demand loader can determine whether a script has been loaded by testing whether that name has been defined. With these two pieces of information, we can construct the loader object:

    <script type="text/javascript" src="js/OnDemandLoader.js"></script>

    var loader = new OnDemandLoader(scriptMap, scriptDependencies);

Now that the loader object has been created, let's see how to invoke user interface handler functions before their code has been loaded.

Invoking not-yet-loaded functions

The point of on-demand loading is that the visitor is allowed to take an action for which the code hasn't been loaded yet. How do you invoke a function that hasn't been defined yet? Here, you'll see two approaches:

- Call a loader function and pass it the name of the function to load and execute.
- Create a stub function with the same name as the function you want to execute, and have the stub load and execute the actual function.

Let's focus on the first approach first. The OnDemandLoader object exposes a loader function runf that takes the name of a function to call, the arguments to call it with, and the current this pointer:

    function runf(fname, thisObj) {
        // implementation
    }

Wait a minute! This signature shows a function name parameter and the this pointer, but what about the arguments to call the function with? One of the amazing features of JavaScript is that you can pass as few or as many parameters as you want to a function, irrespective of the signature.
Within each function, you can access all the parameters via the built-in arguments array. The signature is simply a convenience that allows you to name some of the arguments. This means that you can call runf as shown:

    loader.runf('myfunction', this, 'argument1', 'argument2');

If, for example, your original HTML has a button as shown:

    <input id="btn1a" type="button" value="Button 1a"
        onclick="btn1a_click(this.value, 'more info')" />

then to have btn1a_click loaded on demand, rewrite this to the following (file OnDemandLoading.aspx):

    <input id="btn1a" type="button" value="Button 1a"
        onclick="loader.runf('btn1a_click', this, this.value, 'more info')" />

If, in the original HTML, the click handler function was assigned to the button programmatically, as shown:

    <input id="btn1b" type="button" value="Button 1b" />
    <script type="text/javascript">
        window.onload = function() {
            document.getElementById('btn1b').onclick = btn1b_click;
        }
    </script>

then use an anonymous function that calls loader.runf with the function to execute:

    <input id="btn1b" type="button" value="Button 1b" />
    <script type="text/javascript">
        window.onload = function() {
            document.getElementById('btn1b').onclick = function() {
                loader.runf('btn1b_click', this);
            }
        }
    </script>

This is where you can use the second approach: the stub function. Instead of changing the HTML of your controls, you can load a stub function upfront, before the page renders (file OnDemandLoading.aspx):

    function btn1b_click() {
        loader.runf('btn1b_click', this);
    }

When the visitor clicks the button, the stub function is executed. It then calls loader.runf to load and execute its namesake that does the actual work, overwriting the stub function in the process. This leaves behind one problem: the on-demand loader checks whether a function with the given name is already defined before initiating a script load, and a function with that same name already exists, namely the stub function itself.

The solution is based on the fact that functions in JavaScript are objects, and all JavaScript objects can have properties. You can tell the on-demand loader that a function is a stub by attaching the property stub:

    btn1b_click.stub = true;

To see all this functionality in action, run the OnDemandLoading.aspx page in the folder OnDemandLoad in the downloaded code bundle. Click one of the buttons on the page, and you'll see how the required code is loaded on demand. It's best to do this in Firefox with Firebug installed, so that you can see the script files getting loaded in a Waterfall chart.

Preloading

Now that you have on-demand loading working, there is one more issue to consider: trading off bandwidth against visitor wait time. Currently, when a visitor clicks a button and the code required to process the click hasn't been loaded, loading starts in response to the click. This can be a problem if loading the code takes too much time. An alternative is to initiate loading the user interface code after the page has loaded, instead of when a user interface event happens. That way, the code may have already loaded by the time the visitor clicks the button; or at least it will already be partly loaded, so that the code finishes loading sooner. On the other hand, this means expending bandwidth on loading code that may never be used by the visitor.

You can implement preloading with the loadScript function exposed by the OnDemandLoader object. As you saw earlier, this function loads a JavaScript file plus any files it depends on, without blocking rendering.
Simply add calls to loadScript in the onload handler of the page, as shown (page PreLoad.aspx in the folder OnDemandLoad in the downloaded code bundle):

    <script type="text/javascript">
        window.onload = function() {
            document.getElementById('btn1b').onclick = btn1b_click;
            loader.loadScript('js/Button1Code.js');
            loader.loadScript('js/Button2Code.js');
        }
    </script>

You could preload all your user interface code, or just the code you think is likely to be needed. Now that you've looked at the load-on-demand approach, it's time to consider the last approach: loading your code without blocking page rendering and without getting into stub functions or other complications inherent in on-demand loading.
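The article treats OnDemandLoader as a black box. As a rough illustration of the idea behind loadScript (not the actual library code), a stripped-down loader that injects a script tag and fires a callback might look like this; it skips the dependency bookkeeping and the "Loading..." indicator that the real object provides:

    // Illustrative sketch only; OnDemandLoader.js also tracks dependencies and shows a "Loading..." box.
    var requestedScripts = {};

    function loadScriptSketch(src, onLoaded) {
        if (requestedScripts[src]) {
            // Already requested; a real loader would queue the callback until the script arrives.
            return;
        }
        requestedScripts[src] = true;

        var script = document.createElement('script');      // inject a <script> tag into the head
        script.type = 'text/javascript';
        script.src = src;
        script.onload = function () {                        // older IE versions would need onreadystatechange
            if (onLoaded) { onLoaded(); }
        };
        document.getElementsByTagName('head')[0].appendChild(script);
    }

    // Usage: load the handler's script, then invoke the handler.
    loadScriptSketch('js/Button1Code.js', function () {
        btn1a_click('Button 1a', 'more info');
    });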

ASP.Net Site Performance: Improving JavaScript Loading

Packt
28 Oct 2010
11 min read
ASP.NET Site Performance Secrets: Simple and proven techniques to quickly speed up your ASP.NET website

- Speed up your ASP.NET website by identifying and fixing the bottlenecks that hold back its performance
- Tips and tricks for writing faster code and pinpointing those areas in the code that matter most, thus saving time and energy
- Drastically reduce page load times
- Configure and improve compression: the single most important way to improve your site's performance
- Written in a simple problem-solving manner, with a practical hands-on approach and just the right amount of theory you need to make sense of it all

One approach to improving page performance is to shift functionality from the server to the browser. Instead of calculating a result or validating a form in C# on the server, you use JavaScript code in the browser. A drawback of this approach is that it involves physically moving code from the server to the browser. Because JavaScript is not compiled, it can be quite bulky. This can affect page load times, especially if you use large JavaScript libraries. You're effectively trading off increased page load times against faster response times after the page has loaded.

In this article by Matt Perdeck, author of ASP.NET Site Performance Secrets, you'll see how to reduce the impact that loading JavaScript files has on page load times. It shows:

- How JavaScript files can block rendering of the page while they are being loaded and executed
- How to load JavaScript in parallel with other resources
- How to load JavaScript more quickly

Problem: JavaScript loading blocks page rendering

JavaScript files are static files, just like images and CSS files. However, unlike images, when a JavaScript file is loaded or executed using a <script> tag, rendering of the page is suspended. This makes sense, because the page may contain script blocks after the <script> tag that depend on the JavaScript file. If loading a JavaScript file didn't block page rendering, those blocks could be executed before the file had loaded, leading to JavaScript errors.

Confirming with a test site

You can confirm that loading a JavaScript file blocks rendering of the page by running the website in the folder JavaScriptBlocksRendering in the downloaded code bundle. This site consists of a single page that loads a single script, script1.js. It also has a single image, chemistry.png, and a stylesheet, style1.css. It uses an HTTP module that suspends the worker thread for five seconds when a JavaScript file is loaded. Images and CSS files are delayed by about two seconds. When you load the page, you'll see that the page content appears only after about five seconds. Then, after two seconds, the image appears, unless you use Firefox, which often loads images in parallel with the JavaScript. If you make a Waterfall chart, you can see how the image and stylesheet are loaded after the JavaScript file, instead of in parallel.

To get the delays, run the test site on IIS 7 in integrated pipeline mode. Do not use the Cassini web server built into Visual Studio. If you find that there is no delay, clear the browser cache. If that doesn't work either, the files may be in the kernel cache on the server; remove them by restarting IIS using Internet Information Services (IIS) Manager.
To open IIS Manager, click on Start | Control Panel, type "admin" in the search box, click on Administrative Tools, and then double-click on Internet Information Services (IIS) Manager.

Integrated/Classic pipeline mode

As in IIS 6, every website runs as part of an application pool in IIS 7. Each IIS 7 application pool can be switched between Integrated Pipeline Mode (the default) and Classic Pipeline Mode. In Integrated Pipeline Mode, the ASP.NET runtime is integrated with the core web server, so that the server can be managed, for example, via web.config elements. In Classic Pipeline Mode, IIS 7 functions more like IIS 6, where ASP.NET runs within an ISAPI extension.

Approaches to reduce the impact on load times

Although it makes sense to suspend rendering the page while a <script> tag loads or executes JavaScript, it would still be good to minimize the time visitors have to wait for the page to appear, especially if there is a lot of JavaScript to load. Here are a few ways to do that:

- Start loading the JavaScript after other components have started loading, such as images and CSS files. That way, the other components load in parallel with the JavaScript instead of after it, and so are available sooner when page rendering resumes.
- Load the JavaScript more quickly. Page rendering is still blocked, but for less time.
- Load JavaScript on demand. Only load upfront the JavaScript you need to render the page; load the JavaScript that handles button clicks, and so on, when you need it.
- Use specific techniques to prevent JavaScript loading from blocking rendering. This includes loading the JavaScript after the page has rendered, or in parallel with page rendering.

These approaches can be combined or used on their own for the best trade-off between development time and performance. Let's go through each approach.

Approach: Start loading after other components

This approach aims to render the page sooner by loading CSS stylesheets and images in parallel with the JavaScript rather than after it. That way, when the JavaScript has finished loading, the CSS and images will have finished loading too and will be ready to use; or at least it will take less time for them to finish loading after the JavaScript has loaded. To load the CSS stylesheets and images in parallel with the JavaScript, you start loading them before you start loading the JavaScript. In the case of CSS stylesheets, that is easy: simply place their <link> tags before the <script> tags:

    <link rel="Stylesheet" type="text/css" href="css/style1.css" />
    <script type="text/javascript" src="js/script1.js"></script>

Starting the loading of images is slightly trickier, because images are normally loaded when the page body is evaluated, not as part of the page head. In the test page you just saw with the image chemistry.png, you can use a bit of simple JavaScript to get the browser to start loading the image before it starts loading the JavaScript file. This is referred to as "image preloading" (page PreLoadWithJavaScript.aspx in the folder PreLoadImages in the downloaded code bundle):

    <script type="text/javascript">
        var img1 = new Image();
        img1.src = "images/chemistry.png";
    </script>
    <link rel="Stylesheet" type="text/css" href="css/style1.css" />
    <script type="text/javascript" src="js/script1.js"></script>

Run the page now and make a Waterfall chart: when the page is rendered after the JavaScript has loaded, the image and CSS files have already been loaded, so the image shows up right away.
A second option is to use invisible image tags at the start of the page body that preload the images. You can make the image tags invisible by using the style display:none. You would have to move the <script> tags from the page head to the page body, after the invisible image tags, as shown (page PreLoadWithCss.aspx in the folder PreLoadImages in the downloaded code bundle):

    <body>
        <div style="display:none">
            <img src="images/chemistry.png" />
        </div>
        <script type="text/javascript" src="js/script1.js"></script>

Although the examples we've seen so far preload only one image, chemistry.png, you could easily preload multiple images. When you do, it makes sense to preload the most important images first, so that they are most likely to appear right away when the page renders. The browser loads components, such as images, in the order in which they appear in the HTML, so you'd wind up with something similar to the following code:

    <script type="text/javascript">
        var img1 = new Image();
        img1.src = "images/important.png";
        var img2 = new Image();
        img2.src = "images/notsoimportant.png";
        var img3 = new Image();
        img3.src = "images/unimportant.png";
    </script>

Approach: Loading JavaScript more quickly

The second approach is to simply spend less time loading the same JavaScript, so that visitors spend less time waiting for the page to render. There are a number of ways to achieve just that:

- Techniques used with images, such as caching and parallel download
- Free Content Delivery Networks
- GZIP compression
- Minification
- Combining or breaking up JavaScript files
- Removing unused code

Techniques used with images

JavaScript files are static files, just like images and CSS files. This means that many techniques that apply to images apply to JavaScript files as well, including the use of cookie-free domains, caching, and boosting parallel loading.

Free Content Delivery Networks

Serving static files from a Content Delivery Network (CDN) can greatly reduce download times, by serving the files from a server that is close to the visitor. A CDN also saves you bandwidth, because the files are no longer served from your own server. A number of companies now serve popular JavaScript libraries from their CDNs for free:

- Google AJAX Libraries API (http://code.google.com/apis/ajaxlibs/): serves a wide range of libraries, including jQuery, jQuery UI, Prototype, Dojo, and the Yahoo! User Interface Library (YUI)
- Microsoft Ajax Content Delivery Network (http://www.asp.net/ajaxlibrary/cdn.ashx): serves libraries used by the ASP.NET and ASP.NET MVC frameworks, including the jQuery library and the jQuery Validation plugin
- jQuery CDN (http://docs.jquery.com/Downloading_jQuery): serves the jQuery library

In ASP.NET 4.0 and later, you can get the ScriptManager control to load the ASP.NET AJAX script files from the Microsoft AJAX CDN instead of your web server, by setting the EnableCdn property to true:

    <asp:ScriptManager ID="ScriptManager1" EnableCdn="true" runat="server" />

One issue with loading libraries from a CDN is that it creates another point of failure: if the CDN goes down, your site is crippled.

GZIP compression

IIS has the ability to compress content sent to the browser, including JavaScript and CSS files. Compression can make a dramatic difference to a JavaScript file as it goes over the wire from the server to the browser. Take, for example, the production version of the jQuery library:

                        Uncompressed    Compressed
    jQuery library      78 KB           26 KB

Compression for static files is enabled by default in IIS 7.
This immediately benefits CSS files. It should also immediately benefit JavaScript files, but it doesn't, because of a quirk in the default configuration of IIS 7. Not all static files benefit from compression; for example, JPEG, PNG, and GIF files are already inherently compressed because of their format. To cater to this, the IIS 7 configuration file applicationHost.config contains a list of mime types that get compressed when static compression is enabled:

    <staticTypes>
        <add mimeType="text/*" enabled="true" />
        <add mimeType="message/*" enabled="true" />
        <add mimeType="application/javascript" enabled="true" />
        <add mimeType="*/*" enabled="false" />
    </staticTypes>

To allow IIS to figure out what mime type a particular file has, applicationHost.config also contains default mappings from file extensions to mime types, including this one:

    <staticContent lockAttributes="isDocFooterFileName">
        ...
        <mimeMap fileExtension=".js" mimeType="application/x-javascript" />
        ...
    </staticContent>

If you look closely, you'll see that the .js extension is mapped by default to a mime type that isn't in the list of mime types to be compressed when static file compression is enabled. The easiest way to solve this is to modify your site's web.config so that it maps the extension .js to the mime type text/javascript. This matches text/* in the list of mime types to be compressed, so IIS 7 will now compress JavaScript files with the extension .js (folder Minify in the downloaded code bundle):

    <system.webServer>
        <staticContent>
            <remove fileExtension=".js" />
            <mimeMap fileExtension=".js" mimeType="text/javascript" />
        </staticContent>
    </system.webServer>

Keep in mind that IIS 7 only applies static compression to files that are "frequently" requested. This means that the first time you request a file, it won't be compressed! Refresh the page a couple of times and compression will kick in.
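If you want compression to kick in immediately while testing, IIS exposes the "frequent hit" thresholds through the serverRuntime element. Treat the snippet below as an assumption to verify against your IIS version; the section may also need to be unlocked at the applicationHost.config level before it can be set in a site's web.config:

    <system.webServer>
        <!-- Lower the frequent-hit threshold so the first request within a 10-second window qualifies. -->
        <serverRuntime frequentHitThreshold="1" frequentHitTimePeriod="00:00:10" />
    </system.webServer>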

Building Your First Bean in ColdFusion

Packt
22 Oct 2010
10 min read
Object-Oriented Programming in ColdFusion: Break free from procedural programming and learn how to optimize your applications and enhance your skills using objects and design patterns

- Fast-paced, easy-to-follow guide introducing object-oriented programming for ColdFusion developers
- Enhance your applications by building structured applications utilizing basic design patterns and object-oriented principles
- Streamline your code base with reusable, modular objects
- Packed with example code and useful snippets

What is a Bean?

Although the terminology evokes an initial reaction of cooking ingredients or a tin of food from a supermarket, the Bean is an incredibly important piece of the object-oriented design pattern. The term 'Bean' originates from the Java programming language, and for those developers out there who enjoy their coffee as much as I do, the thought process behind it will make sense: Java = Coffee = Coffee Bean = Bean.

A Bean is basically the building block for your object. Think of it as a blueprint for the information you want each object to hold and contain. In relation to other ColdFusion components, the Bean is a relatively simple CFC that primarily has two roles in life:

- to store information or a collection of values
- to return the information or collection of values when required

But what is it really? Typically, a ColdFusion bean is a single CFC built to encapsulate and store a single record of data, not a record-set query result, which would normally hold more than one record. This is not to say that the information within the Bean should only be pulled from one record within a database table, or that the data needs to be only a single string; far from it. You can include information in your Bean from any source at your disposal; however, the Bean can only ever contain one set of information.

Your Bean represents a specific entity. This entity could be a person, a car, or a building. Essentially, any 'single' object can be represented by a bean in terms of development. The Bean holds information about the entity it is written for. Imagine we have a Bean to represent a person, and this Bean will hold details on that individual's name, age, hair color, and so on. These details are the properties for the entity, and together they make up the completed Bean for that person.

In reality, the idea of the Bean itself is incredibly similar to a structure. You could easily represent the person entity in the form of a structure, as follows:

    <!--- Build an empty structure to emulate a Person entity. --->
    <cfset stuPerson = {
        name = '',
        age = '',
        hairColor = ''
    } />

Listing 3.1 – Creating an entity structure

This seems like an entirely feasible way to hold your data, right? To some extent it is. You have a structure, complete with properties for the object/entity, wrapped up into one tidy package.
You can easily update the structure to hold the properties for the individual, and retrieve the information for each property, as seen in the following code example:

    <!--- Build an empty structure to emulate a Person entity. --->
    <cfset stuPerson = {
        name = '',
        age = '',
        hairColor = ''
    } />
    <cfdump var="#stuPerson#" label="Person - empty data" />

    <!--- Update the structure with data and display the output --->
    <cfset StructUpdate( stuPerson, 'name', 'Matt Gifford') />
    <cfset StructUpdate( stuPerson, 'hairColor', 'Brown') />
    <br />
    <cfdump var="#stuPerson#" label="Person - data added" />
    <br />
    <cfoutput>
        Name: #stuPerson.name#<br />
        Hair: #stuPerson.hairColor#
    </cfoutput>

Listing 3.2 – Populating the entity structure

Although structures are an incredibly simple method of retaining and accessing data, particularly when looking at the code, they do not suit the purpose of a blueprint for an entity very well; as soon as you have populated the structure, it is no longer a blueprint but a specific entity.

Imagine that you were reading data from the database and wanted to use the structure for every person drawn out of the query. Sure enough, you could create a standard structure that was persistent in the Application scope, for example. You could then loop through the query and populate the structure with the record-set results. For every person object you wanted, you could run ColdFusion's built-in Duplicate() function to create a copy of the original 'base' structure and apply it to a new variable. Or perhaps the structure might need to be written again on every page it is required in, or maybe written in a separate .cfm page that is included into the template using cfinclude.

Perhaps over time your application will grow, requirements will change, and extra details will need to be stored. You would then be faced with the task of changing and updating every instance of the structure across your entire application to include additional keys and values, or remove some from the structure. This route could have you searching for code, testing, and debugging at every turn, and would not be the best way to optimize your development time or to enhance the scalability of your application. Taking the time to invest in your code base and development practices from the start will greatly enhance your application and development time, and go some way toward reducing the unnecessary headaches caused by spaghetti code and lack of structure.

The benefit of using beans

By creating a Bean for each entity within your application, you have created a specific blueprint for the data we wish to hold for that entity. The rules of encapsulation are adhered to, and nothing is hard-coded into our CFC. We have already seen how our objects are created, and how we can pass variables and data into them, whether through the init() method during instantiation or as an argument when calling an included function within the component.

Every time you need to use the blueprint for the Person class, you can simply create an instance of the Bean. You instantly have a completely fresh object ready to populate with specific properties, and you can create a fully populated Person object in just one line of code within your application. The main purpose of a Bean in object-oriented development is to capture and encapsulate a variety of different objects, be they structures, arrays, queries, or strings, into one single object: the Bean itself.
The Bean can then be passed around your application where required, containing all of the included information, instead of the application sending the many individual objects around or storing each one in, for example, the Application or Session scope, which could get messy. This creates a nicely packaged container that holds all of the information we need to send and use within our applications, and acts as a much easier way to manage the data we are passing around.

If your blueprints need updating, for example if more properties need to be added to the objects, you only have one file to modify: the CFC of the Bean itself. This instantly removes the problematic issue of having to search for every instance of every structure for your object throughout your entire code base. A Bean essentially provides you with a consistent and elegant interface, which will help you to organize your data into objects, removing the need to create, persist, and replicate ad hoc structures.

Creating our first Bean

Let's look at creating a Bean for use with the projects table in the database. We'll continue with the Person Bean as the primary example, and create the CFC to handle person objects.

An introduction to UML

Before we start coding the component, let's have a quick look at a visual representation of the object using Unified Modeling Language (UML). UML is a widely used method to display, share, and reference objects, classes, workflows, and structures in the world of object-oriented programming, which you will come into contact with during your time with OOP development. The modeling language itself is incredibly detailed and in-depth, and can express a wide array of details and information.

Person object in UML

In this example, let's take a look at the basics of UML and the visual representation of the Person component that we will create. At first glance, you can instantly see what variables and functions our component consists of. As with most UML objects, the diagram is broken into segments for easier digestion. The actual name of the component is clearly visible within the top section of the diagram.

In the second section, we include the variables that will be held within our object. These have a '-' character in front of them, to indicate that they are private and hidden within the component (they are not accessible externally). These variables are followed by the variable type, separated by a colon (':'). This lets you easily see which variable type is expected. In this example, we can see that all of the variables are strings.

In the bottom section of the diagram we include the function references, which contain all methods within the component. All of the functions are prefixed with a '+' to indicate that they are publicly accessible, and so are available to be called externally from the component itself. For any functions that require parameters, the parameters are included inside the parentheses. If a function returns a value, the return type is specified after the ':'.

Based upon this UML diagram, let's create the core wrapper for the CFC, and create the constructor method: the init() function. Create a new file called Person.cfc and save this file in the following location within your project folder: com/packtApp/oop/beans.
    <cfcomponent displayname="Person" output="false" hint="I am the Person Class.">

        <cfproperty name="firstName" type="string" default="" />
        <cfproperty name="lastName" type="string" default="" />
        <cfproperty name="gender" type="string" default="" />
        <cfproperty name="dateofbirth" type="string" default="" />
        <cfproperty name="hairColor" type="string" default="" />

        <!--- Pseudo-constructor --->
        <cfset variables.instance = {
            firstName = '',
            lastName = '',
            gender = '',
            dateofbirth = '',
            hairColor = ''
        } />

        <cffunction name="init" access="public" output="false" returntype="any"
                hint="I am the constructor method for the Person Class.">
            <cfargument name="firstName" required="true" type="String" default="" hint="I am the first name." />
            <cfargument name="lastName" required="true" type="String" default="" hint="I am the last name." />
            <cfargument name="gender" required="true" type="String" default="" hint="I am the gender." />
            <cfargument name="dateofbirth" required="true" type="String" default="" hint="I am the date of birth." />
            <cfargument name="hairColor" required="true" type="String" default="" hint="I am the hair color." />

            <cfreturn this />
        </cffunction>

    </cfcomponent>

Listing 3.3 – com/packtApp/oop/beans/Person.cfc

Here, we have the init() method for Person.cfc and the arguments defined for each property within the object. The bean will hold the values of its properties within the variables.instance structure, which we have defined above the init() method as a pseudo-constructor.
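The excerpt stops before the accessor methods implied by the UML diagram. A hypothetical getter/setter pair for one property might look like the following sketch (the remaining properties would follow the same pattern); it is illustrative rather than the book's actual listing:

    <!--- Hypothetical accessors for the firstName property. --->
    <cffunction name="getFirstName" access="public" output="false" returntype="string"
            hint="I return the first name.">
        <cfreturn variables.instance.firstName />
    </cffunction>

    <cffunction name="setFirstName" access="public" output="false" returntype="void"
            hint="I set the first name.">
        <cfargument name="firstName" required="true" type="string" hint="I am the first name." />
        <cfset variables.instance.firstName = arguments.firstName />
    </cffunction>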
Creating Customized Dialog Boxes with WiX

Packt
22 Oct 2010
5 min read
The WiX toolset ships with several user interface wizards that are ready to use out of the box. We'll briefly discuss each of the available sets and then move on to learning how to create your own from scratch. In this article by Nick Ramirez, author of the book WiX: A Developer's Guide to Windows Installer XML, you'll learn about:

- Adding dialogs into the InstallUISequence
- Linking one dialog to another to form a complete wizard
- Getting basic text and window styling working
- Including necessary dialogs, like those needed to display errors

WiX standard dialog sets

The wizards that come prebuilt with WiX won't fit every need, but they're a good place to get your feet wet. To add any one of them, you first have to add a project reference to WixUIExtension.dll, which can be found in the bin directory of your WiX program files. Adding this reference is sort of like adding a new source file; this one contains dialogs. To use one, you'll need a UIRef element to pull the dialog into the scope of your project. For example, this line, anywhere inside the Product element, will add the "Minimal" wizard to your installer:

    <UIRef Id="WixUI_Minimal" />

It's definitely minimal, containing just one screen. It gives you a license agreement, which you can change by adding a WixVariable element with an Id of WixUILicenseRtf and a Value attribute that points to a Rich Text Format (.rtf) file containing your new license agreement:

    <WixVariable Id="WixUILicenseRtf" Value="newLicense.rtf" />

You can also override the background image (red wheel on the left, white box on the right) by setting another WixVariable, called WixUIDialogBmp, to a new image. The dimensions used are 493x312. The other available wizards offer more, and we'll cover them in the following sections.

WixUI_Advanced

The "Advanced" dialog set offers more: it has a screen that lets the user choose to install for just the current user or for all users, another where the end user can change the folder that files are installed to, and a screen with a feature tree where features can be turned on or off. You'll need to change your UIRef element to use WixUI_Advanced. This can be done by adding the following line:

    <UIRef Id="WixUI_Advanced" />

You'll also have to make sure that your install directory has an Id of APPLICATIONFOLDER, as in this example:

    <Directory Id="TARGETDIR" Name="SourceDir">
        <Directory Id="ProgramFilesFolder">
            <Directory Id="APPLICATIONFOLDER" Name="My Program" />
        </Directory>
    </Directory>

Next, set two properties: ApplicationFolderName and WixAppFolder. The first sets the name of the install directory as it will be displayed in the UI. The second sets whether this install should default to being per user or per machine; it can be either WixPerMachineFolder or WixPerUserFolder:

    <Property Id="ApplicationFolderName" Value="My Program" />
    <Property Id="WixAppFolder" Value="WixPerMachineFolder" />

This dialog set uses a bitmap that the Minimal installer doesn't: the white banner at the top. You can replace it with your own image by setting the WixUIBannerBmp variable. Its dimensions are 493x58. It would look something like this:

    <WixVariable Id="WixUIBannerBmp" Value="myBanner.bmp" />

WixUI_FeatureTree

The WixUI_FeatureTree wizard shows a feature tree like the Advanced wizard, but it doesn't have a dialog that lets the user change the install path.
To use it, you only need to set the UIRef to WixUI_FeatureTree, like so: <UIRef Id="WixUI_FeatureTree" /> This produces a window that allows you to choose features, as shown in the following screenshot: Notice that in the image, the Browse button is disabled. If any of your Feature elements have the ConfigurableDirectory attribute set to the Id of a Directory element, then this button will allow you to change where that feature is installed. The Directory element's Id must be all uppercase. WixUI_InstallDir WixUI_InstallDir shows a dialog where the user can change the installation path. Change the UIRef to WixUI_InstallDir, like so: <UIRef Id="WixUI_InstallDir" /> Here, the user can choose the installation path, as seen in the following screenshot: You'll have to set a property called WIXUI_INSTALLDIR to the Id you gave your install directory. So, if your directory structure used INSTALLDIR for the Id of the main install folder, use that as the value of the property. <Directory Id="TARGETDIR" Name="SourceDir"> <Directory Id="ProgramFilesFolder"> <Directory Id="INSTALLDIR" Name="My Program" /> </Directory></Directory> <Property Id="WIXUI_INSTALLDIR" Value="INSTALLDIR" /> WixUI_Mondo The WixUI_Mondo wizard gives the user the option of installing a "Typical", "Complete" or "Custom" install. Typical sets the INSTALLLEVEL property to 3 while Complete sets it to 1000. You can set the Level attribute of your Feature elements accordingly to include them in one group or the other. Selecting a Custom install will display a feature tree dialog where the user can choose exactly what they want. To use this wizard, change your UIRef element to WixUI_Mondo. <UIRef Id="WixUI_Mondo" /> This would result in a window like the following screenshot:
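To tie the pieces together, here is a minimal, self-contained .wxs sketch that wires up the WixUI_InstallDir set. It is an illustrative skeleton rather than code from the article: the product name, UpgradeCode GUID, License.rtf, and MyProgram.exe are placeholders you would replace with your own values.

<?xml version="1.0" encoding="UTF-8"?>
<Wix xmlns="http://schemas.microsoft.com/wix/2006/wi">
  <Product Id="*" Name="My Program" Language="1033" Version="1.0.0.0"
           Manufacturer="My Company"
           UpgradeCode="12345678-1234-1234-1234-123456789012">
    <Package InstallerVersion="200" Compressed="yes" />
    <Media Id="1" Cabinet="product.cab" EmbedCab="yes" />

    <!-- Directory structure; WIXUI_INSTALLDIR must point at the INSTALLDIR Id -->
    <Directory Id="TARGETDIR" Name="SourceDir">
      <Directory Id="ProgramFilesFolder">
        <Directory Id="INSTALLDIR" Name="My Program">
          <Component Id="MainExecutable" Guid="*">
            <File Id="MyProgramExe" Source="MyProgram.exe" KeyPath="yes" />
          </Component>
        </Directory>
      </Directory>
    </Directory>

    <Feature Id="Complete" Title="My Program" Level="1">
      <ComponentRef Id="MainExecutable" />
    </Feature>

    <!-- Hook up the InstallDir dialog set, tell it which folder to edit,
         and swap in your own license text -->
    <Property Id="WIXUI_INSTALLDIR" Value="INSTALLDIR" />
    <WixVariable Id="WixUILicenseRtf" Value="License.rtf" />
    <UIRef Id="WixUI_InstallDir" />
  </Product>
</Wix>

As with the earlier examples, this only builds if the project references WixUIExtension.dll.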


NHibernate 3.0: Using LINQ Specifications in the data access layer

Packt
21 Oct 2010
4 min read
  NHibernate 3.0 Cookbook Get solutions to common NHibernate problems to develop high-quality performance-critical data access applications Master the full range of NHibernate features Reduce hours of application development time and get better application architecture and performance Create, maintain, and update your database structure automatically with the help of NHibernate Written and tested for NHibernate 3.0 with input from the development team distilled in to easily accessible concepts and examples Part of Packt's Cookbook series: each recipe is a carefully organized sequence of instructions to complete the task as efficiently as possible         Read more about this book       (For more resources on NHibernate, see here.) Getting ready Download the LinqSpecs library from http://linqspecs.codeplex.com. Copy LinqSpecs.dll from the Downloads folder to your solution's libs folder. Complete the Setting up an NHibernate Repository recipe. How to do it... In Eg.Core.Data and Eg.Core.Data.Impl, add a reference to LinqSpecs.dll. Add these two methods to the IRepository interface. IEnumerable<T> FindAll(Specification<T> specification);T FindOne(Specification<T> specification); Add the following three methods to NHibernateRepository: public IEnumerable<T> FindAll(Specification<T> specification){ var query = GetQuery(specification); return Transact(() => query.ToList());}public T FindOne(Specification<T> specification){ var query = GetQuery(specification); return Transact(() => query.SingleOrDefault());}private IQueryable<T> GetQuery( Specification<T> specification){ return session.Query<T>() .Where(specification.IsSatisfiedBy());} Add the following specification to Eg.Core.Data.Queries: public class MoviesDirectedBy : Specification<Movie>{ private readonly string _director; public MoviesDirectedBy(string director) { _director = director; } public override Expression<Func<Movie, bool>> IsSatisfiedBy() { return m => m.Director == _director; }} Add another specification to Eg.Core.Data.Queries, using the following code: public class MoviesStarring : Specification<Movie>{ private readonly string _actor; public MoviesStarring(string actor) { _actor = actor; } public override Expression<Func<Movie, bool>> IsSatisfiedBy() { return m => m.Actors.Any(a => a.Actor == _actor); }} How it works... The specification pattern allows us to separate the process of selecting objects from the concern of which objects to select. The repository handles selecting objects, while the specification objects are concerned only with the objects that satisfy their requirements. In our specification objects, the IsSatisfiedBy method of the specification objects returns a LINQ expression to determine which objects to select. In the repository, we get an IQueryable from the session, pass this LINQ expression to the Where method, and execute the LINQ query. Only the objects that satisfy the specification will be returned. For a detailed explanation of the specification pattern, check out http://martinfowler.com/apsupp/spec.pdf. There's more... To use our new specifications with the repository, use the following code: var movies = repository.FindAll( new MoviesDirectedBy("Stephen Spielberg")); Specification composition We can also combine specifications to build more complex queries. 
For example, the following code will find all movies directed by Steven Spielberg starring Harrison Ford: var movies = repository.FindAll( new MoviesDirectedBy("Steven Spielberg") & new MoviesStarring("Harrison Ford")); Composing specifications may, however, produce expression trees that NHibernate is unable to parse, so be sure to test each combination. Summary In this article we covered: Using LINQ Specifications in the data access layer Further resources on this subject: NHibernate 3.0: Working with the Data Access Layer NHibernate 3.0: Using Named Queries in the Data Access Layer NHibernate 3.0: Using ICriteria and Paged Queries in the data access layer NHibernate 3.0: Testing Using NHibernate Profiler and SQLite Using the Fluent NHibernate Persistence Tester and the Ghostbusters Test
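As a further illustration of composition, each additional filter can live in its own small specification and be combined with the & operator shown above. The sketch below assumes a ReleaseDate property on Movie, which is not part of the example model in this recipe.

using System;
using System.Linq.Expressions;
using LinqSpecs;

public class MoviesReleasedAfter : Specification<Movie>
{
    private readonly DateTime _date;

    public MoviesReleasedAfter(DateTime date)
    {
        _date = date;
    }

    public override Expression<Func<Movie, bool>> IsSatisfiedBy()
    {
        // Movies released strictly after the given date
        return m => m.ReleaseDate > _date;
    }
}

// Composed the same way as the director/actor example:
// var movies = repository.FindAll(
//     new MoviesDirectedBy("Steven Spielberg")
//     & new MoviesReleasedAfter(new DateTime(1990, 1, 1)));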


NHibernate 3.0: Using ICriteria and Paged Queries in the Data Access Layer

Packt
21 Oct 2010
4 min read
NHibernate 3.0 Cookbook Get solutions to common NHibernate problems to develop high-quality performance-critical data access applications Master the full range of NHibernate features Reduce hours of application development time and get better application architecture and performance Create, maintain, and update your database structure automatically with the help of NHibernate Written and tested for NHibernate 3.0 with input from the development team distilled in to easily accessible concepts and examples Part of Packt's Cookbook series: each recipe is a carefully organized sequence of instructions to complete the task as efficiently as possible Using ICriteria in the data access layer For queries where the criteria are not known in advance, such as a website's advanced product search, ICriteria queries are more appropriate than named HQL queries. This article by Jason Dentler, author of NHibernate 3.0 Cookbook, shows how to use the same DAL infrastructure with ICriteria and QueryOver queries. In an effort to avoid overwhelming the user, and increase application responsiveness, large result sets are commonly broken into smaller pages of results. This article also shows how we can easily add paging to a QueryOver query object in our DAL. Getting ready Complete the previous recipe, Using Named Queries in the data access layer. How to do it... In Eg.Core.Data.Impl.Queries, add a new, empty, public interface named ICriteriaQuery. Add a class named CriteriaQueryBase with the following code: public abstract class CriteriaQueryBase<TResult> : NHibernateQueryBase<TResult>, ICriteriaQuery { public CriteriaQueryBase(ISessionFactory sessionFactory) : base(sessionFactory) { } public override TResult Execute() { var criteria = GetCriteria(); return Transact(() => Execute(criteria)); } protected abstract ICriteria GetCriteria(); protected abstract TResult Execute(ICriteria criteria); } In Eg.Core.Data.Queries, add the following enum: public enum AdvancedProductSearchSort { PriceAsc, PriceDesc, Name } Add a new interface named IAdvancedProductSearch with the following code: public interface IAdvancedProductSearch : IQuery<IEnumerable<Product>> { string Name { get; set; } string Description { get; set; } decimal? MinimumPrice { get; set; } decimal? MaximumPrice { get; set; } AdvancedProductSearchSort Sort { get; set; } } In Eg.Core.Data.Impl.Queries, add the following class: public class AdvancedProductSearch : CriteriaQueryBase<IEnumerable<Product>>, IAdvancedProductSearch { public AdvancedProductSearch(ISessionFactory sessionFactory) : base(sessionFactory) { } public string Name { get; set; } public string Description { get; set; } public decimal? MinimumPrice { get; set; } public decimal? 
MaximumPrice { get; set; } public AdvancedProductSearchSort Sort { get; set; } protected override ICriteria GetCriteria() { return GetProductQuery().UnderlyingCriteria; } protected override IEnumerable<Product> Execute( ICriteria criteria) { return criteria.List<Product>(); } private IQueryOver GetProductQuery() { var query = session.QueryOver<Product>(); AddProductCriterion(query); return query; } private void AddProductCriterion<T>( IQueryOver<T, T> query) where T : Product { if (!string.IsNullOrEmpty(Name)) query = query.WhereRestrictionOn(p => p.Name) .IsInsensitiveLike(Name, MatchMode.Anywhere); if (!string.IsNullOrEmpty(Description)) query.WhereRestrictionOn(p => p.Description) .IsInsensitiveLike(Description, MatchMode.Anywhere); if (MinimumPrice.HasValue) query.Where(p => p.UnitPrice >= MinimumPrice); if (MaximumPrice.HasValue) query.Where(p => p.UnitPrice <= MaximumPrice); switch (Sort) { case AdvancedProductSearchSort.PriceDesc: query = query.OrderBy(p => p.UnitPrice).Desc; break; case AdvancedProductSearchSort.Name: query = query.OrderBy(p => p.Name).Asc; break; default: query = query.OrderBy(p => p.UnitPrice).Asc; break; } } } How it works... In this recipe, we reuse the same repository and query infrastructure from the Using Named Queries in The Data Access Layer recipe. Our simple base class for ICriteria-based query objects splits query creation from query execution and handles transactions for us automatically. The example query we use is typical for an "advanced product search" use case. When a user fills in a particular field on the UI, the corresponding criterion is included in the query. When the user leaves the field blank, we ignore it. We check each search parameter for data. If the parameter has data, we add the appropriate criterion to the query. Finally, we set the order by clause based on the Sort parameter and return the completed ICriteria query. The query is executed inside a transaction, and the results are returned. There's more... For this type of query, typically, each query parameter would be set to the value of some field on your product search UI. On using this query, your code looks like this: var query = repository.CreateQuery<IAdvancedProductSearch>(); query.Name = searchCriteria.PartialName; query.Description = searchCriteria.PartialDescription; query.MinimumPrice = searchCriteria.MinimumPrice; query.MaximumPrice = searchCriteria.MaximumPrice; query.Sort = searchCriteria.Sort; var results = query.Execute();
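The article's introduction also promises paging for QueryOver-based query objects. One minimal way to sketch this, assuming 1-based page numbers and property names of my own choosing (they are not part of the recipe's published code), is to expose Page and PageSize on the query object and apply them in Execute:

public int Page { get; set; }        // 1-based page number; 0 or less means "no paging"
public int PageSize { get; set; }

protected override IEnumerable<Product> Execute(ICriteria criteria)
{
    if (Page > 0 && PageSize > 0)
    {
        // Skip the earlier pages and fetch only one page of rows
        criteria.SetFirstResult((Page - 1) * PageSize)
                .SetMaxResults(PageSize);
    }
    return criteria.List<Product>();
}

You would also surface Page and PageSize on the IAdvancedProductSearch interface so that callers working against the abstraction can set them.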


Getting Started with Windows Installer XML (WiX)

Packt
20 Oct 2010
10 min read
       Introducing Windows Installer XML In this section, we'll dive right in and talk about what WiX is, where to get it, and why you'd want to use it when building an installation package for your software. We'll follow up with a quick description of the WiX tools and the new project types made available in Visual Studio. What is WiX? Although it's the standard technology and has been around for years, creating a Windows Installer, or MSI package, has always been a challenging task. The package is actually a relational database that describes how the various components of an application should be unpacked and copied to the end user's computer. In the past you had two options: You could try to author the database yourself—a path that requires a thorough knowledge of the Windows Installer API. You could buy a commercial product like InstallShield to do it for you. These software products will take care of the details, but you'll forever be dependent on them. There will always be parts of the process that are hidden from you. WiX is relatively new to the scene, offering a route that exists somewhere in the middle. Abstracting away the low-level function calls while still allowing you to write much of the code by hand, WiX is an architecture for building an installer in ways that mere mortals can grasp. Best of all, it's free. As an open source product, it has quickly garnered a wide user base and a dedicated community of developers. Much of this has to do not only with its price tag but also with its simplicity. It can be authored in a simple text editor (such as Notepad) and compiled with the tools provided by WiX. As it's a flavor of XML, it can be read by humans, edited without expensive software, and lends itself to being stored in source control where it can be easily merged and compared. The examples in this article will show how to create a simple installer with WiX using Visual Studio. Is WiX for you? To answer the question "Is WiX for you?" we have to answer "What can WiX do for you?" It's fairly simple to copy files to an end user's computer. If that's all your product needs, then the Windows Installer technology might be overkill. However, there are many benefits to creating an installable package for your customers, some of which might be overlooked. Following is a list of features that you get when you author a Windows Installer package with WiX: All of your executable files can be packaged into one convenient bundle, simplifying deployment. Your software is automatically registered with Add/Remove Programs. Windows takes care of uninstalling all of the components that make up your product when the user chooses to do so. If files for your software are accidentlly removed, they can be replaced by right-clicking on the MSI file and selecting Repair. You can create different versions of your installer and detect which version has been installed. You can create patches to update only specific areas of your application. If something goes wrong while installing your software, the end user's computer can be rolled back to a previous state. You can create Wizard-style dialogs to guide the user through the installation. Many people today simply expect that your installer will have these features. Not having them could be seen as a real deficit. For example, what is a user supposed to do when they want to uninstall your product but can't find it in the Add/Remove Programs list and there isn't an uninstall shortcut? 
They're likely to remove files haphazardly and wonder why you didn't make things easy for them. Maybe you've already figured that Windows Installer is the way to go, but why WiX? One of my favorite reasons is that it gives you greater control over how things work. You get a much finer level of control over the development process. Commercial software that does this for you also produces an MSI file, but hides the details about how it was done. It's analogous to crafting a web site. You get much more control when you write the HTML yourself as opposed to using WYSIWYG software. Even though WiX gives you more control, it doesn't make things overly complex. You'll find that making a simple installer is very straightforward. For more complex projects, the parts can be split up into multiple XML source files to make it easier to work with. Going further, if your product is made up of multiple products that will be installed together as a suite, you can compile the different chunks into libraries that can be merged together into a single MSI. This allows each team to isolate and manage their part of the installation package. WiX is a stable technology, having been first released to the public in 2004, so you don't have to worry about it disappearing. It's also had a steady progression of version releases. The most current version is updated for Windows Installer 4.5 and the next release will include changes for Windows Installer 5.0, which is the version that comes preinstalled with Windows 7. These are just some of the reasons why you might choose to use WiX. Where can I get it? You can download WiX from the Codeplex site, http://wix.codeplex.com/, which has both stable releases and source code. The current release is version 3.0. Once you've downloaded the WiX installer package, double-click it to install it to your local hard drive. This installs all of the necessary files needed to build WiX projects. You'll also get the WiX SDK documentation and the settings for Visual Studio IntelliSense, highlighting and project templates. Version 3 supports Visual Studio 2005 and Visual Studio 2008, Standard edition or higher. WiX comes with the following tools:   Tool What it does Candle.exe Compiles WiX source files (.wxs) into intermediate object files (.wixobj). Light.exe Links and binds .wixobj files to create final .msi file. Also creates cabinet files and embeds streams in MSI database. Lit.exe Creates WiX libraries (.wixlib) that can be linked together by Light. Dark.exe Decompiles an MSI file into WiX code. Heat.exe Creates a WiX source file that specifies components from various inputs. Melt.exe Converts a "merge module" (.msm) into a component group in a WiX source file. Torch.exe Generates a transform file used to create a software patch. Smoke.exe Runs validation checks on an MSI or MSM file. Pyro.exe Creates an patch file (.msp) from .wixmsp and .wixmst files. WixCop.exe Converts version 2 WiX files to version 3. In order to use some of the functionality in WiX, you may need to download a more recent version of Windows Installer. You can check your current version by viewing the help file for msiexec.exe, which is the Windows Installer service. Go to your Start Menu and select Run, type cmd and then type msiexec /? at the prompt. This should bring up a window like the following: If you'd like to install a newer version of Windows Installer, you can get one from the Microsoft Download Center website. Go to: http://www.microsoft.com/downloads/en/default.aspx Search for Windows Installer. 
The current version for Windows XP, Vista, Server 2003, and Server 2008 is 4.5. Windows 7 and Windows Server 2008 R2 can support version 5.0. Each new version is backwards compatible and includes the features from earlier editions. Votive The WiX toolset provides files that update Visual Studio to provide new WiX IntelliSense, syntax highlighting, and project templates. Together these features, which are installed for you along with the other WiX tools, are called Votive. You must have Visual Studio 2005 or 2008 (Standard edition or higher). Votive won't work on the Express versions. If you're using Visual Studio 2005, you may need to install an additional component called ProjectAggregator2. Refer to the WiX site for more information: http://wix.sourceforge.net/votive.html After you've installed WiX, you should see a new category of project types in Visual Studio, labeled under the title WiX. To test it out, open Visual Studio and go to File New | Project|. Select the category WiX. There are six new project templates: WiX Project: Creates a Windows Installer package from one or more WiX source files WiX Library Project: Creates a .wixlib library C# Custom Action Project: Creates a .NET custom action in C# WiX Merge Module Project: Creates a merge module C++ Custom Action Project: Creates an unmanaged C++ custom action VB Custom Action Project: Creates a VB.NET custom action Using these templates is certainly easier than creating them on your own with a text editor. To start creating your own MSI installer, select the template WiX Project. This will create a new .wxs (WiX source file) for you to add XML markup to. Once we've added the necessary markup, you'll be able to build the solution by selecting Build Solution from the Build menu or by right-clicking on the project in the Solution Explorer and selecting Build. Visual Studio will take care of calling candle.exe and light.exe to compile and link your project files. If you right-click on your WiX project in the Solution Explorer and select Properties, you'll see several screens where you can tweak the build process. One thing you'll want to do is set the amount of information that you'd like to see when compiling and linking the project and how non-critical messages are treated. Refer to the following screenshot: Here, we're selecting the level of messages that we'd like to see. To see all warnings and messages, set the Warning Level to Pedantic. You can also check the Verbose output checkbox to get even more information. Checking Treat warnings as errors will cause warning messages that normally would not stop the build to be treated as fatal errors. You can also choose to suppress certain warnings. You'll need to know the specific warning message number, though. If you get a build-time warning, you'll see the warning message, but not the number. One way to get it is to open the WiX source code (available at http://wix.codeplex.com/SourceControl/list/changesets) and view the messages.xml file in the Wix solution. Search the file for the warning and from there you'll see its number. Note that you can suppress warnings but not errors. Another feature of WiX is its ability to run validity checks on the MSI package. Windows Installer uses a suite of tests called Internal Consistency Evaluators (ICEs) for this. These checks ensure that the database as a whole makes sense and that the keys on each table join correctly. Through Votive, you can choose to suppress specific ICE tests. 
Use the Tools Setting page of the project's properties as shown in the following screenshot: In this example, ICE test 102 is being suppressed. You can specify more than one test by separating them with semicolons. To find a full list of ICE tests, go to MSDN's ICE Reference web page at: http://msdn.microsoft.com/en-us/library/aa369206%28VS.85%29.aspx The Tool Settings screen also gives you the ability to add compiler or linker command-line flags. Simply add them to the text boxes at the bottom of the screen.
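If you prefer to build outside Visual Studio, the same candle and light tools can be driven from a command prompt. The file names below are placeholders; -ext pulls in the UI extension and -sice suppresses an individual ICE test, mirroring the Votive settings described above.

candle.exe Product.wxs
light.exe Product.wixobj -ext WixUIExtension -sice:ICE102 -out Product.msi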

Fixing Bottlenecks for Better Database Access in ASP.Net

Packt
18 Oct 2010
15 min read
  ASP.NET Site Performance Secrets Simple and proven techniques to quickly speed up your ASP.NET website Speed up your ASP.NET website by identifying performance bottlenecks that hold back your site's performance and fixing them Tips and tricks for writing faster code and pinpointing those areas in the code that matter most, thus saving time and energy Drastically reduce page load times Configure and improve compression – the single most important way to improve your site's performance Written in a simple problem-solving manner – with a practical hands-on approach and just the right amount of theory you need to make sense of it all           Read more about this book       (For more resources on ASP.Net, see here.) The reader can benefit from the previous article on Pinpointing bottlenecks for better Database Access in ASP.Net. Now that you have pinpointed the bottlenecks to prioritize, skip to the appropriate subsection to find out how to fix those bottlenecks. Missing indexes Just as using an index in a book to find a particular bit of information is often much faster than reading all pages, SQL Server indexes can make finding a particular row in a table dramatically faster by cutting down the number of read operations. This section first discusses the two types of indexes supported by SQL Server: clustered and non-clustered. It also goes into included columns, a feature of nonclustered indexes. After that, we'll look at when to use each type of index. Clustered index Take the following table (missingindexes.sql in the downloaded code bundle): CREATE TABLE [dbo].[Book]( [BookId] [int] IDENTITY(1,1) NOT NULL, [Title] [nvarchar](50) NULL, [Author] [nvarchar](50) NULL, [Price] [decimal](4, 2) NULL) Because this table has no clustered index, it is called a heap table. Its records are unordered, and to get all books with a given title, you have to read all the records. It has a very simple structure: Let's see how long it takes to locate a record in this table. That way, we can compare against the performance of a table with an index. To do that in a meaningful way, first insert a million records into the table (code to do this is in missingindexes.sql in the downloaded code bundle). Tell SQL Server to show I/O and timing details of each query we run: SET STATISTICS IO ONSET STATISTICS TIME ON Also, before each query, flush the SQL Server memory cache: CHECKPOINTDBCC DROPCLEANBUFFERS Now, run the query below with a million records in the Book table: SELECT Title, Author, Price FROM dbo.Book WHERE BookId = 5000 The results on my machine are: reads: 9564, CPU time: 109 ms, elapsed time: 808 ms. SQL Server stores all data in 8-KB pages. This shows that it read 9564 pages, that is, the entire table. Now, add a clustered index: ALTER TABLE BookADD CONSTRAINT [PK_Book] PRIMARY KEY CLUSTERED ([BookId] ASC) This puts the index on column BookId, making WHERE and JOIN statements on BookId faster. It sorts the table by BookId and adds a structure called a B-tree to speed up access: BookId is now used the same way as a page number in a book. Because the pages in a book are sorted by page number, finding a page by page number is very fast. Now, run the same query again to see the difference: SELECT Title, Author, Price FROM dbo.Book WHERE BookId = 5000 The results are: reads: 2, CPU time: 0 ms, elapsed time: 32 ms. The number of reads of 8-KB pages has gone from 9564 to 2, CPU time from 109ms to less than 1 ms, and elapsed time from 808 ms to 32 ms. That's a dramatic improvement. 
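The measurement steps above are worth keeping as one small reusable script; this simply gathers the article's own commands in one place, with the query under test swapped in as needed.

-- Show I/O and timing statistics for each statement
SET STATISTICS IO ON
SET STATISTICS TIME ON

-- Flush dirty pages and empty the buffer cache so reads come from disk
CHECKPOINT
DBCC DROPCLEANBUFFERS

-- Query under test
SELECT Title, Author, Price FROM dbo.Book WHERE BookId = 5000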
Non-clustered index Now let's select by Title instead of BookId: SELECT Title, Author FROM dbo.Book WHERE Title = 'Don Quixote' These results are pretty similar to what we got with the heap table, which is no wonder, seeing that there is no index on Title. The solution obviously is to put an index on Title. However, because a clustered index involves sorting the table records on the index field, there can be only one clustered index. We've already sorted on BookId, and the table can't be sorted on Title at the same time. The solution is to create a non-clustered index. This is essentially a duplicate of the table records, this time sorted by Title. To save space, SQL Server leaves out the other columns, such as Author and Price. You can have up to 249 non-clustered indexes on a table. Because we still want to access those other columns in queries though, we need a way to get from the non-clustered index records to the actual table records. The solution is to add the BookId to the non-clustered records. Because BookId has the clustered index, once we have found a BookId via the non-clustered index, we can use the clustered index to get to the actual table record. This second step is called a key lookup. Why go through the clustered index? Why not put the physical address of the table record in the non-clustered index record? The answer is that when you update a table record, it may get bigger, causing SQL Server to move subsequent records to make space. If non-clustered indexes contained physical addresses, they would all have to be updated when this happens. It's a tradeoff between slightly slower reads and much slower updates. If there is no clustered index or if it is not unique, then non-clustered index records do have the physical address. To see what a non-clustered index will do for us, first create it as follows: CREATE NONCLUSTERED INDEX [IX_Title] ON [dbo].[Book]([Title] ASC) Now, run the same query again: SELECT Title, Author FROM dbo.Book WHERE Title = 'Don Quixote' The results are: reads: 4, CPU time: 0 ms, elapsed time: 46 ms. The number of reads has gone from 9146 to 4, CPU time from 156 ms to less than 1 ms, and elapsed time from 1653 ms to 46 ms. This means that having a non-clustered index is not quite as good as having a clustered index, but still dramatically better than having no index at all. Included columns You can squeeze a bit more performance out of a non-clustered index by cutting out the key lookup—the second step where SQL Server uses the clustered index to find the actual record. Have another look at the test query—it simply returns Title and Author. Title is already present in the non-clustered index record. If you were to add Author to the non-clustered index record as well, there would be no longer any need for SQL Server to access the table record, enabling it to skip the key lookup. It would look similar to the following: This can be done by including Author in the non-clustered index: CREATE NONCLUSTERED INDEX [IX_Title] ON [dbo].[Book]([Title] ASC)INCLUDE(Author)WITH drop_existing Now, run the query again: SELECT Title, Author FROM dbo.Book WHERE Title = 'Don Quixote' The results are: reads: 2, CPU time: 0 ms, elapsed time: 26 ms. The number of reads has gone from 4 to 2, and elapsed time from 46 ms to 26 ms; that's almost 50 percent improvement. In absolute terms, the gain isn't all that great, but for a query that is executed very frequently, this may be worthwhile. 
Don't overdo this—the bigger you make the non-clustered index records, the fewer fit on an 8KB page, forcing SQL Server to read more pages. Selecting columns to give an index Because indexes do create overhead, you want to carefully select the columns to give indexes. Before starting the selection process, keep the following in mind: Putting a Primary Key on a column by default gives it a clustered index (unless you override the default). So, you may already have many columns in your database with an index. As you'll see later in the When to use a clustered index section, putting the clustered index on the ID column of a record is almost always a good idea. Putting an index on a table column affects all queries that use that table. Don't focus on just one query. Before introducing an index on your live database, test the index in development to make sure it really does improve performance. Let's look at when and when not to use an index, and when to use a clustered index. When to use an index You can follow this decision process when selecting columns to give an index: Start by looking at the most expensive queries. Look at putting an index on at least one column involved in every JOIN. Consider columns used in ORDER BY and GROUP BY clauses. If there is an index on such a column, than SQL Server doesn't have to sort the column again because the index already keeps the column values in sorted order. Consider columns used in WHERE clauses, especially if the WHERE will select a small number of records. However, keep in mind the following: A WHERE clause that applies a function to the column value can't use an index on that column, because the output of the function is not in the index. Take for example the following: SELECT Title, Author FROM dbo.Book WHERE LEFT(Title, 3) = 'Don' Putting an index on the Title column won't make this query any faster. Likewise, SQL Server can't use an index if you use LIKE in a WHERE clause with a wild card at the start of the search string, as in the following: SELECT Title, Author FROM dbo.Book WHERE Title LIKE '%Quixote' However, if the search string starts with constant text instead of a wild card, an index can be used: SELECT Title, Author FROM dbo.Book WHERE Title LIKE 'Don%' Consider columns that have a UNIQUE constraint. Having an index on the column makes it easier for SQL Server to check whether a new value would not be unique. The MIN and MAX functions benefit from working on a column with an index. Because the values are sorted, there is no need to go through the entire table to find the minimum or maximum. Think twice before putting an index on a column that takes a lot of space. If you use a non-clustered index, the column values will be duplicated in the index. If you use a clustered index, the column values will be used in all nonclustered indexes. The increased sizes of the index records means fewer fit in each 8-KB page, forcing SQL Server to read more pages. The same applies to including columns in non-clustered indexes. When not to use an index Having too many indexes can actually hurt performance. Here are the main reasons not to use an index on a column: The column gets updated often The column has low specificity, meaning it has lots of duplicate values Let's look at each reason in turn. Column updated often When you update a column without an index, SQL Server needs to write one 8KB page to disk, provided there are no page splits. 
However, if the column has a non-clustered index, or if it is included in a nonclustered index, SQL Server needs to update the index as well, so it has to write at least one additional page to disk. It also has to update the B-tree structure used in the index, potentially leading to more page writes. If you update a column with a clustered index, the non-clustered index records that use the old value need to be updated too, because the clustered index key is used in the non-clustered indexes to navigate to the actual table records. Secondly, remember that the table records themselves are sorted based on the clustered index. If the update causes the sort order of a record to change, that may mean more writes. Finally, the clustered index needs to keep its B-tree up-to-date. This doesn't mean you cannot have indexes on columns that get updated; just be aware that indexes slow down updates. Test the effect of any indexes you add. If an index is critical but rarely used, for example only for overnight report generation, consider dropping the index and recreating it when it is needed. Low specificity Even if there is an index on a column, the query optimizer won't always use it. Remember, each time SQL Server accesses a record via an index, it has to go through the index structure. In the case of a non-clustered index, it may have to do a key lookup as well. If you're selecting all books with price $20, and lots of books happen to have that price, than it might be quicker to simply read all book records rather than going through an index over and over again. In that case, it is said that the $20 price has low specificity. You can use a simple query to determine the average selectivity of the values in a column. For example, to find the average selectivity of the Price column in the Book table, use (missingindexes.sql in downloaded code bundle): SELECT COUNT(DISTINCT Price) AS 'Unique prices', COUNT(*) AS 'Number of rows', CAST((100 * COUNT(DISTINCT Price) / CAST(COUNT(*) AS REAL)) AS nvarchar(10)) + '%' AS 'Selectivity'FROM Book If every book has a unique price, selectivity will be 100 percent. However, if half the books cost $20 and the other half $30, then average selectivity will be only 50 percent. If the selectivity is 85 percent or less, an index is likely to incur more overhead than it would save. Some prices may occur a lot more often than other prices. To see the specificity of each individual price, you would run (missingindexes.sql in downloaded code bundle): DECLARE @c realSELECT @c = CAST(COUNT(*) AS real) FROM BookSELECT Price, COUNT(BookId) AS 'Number of rows', CAST((1 - (100 * COUNT(BookId) / @c)) AS nvarchar(20)) + '%' AS 'Selectivity'FROM BookGROUP BY PriceORDER BY COUNT(BookId) The query optimizer is unlikely to use a non-clustered index for a price whose specificity is below 85 percent. It figures out the specificity of each price by keeping statistics on the values in the table. When to use a clustered index You saw that there are two types of indexes, clustered and non-clustered, and that you can have only one clustered index. How do you determine the lucky column that will have the clustered index? To work this out, let's first look at the characteristics of a clustered index against a non-clustered index: Characteristic Clustered index compared to a non-clustered index Reading Faster: Because there is no need for key lookups. No difference if all the required columns are included in the non-clustered index. 
Updating Slower: Not only the table record, but also all non-clustered index records potentially need to be updated. Inserting/Deleting Faster: With a non-clustered index, inserting a new record in the table means inserting a new record in the non-clustered index as well. With a clustered index, the table is effectively part of the index, so there is no need for the second insert. The same goes for deleting a record. On the other hand, when the record is inserted at any place in the table but the very end, the insert may cause a page split where half the content of the 8-KB page is moved to another page. Having a page split in a non-clustered index is less likely, because its records are smaller (they normally don't have all columns that a table record has), so more records fit on a page. When the record is inserted at the end of the table, there won't be a page split. Column Size Needs to be kept short and fast - Every non-clustered index contains a clustered index value, to do the key lookup. Every access via a non-clustered index has to use that value, so you want it to be fast for the server to process. That makes a column of type int a lot better to put a clustered index on than a column of type nvarchar(50).   If only one column requires an index, this comparison shows that you'll probably want to give it the clustered index rather than a non-clustered index. If multiple columns need indexes, you'll probably want to put the clustered index on the primary key column: Reading: The primary key tends to be involved in a lot of JOIN clauses, making read performance important. Updating: The primary key should never or rarely get updated, because that would mean changing referring foreign keys as well. Inserting/Deleting: Most often you'll make the primary key an IDENTITY column, so each new record is assigned a unique, ever increasing number. This means that if you put the clustered index on the primary key, new records are always added at the end of the table. When a record is added at the end of a table with a clustered index and there is no space in the current page, the new record goes into a new page but the rest of the data in the current page stays in the page. In other words, there is no expensive page split. Size: Most often, the primary key is of type int, which is short and fast. Indeed, when you set the primary key on a column in the SSMS table designer, SSMS gives that column the clustered index by default, unless another column already has the clustered index.
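Before dropping an index you suspect is rarely used, you can check how often it has actually been read since the last SQL Server restart. The following is a generic sketch using the index usage DMV; replace dbo.Book with your own table and run it in the database you are tuning.

-- Reads (seeks, scans, lookups) versus writes per index since the last restart
SELECT i.name AS IndexName,
       s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
FROM sys.indexes i
LEFT JOIN sys.dm_db_index_usage_stats s
       ON s.object_id = i.object_id
      AND s.index_id = i.index_id
      AND s.database_id = DB_ID()
WHERE i.object_id = OBJECT_ID('dbo.Book')
ORDER BY ISNULL(s.user_seeks, 0) + ISNULL(s.user_scans, 0) + ISNULL(s.user_lookups, 0)

Indexes that show many user_updates but few reads are candidates for dropping, subject to the caveats above.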


ASP.Net Site Performance: Speeding up Database Access

Packt
18 Oct 2010
8 min read
  ASP.NET Site Performance Secrets Simple and proven techniques to quickly speed up your ASP.NET website Speed up your ASP.NET website by identifying performance bottlenecks that hold back your site's performance and fixing them Tips and tricks for writing faster code and pinpointing those areas in the code that matter most, thus saving time and energy Drastically reduce page load times Configure and improve compression – the single most important way to improve your site's performance Written in a simple problem-solving manner – with a practical hands-on approach and just the right amount of theory you need to make sense of it all (For more resources on ASP.Net, see here.) The reader can benefit from the previous articles on Pinpointing bottlenecks for better Database Access in ASP.Net and Fixing bottlenecks for better Database Access in ASP.Net.

Locking

In this section, you'll see how to determine which queries are involved in excessive locking delays, and how to prevent those delays from happening.

Gathering detailed locking information

You can find out which queries are involved in excessive locking delays by tracing the event "Blocked process report" in SQL Server Profiler. This event fires when the lock wait time for a query exceeds the "blocked process threshold". To set this threshold to, for example, 30 seconds, run the following lines in a query window in SSMS (locking.sql in the downloaded code bundle):

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'blocked process threshold', 30
RECONFIGURE

Then, start the trace in Profiler: Start SQL Profiler. Click on Start | Programs | Microsoft SQL Server 2008 | Performance Tools | SQL Server Profiler. In SQL Profiler, click on File | New Trace. Click on the Events Selection tab. Select the Show all events checkbox to see all events. Also select Show all columns to see all the data columns. In the main window, expand Errors and Warnings and select the Blocked process report event. Make sure the checkbox in the TextData column is checked—scroll horizontally if needed to find it. If you need to investigate deadlocks, also expand Locks and select the Deadlock graph event. To get additional information about deadlocks, have SQL Server write information about each deadlock event to its error log, by executing the following from an SSMS query window:

DBCC TRACEON(1222,-1)

Uncheck all the other events, unless you are interested in them. Click on Run to start the trace. Save the template, so that you don't have to recreate it the next time. Click on File | Save As | Trace Template. Fill in a descriptive name and click on OK. Next time you create a new trace by clicking on File | New Trace, you can retrieve the template from the Use the template drop-down. Once you have captured a representative sample, click File | Save to save the trace to a trace file for later analysis. You can load a trace file by clicking on File | Open. When you click a Blocked process report event in Profiler, you'll find information about the event in the lower pane, including the blocking query and the blocked query. You can get details about Deadlock graph events the same way. To check the SQL Server error log for deadlock events: In SSMS, expand the database server, expand Management, and expand SQL Server Logs. Then double-click on a log. In the Log File Viewer, click on Search near the top of the window and search for "deadlock-list". In the lines that chronologically come after the deadlock-list event, you'll find much more information about the queries involved in the deadlock.

Reducing blocking

Now that you have identified the queries involved in locking delays, it's time to reduce those delays. The most effective way to do this is to reduce the length of time locks are held as follows: Optimize queries. The less time your queries take, the less time they hold locks. Use stored procedures rather than ad hoc queries. This reduces time spent compiling execution plans and time spent sending individual queries over the network. If you really have to use cursors, commit updates frequently. Cursor processing is much slower than set-based processing. Do not process lengthy operations while locks are held, such as sending e-mails. Do not wait for user input while keeping a transaction open. Instead, use optimistic locking, as described in: Optimistic Locking in SQL Server using the ROWVERSION Data Type http://www.mssqltips.com/tip.asp?tip=1501

A second way to reduce lock wait times is to reduce the number of resources being locked: Do not put a clustered index on frequently updated columns. This requires a lock on both the clustered index and all non-clustered indexes, because their row locator contains the value you are updating. Consider including a column in a non-clustered index. This would prevent a query from having to read the table record, so it won't block another query that needs to update an unrelated column in the same record. Consider row versioning. This SQL Server feature prevents queries that read a table row from blocking queries that update the same row and vice versa. Queries that need to update the same row still block each other. Row versioning works by storing rows in a temporary area (in tempdb) before they are updated, so that reading queries can access the stored version while the update is taking place. This does create an overhead in maintaining the row versions—test this solution before taking it live. Also, in case you set the isolation level of transactions, row versioning only works with the Read Committed isolation mode, which is the default isolation mode. To implement row versioning, set the READ_COMMITTED_SNAPSHOT option as shown in the following code (locking.sql in the downloaded code bundle). When doing this, you can have only one connection open—the one used to set the option. You can make that happen by switching the database to single user mode; warn your users first. Be careful when applying this to a production database, because your website won't be able to connect to the database while you are carrying out this operation.

ALTER DATABASE mydatabase SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
ALTER DATABASE mydatabase SET READ_COMMITTED_SNAPSHOT ON;
ALTER DATABASE mydatabase SET MULTI_USER;

To check whether row versioning is in use for a database, run:

select is_read_committed_snapshot_on from sys.databases where name='mydatabase'

Finally, you can set a lock timeout. For example, to abort statements that have been waiting for over five seconds (or 5000 milliseconds), issue the following command:

SET LOCK_TIMEOUT 5000

Use -1 to wait indefinitely. Use 0 to not wait at all.

Reducing deadlocks

Deadlock is a situation where two transactions are waiting for each other to release a lock. In a typical case, transaction 1 has a lock on resource A and is trying to get a lock on resource B, while transaction 2 has a lock on resource B and is trying to get a lock on resource A. Neither transaction can now move forward, as shown below: One way to reduce deadlocks is to reduce lock delays in general, as shown in the last section. That reduces the time window in which deadlocks can occur. A second way is suggested by the diagram—always lock resources in the same order. If, as shown in the diagram, you get transaction 2 to lock the resources in the same order as transaction 1 (first A, then B), then transaction 2 won't lock resource B before it starts waiting for resource A. Hence, it doesn't block transaction 1. Finally, watch out for deadlocks caused by the use of HOLDLOCK or the Repeatable Read or Serializable isolation levels. Take for example the following code:

SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
BEGIN TRAN
SELECT Title FROM dbo.Book
UPDATE dbo.Book SET Author='Charles Dickens' WHERE Title='Oliver Twist'
COMMIT

Imagine two transactions running this code at the same time. Both acquire a Select lock on the rows in the Book table when they execute the SELECT. They hold onto the lock because of the Repeatable Read isolation level. Now, both try to acquire an Update lock on a row in the Book table to execute the UPDATE. Each transaction is now blocked by the Select lock the other transaction is still holding. To prevent this from happening, use the UPDLOCK hint on the SELECT statement. This causes the SELECT to acquire an Update lock, so that only one transaction can execute the SELECT. The transaction that did get the lock can then execute its UPDATE and free the locks, after which the other transaction comes through. The code is as follows:

SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
BEGIN TRAN
SELECT Title FROM dbo.Book WITH(UPDLOCK)
UPDATE dbo.Book SET Author='Charles Dickens' WHERE Title='Oliver Twist'
COMMIT
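On the application side, a common complement to these measures is to retry a transaction that SQL Server has chosen as a deadlock victim, which it reports as error 1205. The following is a general C# sketch rather than code from the book; the dataAccess delegate is assumed to contain the whole transaction so that it can safely be rerun from the start.

using System;
using System.Data.SqlClient;
using System.Threading;

public static class DeadlockRetry
{
    // Runs the given data access action, retrying when it is picked as a deadlock victim (error 1205)
    public static void Execute(Action dataAccess)
    {
        const int maxAttempts = 3;
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                dataAccess();
                return;
            }
            catch (SqlException ex)
            {
                if (ex.Number != 1205 || attempt >= maxAttempts)
                    throw;
                // Brief, growing pause before retrying the whole transaction
                Thread.Sleep(100 * attempt);
            }
        }
    }
}

// Usage: DeadlockRetry.Execute(() => UpdateBookAuthor());   // UpdateBookAuthor is a stand-in for your own call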


Pinpointing Bottlenecks for Better Database Access in ASP.Net

Packt
18 Oct 2010
7 min read
ASP.NET Site Performance Secrets Simple and proven techniques to quickly speed up your ASP.NET website Speed up your ASP.NET website by identifying performance bottlenecks that hold back your site's performance and fixing them Tips and tricks for writing faster code and pinpointing those areas in the code that matter most, thus saving time and energy Drastically reduce page load times Configure and improve compression – the single most important way to improve your site's performance Written in a simple problem-solving manner – with a practical hands-on approach and just the right amount of theory you need to make sense of it all

In this section, we'll identify the biggest bottlenecks.

Missing indexes and expensive queries

You can greatly improve the performance of your queries by reducing the number of reads executed by those queries. The more reads you execute, the more you potentially stress the disk, CPU, and memory. Secondly, a query reading a resource normally blocks another query from updating that resource. If the updating query has to wait while holding locks itself, it may then delay a chain of other queries. Finally, unless the entire database fits in memory, each time data is read from disk, other data is evicted from memory. If that data is needed later, it then needs to be read from the disk again. The most effective way to reduce the number of reads is to create sufficient indexes on your tables. Just as an index in a book, an SQL Server index allows a query to go straight to the table row(s) it needs, rather than having to scan the entire table. Indexes are not a cure-all though—they do incur overhead and slow down updates, so they need to be used wisely. In this section, we'll see: How to identify missing indexes that would reduce the number of reads in the database How to identify those queries that create the greatest strain, either because they are used very often, or because they are just plain expensive How to identify superfluous indexes that take resources but provide little benefit

Missing indexes

SQL Server allows you to put indexes on table columns, to speed up WHERE and JOIN statements on those columns. When the query optimizer optimizes a query, it stores information about those indexes it would have liked to have used, but weren't available. You can access this information with the Dynamic Management View (DMV) dm_db_missing_index_details (indexesqueries.sql in the code bundle):

select d.name AS DatabaseName, mid.* from sys.dm_db_missing_index_details mid join sys.databases d ON mid.database_id=d.database_id

The most important columns returned by this query are:

DatabaseName: Name of the database this row relates to.
equality_columns: Comma-separated list of columns used with the equals operator, such as: column=value
inequality_columns: Comma-separated list of columns used with a comparison operator other than the equals operator, such as: column>value
included_columns: Comma-separated list of columns that could profitably be included in an index.
statement: Name of the table where the index is missing.

This information is not persistent—you will lose it after a server restart. An alternative is to use Database Engine Tuning Advisor, which is included with SQL Server 2008 (except for the Express version). This tool analyzes a trace of database operations and identifies an optimal set of indexes that takes the requirements of all queries into account. It even gives you the SQL statements needed to create the missing indexes it identified.
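The details DMV on its own does not tell you how valuable each suggestion is. Joining it to its companion group and group-stats DMVs gives a rough estimated impact, which helps you decide where to start. This query is a common pattern rather than part of the article's code bundle.

SELECT TOP 20
       d.name AS DatabaseName,
       mid.statement AS TableName,
       mid.equality_columns, mid.inequality_columns, mid.included_columns,
       migs.user_seeks,
       migs.avg_total_user_cost * migs.avg_user_impact * migs.user_seeks AS EstimatedImpact
FROM sys.dm_db_missing_index_details mid
JOIN sys.dm_db_missing_index_groups mig ON mig.index_handle = mid.index_handle
JOIN sys.dm_db_missing_index_group_stats migs ON migs.group_handle = mig.index_group_handle
JOIN sys.databases d ON d.database_id = mid.database_id
ORDER BY EstimatedImpact DESC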
The first step is to get a trace of database operations during a representative period. If your database is the busiest during business hours, then that is probably when you want to run the trace: Start SQL Profiler. Click on Start | Programs | Microsoft SQL Server 2008 | Performance Tools | SQL Server Profiler. In SQL Profiler, click on File | New Trace. Click on the Events Selection tab. You want to minimize the number of events captured to reduce the load on the server. Deselect every event, except SQL:BatchCompleted and RPC:Completed. It is those events that contain resource information for each batch, and so are used by Database Engine Tuning Advisor to analyze the workload. Make sure that the TextData column is selected for both the events. To capture events related only to your database, click on the Column Filters button. Click on DatabaseName in the left column, expand Like in the righthand pane, and enter your database name. Click on OK. (Move the mouse over the image to enlarge.) To further cut down the trace and only trace calls from your website, put a filter on ApplicationName, so only events where this equals ".Net SqlClient Data Provider" will be recorded. Click on the Run button to start the trace. You will see batch completions scrolling through the window. At any stage, you can click on File | Save or press Ctrl + S. to save the trace to a file. Save the template so that you don't have to recreate it next time. Click on File | Save As | Trace Template. Fill in a descriptive name and click on OK. Next time you create a new trace by clicking on File | New Trace, you can retrieve the template from the Use the template drop-down.Sending all these events to your screen takes a lot of server resources. You probably won't be looking at it all day anyway. The solution is to save your trace as a script and then use that to run a background trace. You'll also be able to reuse the script later on. Click on File | Export | Script Trace Definition | For SQL Server 2005 – 2008. Save the file with a .sql extension. You can now close SQL Server Profiler, which will also stop the trace. In SQL Server Management Studio, open the .sql file you just created. Find the string InsertFileNameHere and replace it with the full path of the file where you want the log stored. Leave off the extension; the script will set it to .trc. Press Ctrl + S to save the .sql file. To start the trace, press F5 to run the .sql file. It will tell you the trace ID of this trace. To see the status of this trace and any other traces in the system, execute the following command in a query window: select * from ::fn_trace_getinfo(default) Find the row with property 5 for your trace ID. If the value column in that row is 1, your trace is running. The trace with trace ID 1 is a system trace. To stop the trace after it has captured a representative period, assuming your trace ID is two, run the following command: exec sp_trace_setstatus 2,0 To restart it, run: exec sp_trace_setstatus 2,1 To stop and close it so that you can access the trace file, run: exec sp_trace_setstatus 2,0 exec sp_trace_setstatus 2,2 Now, run Database Engine Tuning Advisor: Start SQL Profiler. Click on Start | Programs | Microsoft SQL Server 2008 | Performance Tools | Database Engine Tuning Advisor. In the Workload area, select your trace file. In the Database for workload analysis drop-down, select the first database you want to be analyzed. Under Select databases and tables to tune, select the databases for which you want index recommendations. 
Especially with a big trace, Database Engine Tuning Advisor may take a long time to do its analysis. On the Tuning Options tab, you can tell it when to stop analyzing. This is just a limit; if it is done sooner, it will produce results as soon as it is done. To start the analysis, click on the Start Analysis button in the toolbar. Keep in mind that Database Engine Tuning Advisor is just a computer program. Consider its recommendations, but make up your own mind. Be sure to give it a trace with a representative workload, otherwise its recommendations may make things worse rather than better. For example, if you provide a trace that was captured at night when you process few transactions but execute lots of reporting jobs, its advice is going to be skewed towards optimizing reporting, not transactions.
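Besides feeding the trace file to Database Engine Tuning Advisor, you can also load it into a table and query it yourself, for example to find the batches with the most reads. The path below is a placeholder for wherever you saved your trace.

-- Load the trace file into a table for ad hoc analysis
SELECT * INTO TraceResults
FROM ::fn_trace_gettable('C:\traces\mytrace.trc', default)

-- Ten most expensive batches by logical reads
SELECT TOP 10 TextData, Duration, Reads, CPU
FROM TraceResults
ORDER BY Reads DESC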

NHibernate 3.0: Working with the Data Access Layer

Packt
15 Oct 2010
3 min read
Transaction Auto-wrapping for the data access layer This article by Jason Dentler, author of NHibernate 3.0 Cookbook, shows how we can set up the data access layer to wrap all data access in NHibernate transactions automatically. Getting ready Complete the Eg.Core model and mappings. Download code (ch:1) How to do it... Create a new class library named Eg.Core.Data. Add a reference to NHibernate.dll and the Eg.Core project. Add the following two DAO classes: public class DataAccessObject<T, TId> where T : Entity<TId> { private readonly ISessionFactory _sessionFactory; private ISession session { get { return _sessionFactory.GetCurrentSession(); } } public DataAccessObject(ISessionFactory sessionFactory) { _sessionFactory = sessionFactory; } public T Get(TId id) { return Transact(() => session.Get<T>(id)); } public T Load(TId id) { return Transact(() => session.Load<T>(id)); } public void Save(T entity) { Transact(() => session.SaveOrUpdate(entity)); } public void Delete(T entity) { Transact(() => session.Delete(entity)); } private TResult Transact<TResult>(Func<TResult> func) { if (!session.Transaction.IsActive) { // Wrap in transaction TResult result; using (var tx = session.BeginTransaction()) { result = func.Invoke(); tx.Commit(); } return result; } // Don't wrap; return func.Invoke(); } private void Transact(Action action) { Transact<bool>(() => { action.Invoke(); return false; }); } } public class DataAccessObject<T> : DataAccessObject<T, Guid> where T : Entity { } How it works... NHibernate requires that all data access occurs inside an NHibernate transaction and this can be easily accomplished with AOP. Remember, the ambient transaction created by TransactionScope is not a substitute for an NHibernate transaction. This recipe shows a more explicit approach. To ensure that at least all our data access layer calls are wrapped in transactions, we create a private Transact function that accepts a delegate, consisting of some data access methods, such as session.Save or session.Get. This Transact function first checks if the session has an active transaction. If it does, Transact simply invokes the delegate. If it doesn't, it creates an explicit NHibernate transaction, then invokes the delegate, and finally commits the transaction. If the data access method throws an exception, the transaction will be rolled back automatically as the exception bubbles up through the using block. There's more... This transactional auto-wrapping can also be set up using SessionWrapper from the unofficial NHibernate AddIns project at http://code.google.com/p/unhaddins. This class wraps a standard NHibernate session. By default, it will throw an exception when the session is used without an NHibernate transaction. However, it can be configured to check for and create a transaction automatically, much in the same way I've shown you here. See also Setting up an NHibernate repository
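Because the DAO resolves its session through GetCurrentSession, a current session context must be configured and a session bound to it before the DAO is used. The following is a rough usage sketch, assuming current_session_context_class has been set (for example to "web") in your NHibernate configuration and that Product is one of your Eg.Core entities with a Name property; none of this wiring is part of the recipe above.

// At the start of the request (for example in Application_BeginRequest)
var session = sessionFactory.OpenSession();
NHibernate.Context.CurrentSessionContext.Bind(session);

// Using the DAO; Get and Save are wrapped in transactions automatically
var productDao = new DataAccessObject<Product>(sessionFactory);
var product = productDao.Get(productId);   // productId is a Guid obtained elsewhere
product.Name = "New name";
productDao.Save(product);

// At the end of the request
var bound = NHibernate.Context.CurrentSessionContext.Unbind(sessionFactory);
bound.Dispose();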

article-image-nhibernate-30-using-named-queries-data-access-layer
Packt
15 Oct 2010
4 min read
Save for later

NHibernate 3.0: Using named queries in the data access layer

Getting ready

1. Download the latest release of the Common Service Locator from http://commonservicelocator.codeplex.com, and extract Microsoft.Practices.ServiceLocation.dll to your solution's libs folder.
2. Complete the previous recipe, Setting up an NHibernate repository.
3. Following the Fast testing with SQLite in-memory database recipe in the previous article, create a new NHibernate test project named Eg.Core.Data.Impl.Test.
4. Include the Eg.Core.Data.Impl assembly as an additional mapping assembly in your test project's App.Config with the following xml:

    <mapping assembly="Eg.Core.Data.Impl"/>

How to do it...

1. In the Eg.Core.Data project, add a folder for the Queries namespace.
2. Add the following IQuery interfaces:

    public interface IQuery { }

    public interface IQuery<TResult> : IQuery
    {
        TResult Execute();
    }

3. Add the following IQueryFactory interface:

    public interface IQueryFactory
    {
        TQuery CreateQuery<TQuery>() where TQuery : IQuery;
    }

4. Change the IRepository interface to implement the IQueryFactory interface, as shown in the following code:

    public interface IRepository<T> : IEnumerable<T>, IQueryFactory
        where T : Entity
    {
        void Add(T item);
        bool Contains(T item);
        int Count { get; }
        bool Remove(T item);
    }

5. In the Eg.Core.Data.Impl project, change the NHibernateRepository constructor and add the _queryFactory field, as shown in the following code:

    private readonly IQueryFactory _queryFactory;

    public NHibernateRepository(ISessionFactory sessionFactory,
                                IQueryFactory queryFactory)
        : base(sessionFactory)
    {
        _queryFactory = queryFactory;
    }

6. Add the following method to NHibernateRepository:

    public TQuery CreateQuery<TQuery>() where TQuery : IQuery
    {
        return _queryFactory.CreateQuery<TQuery>();
    }

7. In the Eg.Core.Data.Impl project, add a folder for the Queries namespace.
8. To the Eg.Core.Data.Impl project, add a reference to Microsoft.Practices.ServiceLocation.dll.
9. To the Queries namespace, add this QueryFactory class:

    public class QueryFactory : IQueryFactory
    {
        private readonly IServiceLocator _serviceLocator;

        public QueryFactory(IServiceLocator serviceLocator)
        {
            _serviceLocator = serviceLocator;
        }

        public TQuery CreateQuery<TQuery>() where TQuery : IQuery
        {
            return _serviceLocator.GetInstance<TQuery>();
        }
    }

10. Add the following NHibernateQueryBase class:

    public abstract class NHibernateQueryBase<TResult>
        : NHibernateBase, IQuery<TResult>
    {
        protected NHibernateQueryBase(ISessionFactory sessionFactory)
            : base(sessionFactory) { }

        public abstract TResult Execute();
    }

11. Add an INamedQuery interface, as shown in the following code:

    public interface INamedQuery
    {
        string QueryName { get; }
    }

12. Add a NamedQueryBase class, as shown in the following code:

    public abstract class NamedQueryBase<TResult>
        : NHibernateQueryBase<TResult>, INamedQuery
    {
        protected NamedQueryBase(ISessionFactory sessionFactory)
            : base(sessionFactory) { }

        public override TResult Execute()
        {
            var nhQuery = GetNamedQuery();
            return Transact(() => Execute(nhQuery));
        }

        protected abstract TResult Execute(IQuery query);

        protected virtual IQuery GetNamedQuery()
        {
            var nhQuery = session.GetNamedQuery(
                ((INamedQuery)this).QueryName);
            SetParameters(nhQuery);
            return nhQuery;
        }

        protected abstract void SetParameters(IQuery nhQuery);

        public virtual string QueryName
        {
            get { return GetType().Name; }
        }
    }

13. In Eg.Core.Data.Impl.Test, add a test fixture named QueryTests inherited from NHibernateFixture.
14. Add the following test and three helper methods:

    [Test]
    public void NamedQueryCheck()
    {
        var errors = new StringBuilder();

        var queryObjectTypes = GetNamedQueryObjectTypes();
        var mappedQueries = GetNamedQueryNames();

        foreach (var queryType in queryObjectTypes)
        {
            var query = GetQuery(queryType);
            if (!mappedQueries.Contains(query.QueryName))
            {
                errors.AppendFormat(
                    "Query object {0} references non-existent " +
                    "named query {1}.",
                    queryType, query.QueryName);
                errors.AppendLine();
            }
        }

        if (errors.Length != 0)
            Assert.Fail(errors.ToString());
    }

    private IEnumerable<Type> GetNamedQueryObjectTypes()
    {
        var namedQueryType = typeof(INamedQuery);
        var queryImplAssembly = typeof(BookWithISBN).Assembly;

        var types = from t in queryImplAssembly.GetTypes()
                    where namedQueryType.IsAssignableFrom(t)
                          && t.IsClass
                          && !t.IsAbstract
                    select t;

        return types;
    }

    private IEnumerable<string> GetNamedQueryNames()
    {
        var nhCfg = NHConfigurator.Configuration;
        var mappedQueries = nhCfg.NamedQueries.Keys
            .Union(nhCfg.NamedSQLQueries.Keys);

        return mappedQueries;
    }

    private INamedQuery GetQuery(Type queryType)
    {
        return (INamedQuery)Activator.CreateInstance(
            queryType,
            new object[] { SessionFactory });
    }

15. For our example query, in the Queries namespace of Eg.Core.Data, add the following interface:

    public interface IBookWithISBN : IQuery<Book>
    {
        string ISBN { get; set; }
    }

16. Add the implementation to the Queries namespace of Eg.Core.Data.Impl using the following code:

    public class BookWithISBN : NamedQueryBase<Book>, IBookWithISBN
    {
        public BookWithISBN(ISessionFactory sessionFactory)
            : base(sessionFactory) { }

        public string ISBN { get; set; }

        protected override void SetParameters(NHibernate.IQuery nhQuery)
        {
            nhQuery.SetParameter("isbn", ISBN);
        }

        protected override Book Execute(NHibernate.IQuery query)
        {
            return query.UniqueResult<Book>();
        }
    }

17. Finally, add the embedded resource mapping, BookWithISBN.hbm.xml, to Eg.Core.Data.Impl with the following xml code:

    <?xml version="1.0" encoding="utf-8" ?>
    <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2">
      <query name="BookWithISBN">
        <![CDATA[
        from Book b where b.ISBN = :isbn
        ]]>
      </query>
    </hibernate-mapping>
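To round this off, here is a minimal sketch of how a consumer might run the example query through the repository. It assumes an IRepository<Book> (the NHibernateRepository from the earlier recipe) has been resolved from your IoC container with QueryFactory and BookWithISBN registered, that Book derives from Entity, and that the query interfaces live in an Eg.Core.Data.Queries namespace inferred from the Queries folder added in step 1; these details are assumptions, not part of the recipe.

    using Eg.Core;              // Book entity from the example model (assumed)
    using Eg.Core.Data;         // IRepository<T>, which now implements IQueryFactory
    using Eg.Core.Data.Queries; // IBookWithISBN -- namespace assumed from step 1

    public static class BookLookup
    {
        public static Book FindByIsbn(IRepository<Book> books, string isbn)
        {
            // CreateQuery delegates to QueryFactory, which resolves the
            // BookWithISBN implementation of IBookWithISBN from the
            // Common Service Locator.
            var query = books.CreateQuery<IBookWithISBN>();
            query.ISBN = isbn;

            // Execute fetches the "BookWithISBN" named query (QueryName
            // defaults to the class name) and runs it inside an NHibernate
            // transaction via NamedQueryBase.
            return query.Execute();
        }
    }

Because QueryName defaults to the query object's class name, the BookWithISBN class and the named query in BookWithISBN.hbm.xml stay in sync, which is exactly what the NamedQueryCheck test above verifies.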