
How-To Tutorials - Application Development

357 Articles

Introduction to the Application.cfc Object and Application Variables in ColdFusion 9

Packt
26 Jul 2010
9 min read
(For more resources on ColdFusion, see here.)

Life span

Each of our shared scopes has a life span: it comes into existence at a given point and ceases to exist at a predictable point. Learning to recognize these points is very important, and it is the first aspect of "scope".

The request scope is created when a request is made to your ColdFusion server from any source. This could be a web browser, or any type of web application that can make an HTTP request to your server. Any variable placed into the request structure persists until the request processing is complete. Variable persistence is the property of data remaining available for a set period of time. Without persistence, we would have to make information last by passing it from one web page to another, in all forms and in all links. You may have heard people say that web pages are stateless; if we passed all the information through the browser like this, pages would behave more like stateful applications, but they would be difficult to manage. In this article, we will learn how to create a stateful web application.

Here is a chart of the life spans of the key scopes:

Request: Begins when the server receives a request from any source; it is created before any session or application. Ends when the processing for this request is complete; this has nothing to do with the end of applications or sessions.

Application: Begins after the request but before the session. It begins only when an Application.cfc file is first run with the current unique application name, or when the <cfapplication> tag is called in older CF applications. Ends when the time since the last request is greater than the expiration time set for the application.

Session: Begins after an application is created, inside the same sources as the application. Ends when the time since the last request is greater than the expiration time set for the session.

Client: Begins when a unique visitor first visits the server. If you want client variables to expire, you can store them in encrypted cookies; cookies have limited storage space and are not always available.

We will discuss the scopes in more detail later in this article series. All the scopes, except the client scope, expire if you shut down your server. When we close our browser window or reboot the client machine, a session does not come to an end, but our connectivity to that particular session scope ends. The information and resources for storing that session are held until the session expires. When we connect again, the server starts a new session and we are unable to reconnect to the former session.

Introducing the Application.cfc object

The first thing we need to do is to understand how this application page is called. When a .cfm or .cfc file is requested, the server looks for an Application.cfc file in the directory from which the page is being called. It also looks for an Application.cfm file. We do not create application or session scopes with the .cfm version, because the .cfc version provides many advantages and is much more powerful: it offers better encapsulation and code reuse. If the application file is found, ColdFusion runs it. If the file is not found, the server moves up one directory towards the server root directory and searches for an Application.cfc file there. The search stops either when a file is found, or when the root directory is reached without finding one. There are several methods in the Application.cfc file.
It is worth noting that this file does not exist by default; the developer must create it. The following list gives the method names and when each method is called:

onApplicationEnd: The application ends, or the application times out.
onApplicationStart: The application first starts: the first request for a page is processed, or the first CFC method is invoked by an event gateway instance, a web service, or Macromedia Flash Remoting.
onCFCRequest: HTTP or AMF (remote Flash) calls are made to a CFC.
onError: An exception occurs that is not caught by a try/catch block.
onMissingTemplate: ColdFusion receives a request for a non-existent page.
onRequest: The onRequestStart() method finishes (this method can filter request contents).
onRequestEnd: All pages in the request have been processed.
onRequestStart: A request starts.
onSessionEnd: A session ends.
onSessionStart: A session starts.
onServerStart: A ColdFusion server starts.

When the Application.cfc file runs for the first time, these methods are called in the order shown in the following diagram. The request variable scope is available at all times. Yet, to make the code flow right, the designers of this object made sure of the order in which the server runs the code. You will also find that, for technical reasons, there are some issues that arise when we use the onRequest() method; therefore, we will not be using it.

The steps in the previous screenshot are explained as follows:

1. Browser Request: The browser sends a request to the server. The server passes the processing to the Application.cfc file, if it exists, and skips this step if it does not. The methods in Application.cfc execute if they exist. The first method is onApplicationStart(), which executes on the basis of the application name: if the uniquely named application is not currently running on the server, this method is called.
2. Application Start: Next, Application.cfc checks whether the request is to a pre-existing application. If the request is to an application which has not started, it calls the onApplicationStart() method, if the method exists.
3. Session Start: If the request does not belong to an existing session and the onSessionStart() method exists, it is called at this particular point in the processing.
4. Request Start: On every request to the server, if the onRequestStart() method exists, it is called at this particular point in the processing.
5. OnRequest: This step normally occurs after the onRequestStart() method. If the onRequest() method is used, then by default it prevents the calling of CFCs. We do not say that it is always wrong to use this method; however, we will avoid it as much as possible.
6. Requested Page: Now, the actual page requested is called and processed.
7. Request End: After the requested page is processed, control is passed back to the onRequestEnd() method, if it exists in Application.cfc.
8. Return response to browser: This is the point at which ColdFusion has completed its work of processing information to respond to the browser request. At this point, it could send HTML to the browser, a redirect, or any other response.
9. Session End: The onSessionEnd() method is called, if it exists, when the time since the user last made a request to the server is greater than the session timeout.
10. Application End: The onApplicationEnd() method is called, if it exists, when the time since the last request was received by the server is greater than the timeout for the application.

Once the application and session scopes have been created on the server, they do not need to be reinitialized. After the application is created and other requests are made to the server, the following methods are called with each request: onRequestStart(), onRequest(), and onRequestEnd().

In previous versions of ColdFusion, when the onRequest() method of Application.cfc was called, it blocked CFCs from operating correctly. You may see some fancy code in older frameworks that checks whether the current request is calling a CFC and, if so, deletes the onRequest() method for that request. Now there is a new method called onCFCRequest(). If you need backwards compatibility with previous versions of ColdFusion, you would still delete the onRequest() method. You can use either of these approaches depending on whether you need the code to run on prior versions of ColdFusion. The onCFCRequest() method executes at the same point as the onRequest() method in the previous examples. You can add this code or not, depending on your own preferences; the previous example still operates as expected if you leave the method out.

The OnRequestEnd.cfm page used with Application.cfm-based page calls does not execute if the page runs a <cflocation> tag before OnRequestEnd.cfm is reached. It is not a part of Application.cfc-based applications and was intended for use with Application.cfm in older versions of ColdFusion.

Here is a representation of the complete process that is less granular. We can see that the application behaves just as it did in the earlier illustration; we just do not go into explicit detail about every method that is called internally. We also see that the requested page can call additional code segments. These code segments can be a CFC, a custom tag, or any included page. Those pages can also include other pages, creating a proper hierarchy of functionality. Always try to make sure that functionality is layered, so that the separation of layers provides a structured and simpler approach to the creation, maintenance, and long-term support of your web applications.

The variables set in Application.cfc can be modified before the requested page is called, and even later. Let us say, for example, that you want to record the time at which the session was last hit. You could choose to set a variable such as <cfset session._stat.lasthit = now()>. This could be set either at the beginning of a request or at the end. Here, the question is where you would put this code. At first, you might think of adding it to the onSessionStart() or onSessionEnd() method. This would cause an issue, because those two methods are only called when a session begins or when it comes to an end. You would actually put the code into the onRequestStart() or onRequestEnd() method of the Application.cfc file for it to work properly, because the request methods are called with every server request. We will have a complete Application.cfc to use as a model for creating our own variations in future projects. Remember to place the code in the right place and test your code using CFDump or by some other means to make sure it works when you make changes.


Connecting to Microsoft SQL Server Compact 3.5 with Visual Studio

Packt
13 Jul 2010
3 min read
SQL Server Compact Edition 3.5 can be used to create applications that are useful for a number of business scenarios, such as portable applications, occasionally connected clients, and embedded applications and devices. SQL Server Compact differs from other SQL Server editions in that the database is a single file, which can be password protected and supports 128-bit file-level encryption. It is referential-integrity compliant, supports multiple connections, and has transaction support with rich data types.

In this tutorial by Jayaram Krishnaswamy, various scenarios where you may need to connect to SQL Server Compact using the Visual Studio IDE (both 2008 and 2010) are described in detail. Connecting to SQL Server Compact 3.5 using Visual Studio 2010 Express (the free version of Visual Studio) is also described. The connection is the starting point for any database-related program, and therefore mastering the connection task is crucial for working with SQL Server Compact.

(For more resources on Microsoft, see here.)

If you are familiar with SQL Server, you already know much of SQL Server Compact. It can be administered from SSMS and, using SQL syntax and ADO.NET technology, you can be immediately productive with SQL Server Compact. It is free to download (also free to deploy and redistribute) and comes in the form of just one code-free file. Its small footprint makes it easily deployable to a variety of device sizes, and it requires no administration. It also supports a subset of T-SQL and a rich set of data types. It can be used in creating desktop/web applications using Visual Studio 2008 and Visual Studio 2010. It also comes with a sample Northwind database.

Download details

Microsoft SQL Server Compact 3.5 may be downloaded from this site here. Make sure you review the detailed features of the program on the same site. Several bugs have also been fixed, as detailed in the two service packs; the link to the latest service pack, SP2, is here. Applying SP2 upgrades the installed version on the machine to the latest version.

Connecting to SQL Server Compact from Windows and Web projects

You can use the Server Explorer in Visual Studio to drag and drop objects from SQL Server Compact, provided you add a connection to SQL Server Compact. In fact, in the Visual Studio 2008 IDE you can configure a data connection from the View menu without even starting a project, as shown here. When you click Add Connection..., the Add Connection dialog shown here is displayed. Click Change... to choose the correct data source for SQL Server Compact; the default is SQL Server client. The Change Data Source window is displayed as shown. Highlight Microsoft SQL Server Compact 3.5 and click OK. You are returned to Add Connection, where you can browse for or create a database, or choose one from an ActiveSync-connected device, such as a smartphone that has SQL Server Compact for devices installed. For now, connect to one on the computer (the default My Computer option): the sample database Northwind. Click Browse.... The Select SQL Server Compact 3.5 Database File dialog opens, where the sample database Northwind is displayed as shown. Click Open. The database file is entered in the Add Connection dialog. You may test the connection; you should get a "Test connection succeeded" message from Microsoft Visual Studio. Click OK. The Northwind.sdf file is displayed as a tree with Tables and Views, as shown in the next figure. Right-click Northwind.sdf in the Server Explorer above and click the Properties drop-down menu item.
You will see the connection string for this connection, as shown here.


Fine Tune the View layer of your Fusion Web Application

Packt
08 Jul 2010
5 min read
(For more resources on Oracle, see here.) The following diagram illustrates the roles of each layer.

Use AJAX to boost the performance of your web pages

Asynchronous JavaScript and XML (AJAX) avoids full page refreshes and minimizes the data being transferred between client and server during each round trip. ADF Faces is packaged with 150+ AJAX-enabled components, which add AJAX capability to your applications with zero effort. Certain events on an ADF Faces component trigger Partial Page Rendering (PPR) by default. However, action components by default trigger a full page refresh, which is quite expensive and not required in most cases. Make sure that you set the partialSubmit attribute to true whenever possible to optimize the page lifecycle. When partialSubmit is set to true, only the components that have values for their partialTriggers attribute will be processed through the lifecycle.

Avoid mixing HTML tags with ADF Faces components

Don't mix HTML tags and ADF Faces components, even though the JSF design lets you do so using the <f:verbatim> tag. Mixing raw HTML content with ADF Faces components may produce undesired output, especially when you have a complex layout design for your page. It is also highly discouraged to use <f:verbatim> to embed JavaScript or CSS; instead, you can use <af:resource>, which adds the resource to the document element and optimizes further processing during tree rendering.

Avoid long Ids for User Interface components

It is always recommended to use short Ids for your User Interface (UI) components. Your JSF page ultimately boils down to HTML content, whose size determines the network bandwidth usage for your web application. If you use long Ids for UI components, the size of the generated HTML content increases; this becomes even worse if you have many UI elements with long Ids. If no Id is set explicitly for a UI component, the ADF Faces runtime auto-generates one for you. ADF Faces lets you control the auto-generation of component Ids by setting the context parameter oracle.adf.view.rich.SUPPRESS_IDS in the web.xml file. The <context-param> entry in the web.xml file may look like the following:

<context-param>
  <param-name>oracle.adf.view.rich.SUPPRESS_IDS</param-name>
  <param-value>auto or explicit</param-value>
</context-param>

The possible values for oracle.adf.view.rich.SUPPRESS_IDS are listed below. auto: components can suppress auto-generated Ids, but explicitly set Ids will be honored. explicit: this is the default value for the oracle.adf.view.rich.SUPPRESS_IDS parameter; in this case both auto-generated Ids and explicitly set Ids are suppressed.

Avoid inline usage of JavaScript/Cascading Style Sheets (CSS) whenever possible

If you need to use custom JavaScript functions or CSS in your application, try using external files to hold them. Avoid inline usage of JavaScript/CSS as much as possible. A better idea is to logically group them in external files and embed the required one in the candidate page using the <af:resource> tag. If you keep JavaScript and CSS in external files, they are cached by the browser, and subsequent requests for these resources are served from the cache. This in turn reduces network usage and improves performance.

Avoid mixing JSF/ADF Faces and JavaServer Pages Standard Tag Library (JSTL) tags

Stick to JSF/ADF Faces components for building your UI as much as you can. JSF components may not work properly with some JSTL tags, as they are not designed to co-exist.
Relying on JSF/ADF Faces components may also give you better extensibility and portability for your application as a bonus.

Don't generate client components unless they are really needed

The ADF Faces runtime generates client components only when they are really required on the client. However, you can override this behavior by setting the clientComponent attribute to true, as shown in the following code snippet:

<af:commandButton text="DoSomething" clientComponent="true">
  <af:clientListener method="doSomething" type="action"/>
</af:commandButton>

Set clientComponent to true only if you need to access the component on the client side using JavaScript. Otherwise, this may result in an increased Document Object Model (DOM) size on the client side and may affect the performance of your web page. The following diagram shows the runtime coordination between the client-side and server-side component trees. In the diagram, you can see that no client-side component is generated for a server component whose clientComponent attribute is set to false.

Prefer not rendering components over hiding them in the DOM tree

If you need to hide UI components conditionally on a page, try achieving this with the rendered property of the component instead of the visible property. The latter (visible) creates the component instance and then hides it in the client-side DOM tree, whereas the former (rendered) skips component creation on the server side altogether, so the client-side DOM never contains the element. Consequently, setting rendered to false reduces the client content size and gives better performance as a bonus.


Java in Oracle Database

Packt
07 Jul 2010
5 min read
"The views expressed in this article are the author's own and do not necessarily reflect the views of Oracle." (For more resources on Oracle, see here.) Introduction This article is better understood by people who have some familiarity with Oracle database, SQL, PL/SQL, and of course Java (including JDBC). Beginners can also understand the article to some extent, because it does not contain many specifics/details. The article can be useful to software developers, designers and architects working with Java. Oracle database provides a Java runtime in its database server process. Because of this, it is possible not only to store Java sources and Java classes in an Oracle database, but also to run the Java classes within the database server. Such Java classes will be 'executed' by the Java Virtual Machine embedded in the database server. The Java platform provided is J2SE-compliant, and in addition to the JVM, it includes all the Java system classes. So, conceptually, whatever Java code that can be run using the JREs (like Sun's JRE) on the operating system, can be run within the Oracle database too. Java stored procedure The key unit of the Java support inside the Oracle database is the 'Java Stored Procedure' (that may be referred to as JSP, as long as it is not confused with JavaServer Pages). A Java stored procedure is an executable unit stored inside the Oracle database, and whose implementation is in Java. It is similar to PL/SQL stored procedures and functions. Creation Let us see an example of how to create a simple Java stored procedure. We will create a Java stored procedure that adds two given numbers and returns the sum. The first step is to create a Java class that looks like the following: public class Math{ public static int add(int x, int y) { return x + y; }} This is a very simple Java class that just contains one static method that returns the sum of two given numbers. Let us put this code in a file called Math.java, and compile it (say, by doing 'javac Math.java') to get Math.class file. The next step is to 'load' Math.class into the Oracle database. That is, we have to put the class file located in some directory into the database, so that the class file gets stored in the database. There are a few ways to do this, and one of them is to use the command-line tool called loadjava provided by Oracle, as follows: loadjava -v -u scott/tiger Math.class Generally, in Oracle database, things are always stored in some 'schema' (also known as 'user'). Java classes are no exception. So, while loading a Java class file into the database, we need to specify the schema where the Java class should be stored. Here, we have given 'scott' (along with the password). There are a lot of other things that can be done using loadjava, but we will not go into them here. Next, we have to create a 'PL/SQL wrapper' as follows: SQL> connect scott/tigerConnected.SQL>SQL> create or replace function addition(a IN number, b IN number) return number 2 as language java name 'Math.add(int, int) return int'; 3 /Function created.SQL> We have created the PL/SQL wrapper called 'addition', for the Java method Math.add(). The syntax is same as the one used to create a PL/SQL function/procedure, but here we have specified that the implementation of the function is in the Java method Math.add(). And that's it. We've created a Java stored procedure! Basically, what we have done is, implemented our requirement in Java, and then exposed the Java implementation via PL/SQL. 
Using JDeveloper, an IDE from Oracle, all these steps (creating the Java source, compiling it, loading it into the database, and creating the PL/SQL wrapper) can be done easily from within the IDE.

One thing to remember is that we can create Java stored procedures for Java static methods only, not for instance methods. This is not a big disadvantage, and in fact makes sense, because even the main() method, which is the entry point for a Java program, is 'static'. Here, since Math.add() is the entry point, it has to be 'static'. So, we can write as many static methods in our Java code as needed and make them entry points by creating PL/SQL wrappers for them.

Invocation

We can call the Java stored procedure we have just created just like any PL/SQL procedure/function is called, either from SQL or PL/SQL:

SQL> select addition(10, 20) from dual;

ADDITION(10,20)
---------------
             30

SQL>
SQL> declare
  2    s number;
  3  begin
  4    s := addition(10, 20);
  5    dbms_output.put_line('SUM = ' || s);
  6  end;
  7  /
SUM = 30

PL/SQL procedure successfully completed.

SQL>

Here, the 'select' query, as well as the PL/SQL block, invoked the PL/SQL function addition(), which in turn invoked the underlying Java method Math.add(). A main feature of the Java stored procedure is that the caller (like the 'select' query above) has no idea that the procedure is actually implemented in Java. Thus, stored procedures implemented in PL/SQL and Java can be called alike, without needing to know the language of the underlying implementation. So, in general, whatever Java code we have can be seamlessly integrated into PL/SQL code via PL/SQL wrappers. Put another way, we now have more than one language option for implementing a stored procedure: PL/SQL and Java. If we have a project where stored procedures are to be implemented, then Java is a good option, because today it is relatively easier to find a Java programmer.
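Because the wrapper is an ordinary PL/SQL function, it can also be invoked from a Java client over JDBC. The following is a minimal sketch, not from the original article: the connection URL, host, port, SID, and credentials are placeholder assumptions, and it presumes the addition function created above exists in the scott schema.

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;

public class CallAddition {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details: adjust the host, port, SID, user, and password.
        Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@localhost:1521:ORCL", "scott", "tiger");
        try {
            // Call the PL/SQL wrapper; the caller neither knows nor cares that
            // the implementation behind addition() is the Java method Math.add().
            CallableStatement cs = conn.prepareCall("{ ? = call addition(?, ?) }");
            cs.registerOutParameter(1, java.sql.Types.INTEGER);
            cs.setInt(2, 10);
            cs.setInt(3, 20);
            cs.execute();
            System.out.println("SUM = " + cs.getInt(1)); // prints SUM = 30
            cs.close();
        } finally {
            conn.close();
        }
    }
}

The JDBC escape syntax used in prepareCall() is the standard way to call a function that returns a value, which keeps the client code identical whether the stored procedure behind the wrapper is written in PL/SQL or Java.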


Modeling Relationships with GORM

Packt
09 Jun 2010
6 min read
(For more resources on Groovy DSL, see here.)

Storing and retrieving simple objects is all very well, but the real power of GORM is that it allows us to model the relationships between objects, as we will now see. The main types of relationships that we want to model are associations, where one object has an associated relationship with another (for example, Customer and Account); composition relationships, where we want to build an object from sub-components; and inheritance, where we want to model similar objects by describing their common properties in a base class.

Associations

Every business system involves some sort of association between the main business objects. Relationships between objects can be one-to-one, one-to-many, or many-to-many. Relationships may also imply ownership, where one object only has relevance in relation to another parent object. If we model our domain directly in the database, we need to build and manage tables, and make associations between the tables by using foreign keys. For complex relationships, including many-to-many relationships, we may need to build special tables whose sole function is to contain the foreign keys needed to track the relationships between objects. Using GORM, we can model all of the various associations that we need to establish between objects directly within the GORM class definitions. GORM takes care of all of the complex mappings to tables and foreign keys through a Hibernate persistence layer.

One-to-one

The simplest association that we need to model in GORM is a one-to-one association. Suppose our customer can have a single address; we would create a new Address domain class using the grails create-domain-class command, as before.

class Address {
    String street
    String city
    static constraints = {
    }
}

To create the simplest one-to-one relationship with Customer, we just add an Address field to the Customer class.

class Customer {
    String firstName
    String lastName
    Address address
    static constraints = {
    }
}

When we rerun the Grails application, GORM will recreate a new address table. It will also recognize the address field of Customer as an association with the Address class, and create a foreign key relationship between the customer and address tables accordingly. This is a one-directional relationship. We are saying that a Customer "has an" Address but an Address does not necessarily "have a" Customer. We can model bi-directional associations by simply adding a Customer field to the Address. This will then be reflected in the relational model by GORM adding a customer_id field to the address table.

class Address {
    String street
    String city
    Customer customer
    static constraints = {
    }
}

mysql> describe address;
+-------------+--------------+------+-----+---------+----------------+
| Field       | Type         | Null | Key | Default | Extra          |
+-------------+--------------+------+-----+---------+----------------+
| id          | bigint(20)   | NO   | PRI | NULL    | auto_increment |
| version     | bigint(20)   | NO   |     |         |                |
| city        | varchar(255) | NO   |     |         |                |
| customer_id | bigint(20)   | YES  | MUL | NULL    |                |
| street      | varchar(255) | NO   |     |         |                |
+-------------+--------------+------+-----+---------+----------------+
5 rows in set (0.01 sec)

mysql>

These basic one-to-one associations can be inferred by GORM just by interrogating the fields in each domain class via reflection and the Groovy metaclasses. To denote ownership in a relationship, GORM uses an optional static field applied to a domain class, called belongsTo.
Suppose we add an Identity class to retain the login identity of a customer in the application. We would then use:

class Customer {
    String firstName
    String lastName
    Identity ident
}

class Address {
    String street
    String city
}

class Identity {
    String email
    String password
    static belongsTo = Customer
}

Classes are first-class citizens in the Groovy language. When we declare static belongsTo = Customer, what we are actually doing is storing a static instance of a java.lang.Class object for the Customer class in the belongsTo field. Grails can interrogate this static field at load time to infer the ownership relation between Identity and Customer. Here we have three classes: Customer, Address, and Identity. Customer has a one-to-one association with both Address and Identity through the address and ident fields. However, the ident field is "owned" by Customer as indicated in the belongsTo setting. What this means is that saves, updates, and deletes will be cascaded to identity but not to address, as we can see below. The addr object needs to be saved and deleted independently of Customer, but id is automatically saved and deleted in sync with Customer.

def addr = new Address(street:"1 Rock Road", city:"Bedrock")
def id = new Identity(email:"email", password:"password")
def fred = new Customer(firstName:"Fred", lastName:"Flintstone", address:addr, ident:id)

addr.save(flush:true)
assert Customer.list().size == 0
assert Address.list().size == 1
assert Identity.list().size == 0

fred.save(flush:true)
assert Customer.list().size == 1
assert Address.list().size == 1
assert Identity.list().size == 1

fred.delete(flush:true)
assert Customer.list().size == 0
assert Address.list().size == 1
assert Identity.list().size == 0

addr.delete(flush:true)
assert Customer.list().size == 0
assert Address.list().size == 0
assert Identity.list().size == 0

Constraints

You will have noticed that every domain class produced by the grails create-domain-class command contains an empty static closure, constraints. We can use this closure to set the constraints on any field in our model. Here we apply constraints to the email and password fields of Identity. We want the email field to be unique, not blank, and not nullable. The password field should be 6 to 200 characters long, not blank, and not nullable.

class Identity {
    String email
    String password
    static constraints = {
        email(unique: true, blank: false, nullable: false)
        password(blank: false, nullable: false, size: 6..200)
    }
}

From our knowledge of builders and the markup pattern, we can see that GORM could be using a similar strategy here to apply constraints to the domain class. It looks like a pretended method is provided for each field in the class that accepts a map as an argument. The map entries are interpreted as constraints to apply to the model field. The Builder pattern turns out to be a good guess as to how GORM is implementing this. GORM actually implements constraints through a builder class called ConstrainedPropertyBuilder. The closure that gets assigned to constraints is in fact some markup-style closure code for this builder. Before executing the constraints closure, GORM sets an instance of ConstrainedPropertyBuilder to be the delegate for the closure. We are more accustomed to seeing builder code where the Builder instance is visible:

def builder = new ConstrainedPropertyBuilder()
builder.constraints {
}

Setting the builder as a delegate of any closure allows us to execute the closure as if it was coded in the above style.
The constraints closure can be run at any time by Grails, and as it executes, the ConstrainedPropertyBuilder builds a HashMap of the constraints it encounters for each field. We can illustrate the same technique by using MarkupBuilder and NodeBuilder. The Markup class in the following code snippet just declares a static closure named markup. Later on, we can use this closure with whatever builder we want, by setting the delegate of markup to the builder that we would like to use.

class Markup {
    static markup = {
        customers {
            customer(id:1001) {
                name(firstName:"Fred", surname:"Flintstone")
                address(street:"1 Rock Road", city:"Bedrock")
            }
            customer(id:1002) {
                name(firstName:"Barney", surname:"Rubble")
                address(street:"2 Rock Road", city:"Bedrock")
            }
        }
    }
}

Markup.markup.setDelegate(new groovy.xml.MarkupBuilder())
Markup.markup() // Outputs xml

Markup.markup.setDelegate(new groovy.util.NodeBuilder())
def nodes = Markup.markup() // builds a node tree


The Grails Object Relational Mapping (GORM)

Packt
08 Jun 2010
5 min read
(For more resources on Groovy DSL, see here.)

The Grails framework is an open source web application framework built for the Groovy language. Grails not only leverages Hibernate under the covers as its persistence layer, but also implements its own Object Relational Mapping layer for Groovy, known as GORM. With GORM, we can take a POGO class and decorate it with DSL-like settings in order to control how it is persisted. Grails programmers use GORM classes as a mini language for describing the persistent objects in their application. In this section, we will do a whistle-stop tour of the features of Grails. This won't be a tutorial on building Grails applications, as the subject is too big to be covered here. Our main focus will be on how GORM implements its object model in the domain classes.

Grails quick start

Before we proceed, we need to install Grails and get a basic app installation up and running. The Grails download and installation instructions can be found at http://www.grails.org/Installation. Once it has been installed, and with the Grails binaries in your path, navigate to a workspace directory and issue the following command:

grails create-app GroovyDSL

This builds a Grails application tree called GroovyDSL under your current workspace directory. If we now navigate to this directory, we can launch the Grails app. By default, the app will display a welcome page at http://localhost:8080/GroovyDSL/.

cd GroovyDSL
grails run-app

The grails-app directory

The GroovyDSL application that we built earlier has a grails-app subdirectory, which is where the application source files for our application will reside. We only need to concern ourselves with the grails-app/domain directory for this discussion, but it's worth understanding a little about some of the other important directories:

grails-app/conf: This is where the Grails configuration files reside.
grails-app/controllers: Grails uses a Model View Controller (MVC) architecture. The controllers directory will contain the Groovy controller code for our UIs.
grails-app/domain: This is where Grails stores the GORM model classes of the application.
grails-app/views: This is where the Groovy Server Pages (GSPs), the Grails equivalent of JSPs, are stored.

Grails has a number of shortcut commands that allow us to quickly build out the objects for our model. As we progress through this section, we will take a look back at these directories to see what files have been generated in them for us. In this section, we will be taking a whistle-stop tour through GORM. You might like to dig deeper into both GORM and Grails yourself; you can find further online documentation for GORM at http://www.grails.org/GORM.

DataSource configuration

Out of the box, Grails is configured to use an embedded HSQL in-memory database. This is useful as a means of getting up and running quickly, and all of the example code will work perfectly well with the default configuration. Having an in-memory database is helpful for testing because we always start with a clean slate. However, for the purpose of this section, it's also useful for us to have a proper database instance to peek into, in order to see how GORM maps Groovy objects into tables. We will configure our Grails application to persist to a MySQL database instance. Grails allows us to have separate configuration environments for development, testing, and production.
We will configure our development environment to point to a MySQL instance, but we can leave the production and testing environments as they are. First of all, we need to create a database by using the mysqladmin command. This command will create a database called groovydsl, which is owned by the MySQL root user.

mysqladmin -u root create groovydsl

Database configuration in Grails is done by editing the DataSource.groovy source file in grails-app/conf. We are interested in the environments section of this file.

environments {
    development {
        dataSource {
            dbCreate = "create-drop"
            url = "jdbc:mysql://localhost/groovydsl"
            driverClassName = "com.mysql.jdbc.Driver"
            username = "root"
            password = ""
        }
    }
    test {
        dataSource {
            dbCreate = "create-drop"
            url = "jdbc:hsqldb:mem:testDb"
        }
    }
    production {
        dataSource {
            dbCreate = "update"
            url = "jdbc:hsqldb:mem:testDb"
        }
    }
}

The first interesting thing to note is that this is a mini Groovy DSL for describing data sources. In the version above, we have edited the development dataSource entry to point to the MySQL groovydsl database that we created. In early versions of Grails, there were three separate DataSource files that needed to be configured, one for each environment, for example DevelopmentDataSource.groovy. The equivalent DevelopmentDataSource.groovy file would be as follows:

class DevelopmentDataSource {
    boolean pooling = true
    String dbCreate = "create-drop"
    String url = "jdbc:mysql://localhost/groovydsl"
    String driverClassName = "com.mysql.jdbc.Driver"
    String username = "root"
    String password = ""
}

The dbCreate field tells GORM what it should do with tables in the database on startup. Setting this to create-drop tells GORM to drop a table if it already exists, and create a new table, each time it runs. This keeps the database tables in sync with our GORM objects. You can also set dbCreate to update or create.

DataSource.groovy is a handy little DSL for configuring the GORM database connections. Grails uses a utility class, groovy.util.ConfigSlurper, for this DSL. The ConfigSlurper class allows us to easily parse a structured configuration file and convert it into a java.util.Properties object if we wish. Alternatively, we can navigate the ConfigObject returned by using dot notation. We can use the ConfigSlurper to open and navigate DataSource.groovy as shown in the next code snippet. ConfigSlurper has a built-in ability to partition the configuration by environment: if we construct the ConfigSlurper for a particular environment, it will only load the settings appropriate to that environment.

def development = new ConfigSlurper("development").parse(new File('DataSource.groovy').toURL())
def production = new ConfigSlurper("production").parse(new File('DataSource.groovy').toURL())

assert development.dataSource.dbCreate == "create-drop"
assert production.dataSource.dbCreate == "update"

def props = development.toProperties()
assert props["dataSource.dbCreate"] == "create-drop"

Working with JRockit Runtime Analyzer - A Sequel

Packt
03 Jun 2010
7 min read
Code

The Code tab group contains information from the code generator and the method sampler. It consists of three tabs: the Overview, Hot Methods, and Optimizations tabs.

Overview

This tab aggregates information from the code generator with sample information from the code optimizer. This allows us to see which methods the Java program spends the most time executing. Again, this information is available virtually "for free", as the code generation system needs it anyway. For CPU-bound applications, this tab is a good place to start looking for opportunities to optimize your application code. By CPU-bound, we mean an application for which the CPU is the limiting factor; with a faster CPU, the application would have a higher throughput.

In the first section, the number of exceptions thrown per second is shown. This number depends both on the hardware and on the application: faster hardware may execute an application more quickly, and consequently throw more exceptions. However, a higher value is always worse than a lower one on identical setups. Recall that exceptions are just that, rare corner cases. As we have explained, the JVM typically gambles that they aren't occurring too frequently. If an application throws hundreds of thousands of exceptions per second, you should investigate why. Someone may be using exceptions for control flow, or there may be a configuration error. Either way, performance will suffer. In JRockit Mission Control 3.1, the recording will only provide information about how many exceptions were thrown. The only way to find out where the exceptions originated is, unfortunately, by changing the verbosity of the log.

An overview of where the JVM spends most of the time executing Java code can be found in the Hot Packages and Hot Classes sections. The only difference between them is the way the sample data from the JVM code optimizer is aggregated. In Hot Packages, hot executing code is sorted on a per-package basis and in Hot Classes on a per-class basis. For more fine-grained information, use the Hot Methods tab. As shown in the example screenshot, most of the time is spent executing code in the weblogic.servlet.internal package. There is also a fair amount of exceptions being thrown.

Hot Methods

This tab provides a detailed view of the information provided by the JVM code optimizer. If the objective is to find a good candidate method for optimizing the application, this is the place to look. If a lot of the method samples are from one particular method, and a lot of the method traces through that method share the same origin, much can potentially be gained by either manually optimizing that method or by reducing the number of calls along that call chain. In the following example, much of the time is spent in the method com.bea.wlrt.adapter.defaultprovider.internal.CSVPacketReceiver.parseL2Packet(). It seems likely that the best way to improve the performance of this particular application would be to optimize a method internal to the application container (WebLogic Event Server) rather than the code in the application itself, running inside the container. This illustrates both the power of the JRockit Mission Control tools and a dilemma that the resulting analysis may reveal: the answers provided sometimes require solutions beyond your immediate control.

Sometimes, the information provided may cause us to reconsider the way we use data structures. In the next example, the program frequently checks if an object is in a java.util.LinkedList. This is a rather slow operation that is proportional to the size of the list (time complexity O(n)), as it potentially involves traversing the entire list, looking for the element. Changing to another data structure, such as a HashSet, would most certainly speed up the check, making the time complexity constant (O(1)) on average, given that the hash function is good enough and the set large enough.
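This is not code from the recording discussed above, just a self-contained sketch of why such a change pays off: it compares a membership check against a LinkedList with the same check against a HashSet, looking up the worst-case element for the list.

import java.util.HashSet;
import java.util.LinkedList;
import java.util.List;
import java.util.Set;

public class ContainsCheck {
    public static void main(String[] args) {
        int size = 100000;
        List<Integer> list = new LinkedList<Integer>();
        Set<Integer> set = new HashSet<Integer>();
        for (int i = 0; i < size; i++) {
            list.add(i);
            set.add(i);
        }

        // LinkedList.contains() walks the list node by node: O(n) per lookup.
        long start = System.nanoTime();
        boolean inList = list.contains(Integer.valueOf(size - 1));
        long listNanos = System.nanoTime() - start;

        // HashSet.contains() hashes the key and probes a single bucket: O(1) on average.
        start = System.nanoTime();
        boolean inSet = set.contains(Integer.valueOf(size - 1));
        long setNanos = System.nanoTime() - start;

        System.out.println("LinkedList contains: " + inList + " (" + listNanos + " ns)");
        System.out.println("HashSet contains:    " + inSet + " (" + setNanos + " ns)");
    }
}

A single timed lookup like this is only indicative, but in a real application the difference shows up as exactly the kind of hot contains() method that the method sampler surfaces.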
Optimizations

This tab shows various statistics from the JIT compiler. The information in this tab is mostly of interest when hunting down optimization-related bugs in JRockit. It shows how much time was spent doing optimizations, as well as how much time was spent JIT-compiling code at the beginning and at the end of the recording. For each method optimized during the recording, the native code size before and after optimization is shown, as well as how long it took to optimize the particular method.

Thread/Locks

The Thread/Locks tab group contains tabs that visualize thread- and lock-related data. There are five such tabs in JRA: the Overview, Threads, Java Locks, JVM Locks, and Thread Dumps tabs.

Overview

The Overview tab shows fundamental thread and hardware-related information, such as the number of hardware threads available on the system and the number of context switches per second. A dual-core CPU has two hardware threads, and a hyperthreaded core also counts as two hardware threads. That is, a dual-core CPU with hyperthreading will be displayed as having four hardware threads. A high number of context switches per second may not be a real problem, but better synchronization behavior may lead to better total throughput in the system. There is a CPU graph showing both the total CPU load on the system and the CPU load generated by the JVM. A saturated CPU is usually a good thing: you are fully utilizing the hardware on which you spent a lot of money! As previously mentioned, in some CPU-bound applications, for example batch jobs, it is normally a good thing for the system to be completely saturated during the run. However, for a standard server-side application it is probably more beneficial if the system is able to handle some extra load in addition to the expected one. The hardware provisioning problem is not simple, but normally server-side systems should have some spare computational power for when things get hairy. This is usually referred to as overprovisioning, and has traditionally just involved buying faster hardware. Virtualization has given us exciting new ways to handle the provisioning problem.

Threads

This tab shows a table where each row corresponds to a thread. The tab has more to offer than first meets the eye. By default, only the start time, the thread duration, and the Java thread ID are shown for each thread. More columns can be made visible by changing the table properties, either by clicking on the Table Settings icon or by using the context menu in the table. As can be seen in the example screenshot, information such as the thread group that the thread belongs to, allocation-related information, and the platform thread ID can also be displayed. The platform thread ID is the ID assigned to the thread by the operating system, in case we are working with native threads. This information can be useful if you are using operating system-specific tools together with JRA.

Java Locks

This tab displays information on how Java locks have been used during the recording. The information is aggregated per type (class) of monitor object.
This tab is normally empty. You need to start JRockit with the system property jrockit.lockprofiling set to true for the lock profiling information to be recorded. This is because lock profiling may cause anything from a small to a considerable overhead, especially if there is a lot of synchronization. With recent changes to the JRockit thread and locking model, it would be possible to enable lock profiling dynamically. This is unfortunately not the case yet, not even in JRockit Flight Recorder. For R28, the system property jrockit.lockprofiling has been deprecated and replaced with the flag -XX:UseLockProfiling.

JVM Locks

This tab contains information on JVM-internal native locks. This is normally useful for the JRockit JVM developers and for JRockit support. An example of a native lock would be the code buffer lock that the JVM acquires in order to emit compiled methods into a native code buffer. This is done to ensure that no other code generation threads interfere with that particular code emission.

Thread Dumps

The JRA recordings normally contain thread dumps from the beginning and the end of the recording. By changing the Thread dump interval parameter in the JRA recording wizard, more thread dumps can be made available at regular intervals throughout the recording.


Working with JRockit Runtime Analyzer

Packt
03 Jun 2010
9 min read
The JRockit Runtime Analyzer, or JRA for short, is a JRockit-specific profiling tool that provides information about both the JRockit runtime and the application running in JRockit. JRA was the main profiling tool for JRockit R27 and earlier, but has been superseded in later versions by the JRockit Flight Recorder. Because of its extremely low overhead, JRA is suitable for use in production. This article is mainly targeted at R27.x/3.x versions of JRockit and Mission Control.

The need for feedback

In order to make JRockit an industry-leading JVM, there has been a great need for customer collaboration. As the focus for JRockit has consistently been on performance and scalability in server-side applications, the closest collaboration has been with customers with large server installations; an example is the financial industry. The birth of the JRockit Runtime Analyzer, or JRA, originally came from the need to gather profiling information on how well JRockit performed at customer sites. One can easily understand that customers were rather reluctant to send us, for example, their latest proprietary trading applications to play with in our labs. And, of course, allowing us to poke around in a customer's mission-critical application in production was completely out of the question. Some of these applications shuffle around billions of dollars per week. We found ourselves in a situation where we needed a tool to gather as much information as possible on how JRockit, and the application running on JRockit, behaved together; both to find opportunities to improve JRockit and to find erratic behavior in the customer application. This was a bit of a challenge, as we needed high quality data. If the information was not accurate, we would not know how to improve JRockit in the areas most needed by customers, or perhaps at all. At the same time, we needed to keep the overhead down to a minimum. If the profiling itself incurred significant overhead, we would no longer get a true representation of the system. Also, with anything but near-zero overhead, the customer would not let us perform recordings on their mission-critical systems in production.

JRA was invented as a method of recording information in a way that the customer could feel confident with, while still providing us with the data needed to improve JRockit. The tool was eventually widely used within our support organization, both to diagnose problems and as a tuning companion for JRockit. In the beginning, a simple XML format was used for our runtime recordings. A human-readable format made it simple to debug, and the customer could easily see what data was being recorded. Later, the format was upgraded to include data from a new recording engine for latency-related data. When the latency data came along, the data format for JRA was split into two parts: the human-readable XML and a binary file containing the latency events. The latency data was put into JRockit internal memory buffers during the recording, and to avoid introducing unnecessary latencies and performance penalties that would surely be incurred by translating the buffers to XML, it was decided that the least intrusive approach was to simply dump the buffers to disk.

To summarize, recordings come in two different flavors, having either the .jra extension (recordings prior to JRockit R28/JRockit Mission Control 4.0) or the .jfr (JRockit Flight Recorder) extension (R28 or later).
Prior to the R28 version of JRockit, the recording files mainly consisted of XML without a coherent data model. As of R28, the recording files are binaries where all data adheres to an event model, making it much easier to analyze the data. To open a JRA recording, JRockit Mission Control version 3.x must be used. To open a Flight Recorder recording, JRockit Mission Control version 4.0 or later must be used.

Recording

The recording engine that starts and stops recordings can be controlled in several different ways:

By using the JRCMD command-line tool.
By using the JVM command-line parameters. For more information on this, see the -XXjra parameter in the JRockit documentation.
From within the JRA GUI in JRockit Mission Control.

The easiest way to control recordings is to use the JRA/JFR wizard from within the JRockit Mission Control GUI. Simply select the JVM on which to perform a JRA recording in the JVM Browser and click on the JRA button in the JVM Browser toolbar. You can also click on Start JRA Recording from the context menu. Usually, one of the pre-defined templates will do just fine, but under special circumstances it may be necessary to adjust them. The pre-defined templates in JRockit Mission Control 3.x are:

Full Recording: This is the standard use case. By default, it is configured to do a five minute recording that contains most data of interest.

Minimal Overhead Recording: This template can be used for very latency-sensitive applications. It will, for example, not record heap statistics, as the gathering of heap statistics will, in effect, cause an extra garbage collection at the beginning and at the end of the recording.

Real Time Recording: This template is useful when hunting latency-related problems, for instance when tuning a system that is running on JRockit Real Time. This template provides an additional text field for setting the latency threshold. The latency threshold is explained later in the article in the section on the latency analyzer. The threshold is by default lowered to 5 milliseconds for this type of recording, from the default 20 milliseconds, and the default recording time is longer.

Classic Recording: This resembles a classic JRA recording from earlier versions of Mission Control. Most notably, it will not contain any latency data. Use this template with JRockit versions prior to R27.3 or if there is no interest in recording latency data.

All recording templates can be customized by checking the Show advanced options check box. This is usually not needed, but let's go through the options and why you may want to change them:

Enable GC sampling: This option selects whether or not GC-related information should be recorded. It can be turned off if you know that you will not be interested in GC-related information. It is on by default, and it is a good idea to keep it enabled.

Enable method sampling: This option enables or disables method sampling. Method sampling is implemented by using sample data from the JRockit code optimizer. If profiling overhead is a concern (it is usually very low, but still), it is a good idea to use the Method sample interval option to control how much method sampling information to record.

Enable native sampling: This option determines whether or not to attempt to sample time spent executing native code as a part of the method sampling. This feature is disabled by default, as it is mostly used by JRockit developers and support. Most Java developers probably do fine without it.
Hardware method sampling: On some hardware architectures, JRockit can make use of special hardware counters in the CPU to provide higher resolution for the method sampling. This option only makes sense on such architectures.

Stack traces: Use this option to not only get sample counts but also stack traces from method samples. If this is disabled, no call traces are available for sample points in the methods that show up in the Hot Methods list.

Trace depth: This setting determines how many stack frames to retrieve for each stack trace. For JRockit Mission Control versions prior to 4.0, this defaulted to the rather limited depth of 16. For applications running in application containers or using large frameworks, this is usually way too low to generate data from which any useful conclusions can be drawn. A tip, when profiling such an application, would be to bump this to 30 or more.

Method sampling interval: This setting controls how often thread samples should be taken. JRockit will stop a subset of the threads every Method sample interval milliseconds in a round-robin fashion. Only threads executing when the sample is taken will be counted, not blocking threads. Use this to find out where the computational load in an application takes place. See the section Hot Methods for more information.

Thread dumps: When enabled, JRockit will record a thread stack dump at the beginning and the end of the recording. If the Thread dump interval setting is also specified, thread dumps will be recorded at regular intervals for the duration of the recording.

Thread dump interval: This setting controls how often, in seconds, to record the thread stack dumps mentioned earlier.

Latencies: If this setting is enabled, the JRA recording will contain latency data. For more information on latencies, please refer to the section Latency later in this article.

Latency threshold: To limit the amount of data in the recording, it is possible to set a threshold for the minimum latency (duration) required for an event to actually be recorded. This is normally set to 20 milliseconds. It is usually safe to lower this to around 1 millisecond without incurring too much profiling overhead. Less than that, and there is a risk that the profiling overhead will become unacceptably high and/or that the file size of the recording becomes unmanageably large. Latency thresholds can be set as low as nanosecond values by changing the unit in the unit combo box.

Enable CPU sampling: When this setting is enabled, JRockit will record the CPU load at regular intervals.

Heap statistics: This setting causes JRockit to do a heap analysis pass at the beginning and at the end of the recording. As heap analysis involves forcing extra garbage collections at these points in order to collect information, it is disabled in the low overhead template.

Delay before starting a recording: This option can be used to schedule the recording to start at a later time. The delay is normally defined in minutes, but the unit combo box can be used to specify the time in a more appropriate unit; everything from seconds to days is supported.

Before starting the recording, a location to which the finished recording is to be downloaded must be specified. Once the JRA recording is started, an editor will open up showing the options with which the recording was started and a progress bar. When the recording is completed, it is downloaded and the editor input is changed to show the contents of the recording.
Analyzing JRA recordings

Analyzing JRA recordings may easily seem like black magic to the uninitiated, so just like we did with the Management Console, we will go through each tab of the JRA editor to explain the information in that particular tab, with examples of when it is useful. Just like in the console, there are several tabs in different tab groups.

Metaprogramming and the Groovy MOP

Packt
31 May 2010
6 min read
(For more resources on Groovy DSL, see here.) In a nutshell, the term metaprogramming refers to writing code that can dynamically change its behavior at runtime. A Meta-Object Protocol (MOP) refers to the capabilities in a dynamic language that enable metaprogramming. In Groovy, the MOP consists of four distinct capabilities within the language: reflection, metaclasses, categories, and expandos. The MOP is at the core of what makes Groovy so useful for defining DSLs. The MOP is what allows us to bend the language in different ways in order to meet our needs, by changing the behavior of classes on the fly. This section will guide you through the capabilities of the MOP.

Reflection

To use Java reflection, we first need to access the Class object for any Java object in which we are interested through its getClass() method. Using the returned Class object, we can query everything from the list of methods or fields of the class to the modifiers that the class was declared with. Below, we see some of the ways that we can access a Class object in Java and the methods we can use to inspect the class at runtime.

import java.lang.reflect.Field;
import java.lang.reflect.Method;

public class Reflection {
    public static void main(String[] args) {
        String s = new String();
        Class sClazz = s.getClass();
        Package _package = sClazz.getPackage();
        System.out.println("Package for String class: ");
        System.out.println(" " + _package.getName());

        Class oClazz = Object.class;
        System.out.println("All methods of Object class:");
        Method[] methods = oClazz.getMethods();
        for (int i = 0; i < methods.length; i++)
            System.out.println(" " + methods[i].getName());

        try {
            Class iClazz = Class.forName("java.lang.Integer");
            Field[] fields = iClazz.getDeclaredFields();
            System.out.println("All fields of Integer class:");
            for (int i = 0; i < fields.length; i++)
                System.out.println(" " + fields[i].getName());
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
        }
    }
}

We can access the Class object from an instance by calling its Object.getClass() method. If we don't have an instance of the class to hand, we can get the Class object by using .class after the class name, for example, String.class. Alternatively, we can call the static Class.forName, passing to it a fully-qualified class name. Class has numerous methods, such as getPackage(), getMethods(), and getDeclaredFields(), that allow us to interrogate the Class object for details about the Java class under inspection. The preceding example will output various details about the String, Object, and Integer classes.

Groovy Reflection shortcuts

Groovy, as we would expect by now, provides shortcuts that let us reflect classes easily. In Groovy, we can shortcut the getClass() method as a property access, .class, so we can access the class object in the same way whether we are using the class name or an instance. We can also treat the .class as a String and print it directly without calling Class.getName(), as the short sketch after this paragraph shows. The variable greeting is declared with a dynamic type, but has the type java.lang.String after the "Hello" String is assigned to it. Classes are first-class objects in Groovy, so we can assign String itself to a variable. When we do this, the object that is assigned is of type java.lang.Class. However, it describes the String class itself, so printing it will report java.lang.String. Groovy also provides shortcuts for accessing packages, methods, fields, and just about all other reflection details that we need from a class.
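The code listing that this passage describes is not included in the excerpt; a minimal Groovy sketch consistent with the description (the variable name greeting is taken from the text, the rest is assumed) might look like this:

def greeting = "Hello"
// greeting was declared dynamically, but now holds a java.lang.String
println greeting.class           // prints "class java.lang.String"
println greeting.class.name      // prints "java.lang.String" via Class.getName()

// classes are first-class objects, so String itself can be assigned to a variable
def stringClass = String
assert stringClass instanceof Class
println stringClass              // also prints "class java.lang.String"

With the .class shortcut covered, we can move on to the other reflection shortcuts.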
We can access these straight off the class identifier, as follows:

println "Package for String class"
println " " + String.package
println "All methods of Object class:"
Object.methods.each { println " " + it }
println "All fields of Integer class:"
Integer.fields.each { println " " + it }

Incredibly, these six lines of code do all of the same work as the 30 lines in our Java example. If we look at the preceding code, it contains nothing that is more complicated than it needs to be. Referencing String.package to get the Java package of a class is as succinct as you can make it. As usual, String.methods and String.fields return Groovy collections, so we can apply a closure to each element with the each method. What's more, the Groovy version outputs a lot more useful detail about the package, methods, and fields.

When using an instance of an object, we can use the same shortcuts through the class field of the instance.

def greeting = "Hello"
assert greeting.class.package == String.package

Expandos

An Expando is a dynamic representation of a typical Groovy bean. Expandos support typical get and set style bean access, but in addition to this they will accept gets and sets to arbitrary properties. If we try to access a non-existent property, the Expando does not mind; instead of causing an exception, it will return null. If we set a non-existent property, the Expando will add that property and set the value. In order to create an Expando, we instantiate an object of class groovy.util.Expando.

def customer = new Expando()
assert customer.properties == [:]
assert customer.id == null
assert customer.properties == [:]

customer.id = 1001
customer.firstName = "Fred"
customer.surname = "Flintstone"
customer.street = "1 Rock Road"

assert customer.id == 1001
assert customer.properties == [ id:1001, firstName:'Fred', surname:'Flintstone', street:'1 Rock Road']
customer.properties.each { println it }

In the preceding example, the id field of customer is accessible on the Expando even before it exists as a property of the bean. Once a property has been set, it can be accessed by using the normal field getter: for example, customer.id. Expandos are a useful extension to normal beans where we need to be able to dump arbitrary properties into a bag and we don't want to write a custom class to do so.

A neat trick with Expandos is what happens when we store a closure in a property. As we would expect, an Expando closure property is accessible in the same way as a normal property. However, because it is a closure, we can apply function call syntax to it to invoke the closure. This has the effect of seeming to add a new method on the fly to the Expando.

customer.prettyPrint = {
    println "Customer has following properties"
    customer.properties.each {
        if (it.key != 'prettyPrint')
            println " " + it.key + ": " + it.value
    }
}
customer.prettyPrint()

Here we appear to be able to add a prettyPrint() method to the customer object, which outputs to the console:

Customer has following properties
 surname: Flintstone
 street: 1 Rock Road
 firstName: Fred
 id: 1001

3rd International SOA Symposium

Packt
28 May 2010
2 min read
With 80 speaking sessions across 16 tracks, the International SOA and Cloud Symposium will feature the top experts from around the world. The conference will take place on October 5-6, 2010. There will also be a series of SOACP post-conference certification workshops running from October 7-13, 2010, including (for the first time in Europe) the Certified Cloud Computing Specialist Workshop.

The Agenda for 2010 is now online and contains internationally recognized speakers from organizations such as Microsoft, HP, IBM, Oracle, SAP, Amazon, Red Hat, Vordel, Layer7, TIBCO, Logica, SOA Systems, US Department of Defense, and CGI. International experts including Thomas Erl, Dirk Krafzig, Stefan Tilkov, Mark Little, Brian Loesgen, John deVadoss, Nicolai Josuttis, Tony Shan, Toufic Boubez, Paul C. Brown, Clemens Utschig, Satadru Roy, David Chou, and many more will provide new and exclusive coverage of the latest SOA and Cloud Computing topics and innovations.

Themes and Topics

Exploring Modern Service Technologies and Practices:
•    Service Governance & Scalability
•    Service Architecture & Service Engineering Innovation
•    SOA Case Studies & Strategic Planning
•    REST Service Design & RESTful SOA
•    Service Security & Policies
•    Semantic Services & Patterns
•    Service Modelling & BPM

Scaling Your Business Into the Cloud:
•    The Latest Cloud Computing Technology
•    Building and Working with Cloud-Based Services
•    Cloud Computing Business Strategies
•    Case Studies & Business Models
•    Understanding SOA & Cloud Computing
•    Cloud-based Infrastructure & Products
•    Semantic Web and the Cloud

For further information, see http://soasymposium.com/agenda2010.php.

Aside from the opportunity to network with 500 practitioners and 80 experts, the Symposium will feature the exclusive Pattern Review Committee, multiple book launches, and galleys. Following the success of previous years' International SOA Symposia in Amsterdam and Rotterdam, this edition will be held in Berlin. Every SOA Symposium and Cloud Symposium event is dedicated to providing valuable content specifically for IT practitioners.

Register by August 31st to receive an early bird discount: with the 10% discount you pay only €981 excl. VAT for two Conference Days. Register now at www.soasymposium.com or www.cloudsymposium.com.

An Introduction to Flash Builder 4: Network Monitor

Packt
13 May 2010
3 min read
Adobe Flash Builder 4 (formerly known as Adobe Flex Builder), which no doubt needs no words of introduction, has become a de facto standard in rich internet application development. The latest version is considered a ground-breaking release, not only for its dozens of new and enhanced features, but also for its new component architectures like Spark, its designer-developer workflow, integration with Flash and Catalyst, data-centric development, unit testing, debugging enhancements, and so on.

In this article, we'll get acquainted with a brand new premium feature of Adobe Flash Builder 4 called Network Monitor. Network Monitor enables developers to inspect and monitor client-server traffic in the form of textual, XML, AMF, or JSON data within Adobe Flash Builder 4. It shows real-time data traffic between the application and a local or remote server, along with a wealth of other related information about the transferred data, such as status, size, body, and so on. If you have used Firebug (a Firefox plugin), then you will appreciate Network Monitor too. It is extremely handy during HTTP errors for checking the response, which is not accessible from the fault event object.

Creating a Sample Application

Enough talking; let's start and create a very simple application which will serve as groundwork for exploring Network Monitor. Assuming you are already equipped with basic knowledge of application creation, we will move on quickly without going through minor details. This sample application will read the Packt Publishing official RSS feed and display every news title along with its publishing date in a DataGrid control. Network Monitor will spring into action when the data request is triggered.

Go to the File menu and select New > Flex Project. Insert the information in the New Flex Project dialog box according to the following screenshot and hit Enter.

In Flex 4, all the non-visual MXML components, such as RPC components, effects, validators, formatters, and so on, are declared inside the <fx:Declarations> tag. Declare an HTTPService component inside the <fx:Declarations> tag:
Set its id property to newsService.
Set its url property to https://www.packtpub.com/rss.xml.
Set its showBusyCursor property to true and its resultFormat property to e4x.
Generate result and fault event handlers, though only the result event handler will be used.

Your HTTPService code should look like the following:

<s:HTTPService id="newsService"
    url="https://www.packtpub.com/rss.xml"
    showBusyCursor="true"
    resultFormat="e4x"
    result="newsService_resultHandler(event)"
    fault="newsService_faultHandler(event)"/>

Now set the application layout to VerticalLayout:

<s:layout>
    <s:VerticalLayout verticalAlign="middle" horizontalAlign="center"/>
</s:layout>

Add a Label control and set its text property to Packt Publishing.
Add a DataGrid control, set its id property to dataGrid, and add two DataGridColumns to it.
Set the first column's dataField property to title and its headerText to Title.
Set the second column's dataField property to pubDate and its headerText to Date.

Your controls should look like the following:

<s:Label text="Packt Publishing" fontWeight="bold" fontSize="22"/>
<mx:DataGrid id="dataGrid" width="600">
    <mx:columns>
        <mx:DataGridColumn dataField="title" headerText="Title"/>
        <mx:DataGridColumn dataField="pubDate" width="200" headerText="Date"/>
    </mx:columns>
</mx:DataGrid>

Finally, add the following code to newsService's result handler:

var xml:XML = XML(event.result);
dataGrid.dataProvider = xml..item;
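To put that handler code in context, the completed event handlers might look roughly like the following sketch. The handler names match the stubs generated earlier, but the body of the fault handler is only an assumption, since the article leaves it empty:

import mx.rpc.events.ResultEvent;
import mx.rpc.events.FaultEvent;

protected function newsService_resultHandler(event:ResultEvent):void
{
    // resultFormat="e4x" means event.result can be treated as XML
    var xml:XML = XML(event.result);
    // E4X descendant selector: every <item> node in the RSS feed
    dataGrid.dataProvider = xml..item;
}

protected function newsService_faultHandler(event:FaultEvent):void
{
    // a good place to use the Network Monitor: the raw HTTP response is
    // visible there even when the fault event object does not expose it
    trace(event.fault.faultString);
}

Once the HTTPService's send() method is called (for example, from the application's creationComplete handler), the Network Monitor should show the request to the RSS feed and its response in real time.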

Creating a New Publication using Mobile Database Workbench with Oracle Mobile Server

Packt
13 May 2010
8 min read
If you are a mobile device user, it is likely that you would have performed a sync at one point in time (with or without being aware of it). We are all familiar with the convenience of being able to just dock our PDA devices and have our calendars, tasks, and contacts automatically synced to our desktop machines. The synchronization process is a necessity for any type of mobile device, whether it's a Smart Phone, iPhone, or a Pocket PC. The core of this necessity is simple—people need to have access to their data when they're on the move and when they're back at the office, and this data needs to be consistent—wherever they're accessing it from.

In a business scenario, the importance of this necessity increases manifold—it's not just about your personal data anymore. The data you've keyed in on your PDA needs to be synced to the server so that it can be shared with other users, used to generate reports, or even sent for number-crunching. With hundreds of mobile users synchronizing their data and server-side applications updating this data at the same time, things can quickly get messy. The synchronization process has to ensure that conflicts are gracefully handled, auto-generated numbers don't overlap, that each user only syncs down the data they're meant to see, and so on.

The Oracle Mobile Server can be a bit tedious to set up for first-time users. Once you get going, however, it can be a powerful tool that can manage not only database synchronization but also mobile application deployment. A publication represents an application (and its database) in the Oracle mobile server. You can create a publication through the Mobile Database Workbench tool provided with Oracle Mobile Server.

Creating a new mobile project

Launch the Mobile Database Workbench tool from Start | All Programs | Oracle Database Lite 10g | Mobile Database Workbench.
Create a new project by clicking on the File | New | Project menu item in the Mobile Database Workbench window. A project creation wizard will run. Specify a name for your project and a location to store the project files.
The next screen will request you to key in the mobile repository particulars. Specify your mobile repository connection settings, and use the mobile server administrator password you specified earlier to log in.
In the next step, specify a schema to use for the application. As you've created the master tables in the MASTER schema, you can specify your MASTER account username and password here.
The next screen will show a summary of what you've configured so far. Click the Finish button to generate the project. If your project is generated successfully, you should be able to see your project and a tree list of its components in the left pane.

Adding publication items to your project

Each publication item corresponds to a database table that you intend to publish. For example, if your application contained five tables, you will need to create five publication items. Let's create the publication items now for the Accounts, AccountTasks, AccountHistories, AccountFiles, and Products tables.

Click on the File | New | Publication Item menu item to launch the Publication Item wizard. In the first step of the wizard, specify a name for the publication item (use the table name as a rule of thumb). There are two options here worth noting:

Synchronization refresh type
This refers to the type of refresh used for a particular table:
Fast: This is a type of incremental refresh—only the changes are synced down from the server during a sync. This is the most common mode of refresh used.
Complete: In this type of refresh, all content is synced down from the server during each sync. It is comparatively more time consuming and resource intensive. You might use this option with tables containing small lists of data that change very frequently.
Queue based: This is a custom refresh in that the developer can define the entire logic for the sync. It can be used for custom scenarios that may not exactly require synchronization—for instance, you might need to simply collect data on the client and have it stored at the server. In such a case, the queue-based refresh works better because you can bypass the overhead of conflict detection.

Enable automatic synchronization
Automatic synchronization allows a sync to be initiated automatically in the background of the mobile device when a set of rules is met. For example, you might decide to use automatic synchronization if you wanted to spread out the synchronization load over time and reduce peak load on the server.

In the next step, choose the table that you want to map the publication item to. Select the MASTER schema, and click the Search button to retrieve a list of the tables under this schema. Locate the Accounts table and highlight it.
In the next screen, you will need to select all the columns you need from the Accounts table. As you need to sync every single column from the snapshot to the master table, include all columns. Move all columns from the Available list to the Selected list using the arrow buttons and click on the Next button to proceed.

The next step is one of the most important steps in creating a publication item. The SQL statement shown here basically defines how data is retrieved from the Accounts table at the server and synced down to the snapshot on the mobile device. This SQL statement is called the Publication Item Query. The first obvious thing you need to do is to edit the default query. You need to include a filter to sync down only the accounts owned by the specific mobile device user. You can easily use a filter that looks like the following:

WHERE OwnerID = :OwnerID

The following screenshot shows how your Publication Item Query will look after editing. If any part of it is defined or formatted incorrectly, you will receive a notification. Click on Next after that to get to the summary screen, then click on the Finish button to generate the publication item.

After creating the publication item for the Accounts table, let's move on to a child table—the AccountTasks table. Create another publication item in the same fashion that maps to the AccountTasks table. At Step 4 of the wizard, the Publication Item Query that you need to specify will be a little bit different. The AccountTasks table does not contain the OwnerID field, so how do we filter what gets synced down to each specific mobile device? You obviously don't want to sync down every single record in this table—including those that are not meant to be accessible by the specific mobile device user. One way to still apply the OwnerID filter is to use a table join with the Accounts table.
You can easily specify a table join in the following manner:

SELECT "TASKID", A."ACCOUNTGUID", "TASKSUBJECT", "TASKDESCRIPTION", "TASKCREATED", "TASKDATE", "TASKSTATUS"
FROM MASTER.ACCOUNTTASKS A, MASTER.ACCOUNTS B
WHERE A.ACCOUNTGUID = B.ACCOUNTGUID
AND B.OWNERID = :OwnerID

If you try to save the Publication Item Query above in the Edit Query box, it may prompt you to select the primary base object for the publication item (as shown in the following screenshot). This should be set to AccountTasks because we are creating a publication item that maps to this table. If you choose the Accounts table again, you will end up with two publication items that map to the same Accounts table. This will cause problems when you attempt to add both items to a publication.

If you have typed in everything correctly, you will be able to see your Publication Item Query show up in the Query tab shown as follows. You can then click on the Next and Finish buttons to complete the wizard.

Now that you've seen how to create a publication item based on a child table, repeat the same steps above for the other child tables: AccountFiles and AccountHistories.

The last table, the Products table, deserves a special mention because it's different. You do not need a filter for this table, simply because every mobile device user will need to see the full list of products. You can, therefore, use the default Publication Item Query for the Products table:

SELECT "PRODUCTID", "PRODUCTCODE", "PRODUCTNAME", "PRODUCTPRICE"
FROM MASTER.Products

After you've done this, you can now move on to creating the "sequences" necessary in this mobile application.

Setting up MSMQ on your Mobile and Writing MSMQ Application with .NET Compact Framework 3.5

Packt
29 Apr 2010
3 min read
Let's get started.

Setting up Microsoft Messaging Queue Service (MSMQ) on your mobile device

MSMQ is not installed by default on the Windows Mobile platform. This section will guide you on how to install MSMQ on your mobile device or device emulator. You will first need to download the Redistributable Server Components for Windows Mobile 5.0 package (which can also be used for Windows Mobile 6.0) from this location: http://www.microsoft.com/downloads/details.aspx?FamilyID=cdfd2bb2-fa13-4062-b8d1-4406ccddb5fd&displaylang=en

After downloading and unzipping this file, you will have access to the MSMQ.arm.cab file in the following folder: Optional Windows Mobile 5.0 Server Components\msmq

Copy this file via ActiveSync to your mobile device and run it on the device. This package contains two applications (and a bunch of other DLL components) that you will be using frequently on the device:
msmqadm.exe: This is the command-line tool that allows you to start and stop the MSMQ service on the mobile device and also configure MSMQ settings. It can also be invoked programmatically from code.
visadm.exe: This tool does the same thing as above, but provides a visual interface.

These two files will be unpacked into the Windows folder of your mobile device. The following DLL files will also be unpacked into the Windows folder:
msmqd.dll
msmqrt.dll
Verify that these files exist.

The next thing you need to do is to change the name of your device (if you haven't done so earlier). In most cases, you are probably using the Windows Mobile Emulator, which comes with an unassigned device name by default. To change your device name, navigate to Settings | System | About on your mobile device. You can change the device name in the Device ID tab.

At this point, you have the files for MSMQ unpacked, but it isn't exactly installed yet. To do this, you must invoke either msmqadm.exe or visadm.exe. Launch the following application: \Windows\visadm.exe

A pop-up window will appear. This window contains a text box and a Run button that allows you to type in the desired command and to execute it. The first command you need to issue is the register install command. Type in the command and click the Run button. No message will be displayed in the window. This command will install MSMQ (as a device driver) on your device.

Run the following commands in the given order next (one after the other):
register: You will need to run the register command one more time (without the install keyword) to create the MSMQ configuration keys in the registry.
enable binary: This command enables the proprietary MSMQ binary protocol used to send messages to remote queues.
enable srmp: This command enables SRMP (SOAP Reliable Messaging Protocol), for sending messages to remote queues over HTTP.
start: This command starts the MSMQ service.

Verify that the MSMQ service has been installed successfully by clicking on the Shortcuts button and then clicking the Verify button in the ensuing pop-up window. You will be presented with a pop-up dialog as shown in the following screenshot.

MSMQ log information

If you scroll down in this same window, you will find the Base Dir path, which contains the MSMQ auto-generated log file. This log file, named MQLOGFILE by default, contains useful MSMQ-related information and error messages. After you've done the preceding steps, you will need to do a soft reset of your device. The MSMQ service will automatically start upon boot up.
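With the MSMQ service running on the device, the next step (which this excerpt stops short of) is writing code against it from the .NET Compact Framework 3.5. The following C# fragment is only a rough illustration of that idea: the queue name and message text are made up, and it assumes a reference to the System.Messaging assembly, whose Compact Framework build exposes the same basic members used here as the desktop version:

using System.Messaging;

// path of a local private queue on the device (hypothetical name)
string queuePath = @".\private$\testqueue";

MessageQueue queue;
try
{
    // create the queue if it does not exist yet
    queue = MessageQueue.Create(queuePath);
}
catch (MessageQueueException)
{
    // the queue already exists, so just open it instead
    queue = new MessageQueue(queuePath);
}

// send a simple text message; the default formatter serializes the body as XML
queue.Send("Hello from the device", "Test message");

If the send succeeds, the message (and any delivery errors) should also be visible in the MQLOGFILE log mentioned above.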

Data Validation in Silverlight 4

Packt
23 Apr 2010
7 min read
With Silverlight, data validation has been fully implemented, allowing controls to be bound to data objects and those data objects to handle the validation of data and provide feedback to the controls via the Visual State Machine. The Visual State Machine is a feature of Silverlight used to render views of a control based on its state. For instance, the mouse-over state of a button can actually change the color of the button, show or hide parts of the control, and so on. Controls that participate in data validation contain a ValidationStates group that includes the Valid, InvalidUnfocused, and InvalidFocused states. We can implement custom styles for these states to provide visual feedback to the user.

Data object

In order to take advantage of the data validation in Silverlight, we need to create a data object or client-side business object that can handle the validation of data.

Time for action – creating a data object

We are going to create a data object that we will bind to our input form to provide validation. Silverlight can bind to any properties of an object, but for validation we need to do two-way binding, for which we need both a get and a set accessor for each of our properties. In order to use two-way binding, we will need to implement the INotifyPropertyChanged interface, which defines a PropertyChanged event that Silverlight will use to update the binding when a property changes.

Firstly, we will need to switch over to Visual Studio and add a new class named CustomerInfo to the Silverlight project. Replace the body of the CustomerInfo.cs file with the following code:

using System;
using System.ComponentModel;

namespace CakeORamaData
{
    public class CustomerInfo : INotifyPropertyChanged
    {
        public event PropertyChangedEventHandler PropertyChanged = delegate { };

        private string _customerName = null;
        public string CustomerName
        {
            get { return _customerName; }
            set
            {
                if (value == _customerName) return;
                _customerName = value;
                OnPropertyChanged("CustomerName");
            }
        }

        private string _phoneNumber = null;
        public string PhoneNumber
        {
            get { return _phoneNumber; }
            set
            {
                if (value == _phoneNumber) return;
                _phoneNumber = value;
                OnPropertyChanged("PhoneNumber");
            }
        }

        private string _email = null;
        public string Email
        {
            get { return _email; }
            set
            {
                if (value == _email) return;
                _email = value;
                OnPropertyChanged("Email");
            }
        }

        private DateTime _eventDate = DateTime.Now.AddDays(7);
        public DateTime EventDate
        {
            get { return _eventDate; }
            set
            {
                if (value == _eventDate) return;
                _eventDate = value;
                OnPropertyChanged("EventDate");
            }
        }

        private void OnPropertyChanged(string propertyName)
        {
            PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
        }
    }
}

Code Snippets

Code snippets are a convenient way to stub out repetitive code and increase productivity by removing the need to type a bunch of the same syntax over and over. The following is a code snippet used to create properties that execute the OnPropertyChanged method; it can be very useful when implementing properties on a class that implements the INotifyPropertyChanged interface.

To use the snippet, save the file as propnotify.snippet to your hard drive. In Visual Studio, go to Tools | Code Snippets Manager (Ctrl + K, Ctrl + B) and click the Import button. Find the propnotify.snippet file and click Open; this will add the snippet. To use the snippet in code, simply type propnotify and hit the Tab key; a property will be stubbed out, allowing you to change the name and type of the property.
<?xml version="1.0" encoding="utf-8" ?>
<CodeSnippets>
  <CodeSnippet Format="1.0.0">
    <Header>
      <Title>propnotify</Title>
      <Shortcut>propnotify</Shortcut>
      <Description>Code snippet for a property that raises the PropertyChanged event in a class.</Description>
      <Author>Cameron Albert</Author>
      <SnippetTypes>
        <SnippetType>Expansion</SnippetType>
      </SnippetTypes>
    </Header>
    <Snippet>
      <Declarations>
        <Literal>
          <ID>type</ID>
          <ToolTip>Property type</ToolTip>
          <Default>int</Default>
        </Literal>
        <Literal>
          <ID>property</ID>
          <ToolTip>Property name</ToolTip>
          <Default>MyProperty</Default>
        </Literal>
        <Literal>
          <ID>field</ID>
          <ToolTip>Private field</ToolTip>
          <Default>_myProperty</Default>
        </Literal>
        <Literal>
          <ID>defaultValue</ID>
          <ToolTip>Default Value</ToolTip>
          <Default>null</Default>
        </Literal>
      </Declarations>
      <Code Language="csharp">
        <![CDATA[private $type$ $field$ = $defaultValue$;

public $type$ $property$
{
    get { return $field$; }
    set
    {
        if (value == $field$) return;
        $field$ = value;
        OnPropertyChanged("$property$");
    }
}
$end$]]>
      </Code>
    </Snippet>
  </CodeSnippet>
</CodeSnippets>

What just happened?

We created a data object or client-side business object that we can use to bind to our input controls. We implemented the INotifyPropertyChanged interface so that our data object can raise the PropertyChanged event whenever the value of one of its properties is changed. We also defined a default delegate value for the PropertyChanged event to prevent us from having to do a null check when raising the event. Not to mention, we have a nice snippet for stubbing out properties that raise the PropertyChanged event.

Now we will be able to bind this object to Silverlight input controls, and the controls can cause the object values to be updated, so that we can provide data validation from within our data object rather than having to include validation logic in our user interface code.

Data binding

We are going to bind our CustomerInfo object to our data entry form, using Blend. Be sure to build the solution before switching back over to Blend.
With MainPage.xaml open in Blend, select the LayoutRoot control. In the Properties panel, enter DataContext in the search field and click the New button:
In the dialog that opens, select the CustomerInfo class and click OK:
Blend will set the DataContext of the LayoutRoot to an instance of a CustomerInfo class:
Blend inserts a namespace for our class and sets the Grid.DataContext in the XAML of MainPage.xaml:

<Grid.DataContext>
    <local:CustomerInfo/>
</Grid.DataContext>

Now we will bind the value of CustomerName to our customerName textbox. Select the customerName textbox and then, on the Properties panel, enter Text in the search field. Click on the Advanced property options icon, which will open a context menu for choosing an option:
Click on the Data Binding option to open the Create Data Binding dialog:
In the Create Data Binding dialog (on the Explicit Data Context tab), click the arrow next to the CustomerInfo entry in the Fields list and select CustomerName:
At the bottom of the Create Data Binding dialog, click on the Show advanced properties arrow to expand the dialog and display additional binding options:
Ensure that TwoWay is selected in the Binding direction option and that Update source when is set to Explicit. This creates a two-way binding, meaning that when the value of the Text property of the textbox changes, the underlying property bound to Text will also be updated.
In our case, that is the CustomerName property of the CustomerInfo class.
Click OK to close the dialog; we can now see that Blend indicates that this property is bound by the yellow border around the property input field.
Repeat this process for both the phoneNumber and emailAddress textbox controls, to bind the Text property to the PhoneNumber and Email properties of the CustomerInfo class.

You will see that Blend has modified our XAML using the Binding Expression:

<TextBox x:Name="customerName" Margin="94,8,8,0"
    Text="{Binding CustomerName, Mode=TwoWay, UpdateSourceTrigger=Explicit}"
    TextWrapping="Wrap" VerticalAlignment="Top"
    Grid.Column="1" Grid.Row="1" MaxLength="40"/>

In the Binding Expression code, the Binding is using the CustomerName property as the binding Path. The Path (Path=CustomerName) attribute can be omitted, since the Binding class constructor accepts the path as an argument. The UpdateSourceTrigger is set to Explicit, which means the bound property on the data object is only updated when UpdateSource() is explicitly called on the binding expression in code, rather than on every change in the control.

For the eventDate control, enter SelectedDate into the Properties panel search field and, following the same process of data binding, select the EventDate property of the CustomerInfo class. Remember to ensure that TwoWay/Explicit binding is selected in the advanced options.
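The excerpt ends before the validation logic itself is added to CustomerInfo. As a rough illustration of one common Silverlight 4 pattern (not necessarily the exact approach the full article takes), a property setter can throw when it is handed invalid data, and the binding can opt in to reporting that exception so the control switches to its InvalidFocused/InvalidUnfocused visual state:

private string _phoneNumber = null;
public string PhoneNumber
{
    get { return _phoneNumber; }
    set
    {
        if (value == _phoneNumber) return;
        // validation performed by the data object itself; the thrown
        // exception is surfaced to the bound control as a validation error
        if (string.IsNullOrEmpty(value))
            throw new ArgumentException("Phone number is required.");
        _phoneNumber = value;
        OnPropertyChanged("PhoneNumber");
    }
}

For the exception to be treated as a validation error, the binding would also need ValidatesOnExceptions (and optionally NotifyOnValidationError) enabled, for example:

Text="{Binding PhoneNumber, Mode=TwoWay, UpdateSourceTrigger=Explicit, ValidatesOnExceptions=True, NotifyOnValidationError=True}"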

Creating Data Forms in Silverlight 4

Packt
23 Apr 2010
4 min read
Collecting data

Now that we have created a business object and a WCF service (see http://www.packtpub.com/article/creating-wcf-service-business-object-data-submission-silverlight), we are ready to collect data from the customer through our Silverlight application. Silverlight provides all of the standard input controls that .NET developers have come to know from Windows and ASP.NET development, and of course the controls are customizable through styles.

Time for action – creating a form to collect data

We will begin by creating a form in Silverlight for collecting the data from the client. We are going to include a submission form to collect the name, phone number, email address, and the date of the event for the person submitting the sketch. This will allow the client (Cake O Rama) to contact this individual and follow up on a potential sale.

We'll change the layout of MainPage.xaml to include a form for user input. We will need to open the CakeORama project in Expression Blend and then open MainPage.xaml for editing in the Blend art board. Our Ink capture controls are contained within a Grid, so we will just add a column to the Grid and place our input form right next to the Ink surface.

To add columns in Blend, select the Grid from the Objects and Timeline panel, position your mouse in the highlighted area above the Grid, and click to add a column:
Blend will add a <Grid.ColumnDefinitions> node to our XAML:

<Grid.ColumnDefinitions>
    <ColumnDefinition Width="0.94*"/>
    <ColumnDefinition Width="0.06*"/>
</Grid.ColumnDefinitions>

Blend also added a Grid.ColumnSpan="2" attribute to both the StackPanel and InkPresenter controls that were already on the page. We need to modify the StackPanel and inkPresenter so that they do not span both columns, thereby allowing us to increase the size of our second column.
In Blend, select the StackPanel from the Objects and Timeline panel:
In the Properties panel, you will see a property called ColumnSpan with a value of 2. Change this value to 1 and press the Enter key. We can see that Blend moved the StackPanel into the first column, and we now have a little space next to the buttons.
We need to do the same thing to the inkPresenter control, so that it is also within the first column. Select the inkPresenter control from the Objects and Timeline panel:
Change the ColumnSpan from 2 to 1 to reposition the inkPresenter into the left column:
The inkPresenter control should be positioned in the left column and aligned with the StackPanel containing our ink sketch buttons:
Now that we have moved the existing controls into the first column, we will change the size of the second column, so that we can start adding our input controls. We also need to change the overall size of the MainPage.xaml control to fit more information on the right side of the ink control. Click on the [UserControl] in the Objects and Timeline panel, and then in the Properties panel change the Width to 800:
Now we need to change the size of our grid columns. We can do this very easily in XAML, so switch to the XAML view in Blend by clicking on the XAML icon:
In the XAML view, change the grid column settings to give both columns an equal width:

<Grid.ColumnDefinitions>
    <ColumnDefinition Width="0.5*"/>
    <ColumnDefinition Width="0.5*"/>
</Grid.ColumnDefinitions>

Switch back to the design view by clicking on the design button:
Our StackPanel and inkPresenter controls are now positioned to the left of the page, and we have some empty space to the right for our input controls:
Select the LayoutRoot control in the Objects and Timeline panel and then double-click on the TextBlock control in the Blend toolbox to add a new TextBlock control:
Drag the control to the top and right side of the page:
On the Properties panel, change the Text of the TextBlock to Customer Information, change the FontSize to 12pt, and click on the Bold indicator.
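The excerpt stops here, but the XAML produced by those property changes would look roughly like the following sketch. The layout attributes are only an assumption based on where the control was dragged, and note that Blend displays font sizes in points, so 12 pt is stored as FontSize="16" (pixels) in the XAML:

<TextBlock Text="Customer Information" FontSize="16" FontWeight="Bold"
    Grid.Column="1" HorizontalAlignment="Center" VerticalAlignment="Top"
    Margin="0,8,0,0"/>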