
How-To Tutorials - Programming

1081 Articles

Software Documentation with Trac

Packt
21 Oct 2009
4 min read
Documentation: if there is one word that instills fear in most developers, it must be this one. No one in their right mind would argue against the value of documentation, but it is the actual act of writing it that concerns developers so. The secret of creating good documentation is to make the process of writing it as painless as possible and, if we are lucky, maybe even attractive to developers. The only practical way to achieve that is to reduce friction. The last thing we need when we are in the middle of fixing a bug is to wrestle with our word processor or, even worse, hunt for the right document to update.

What's in a name? Throughout the rest of this article, we will refer to various URLs that point to specific areas of our Trac environment, Subversion repository, or WebDAV folders. Whenever you see servername, replace it with your own server name.

Making Documentation Easy

One of the reasons Trac works so well for managing software development is that it is browser based. Apart from our development environment, the browser and our email client are the applications most likely to be installed and running on our computer. If access to our Trac environment is only a click away, it stands to reason that we are more likely to use it. We can refer to Trac as a "wiki on steroids" because of the way its developers have integrated the typical features of a wiki throughout the whole product. However, for all the extra features and integration, at its heart Trac is basically just a wiki, and this is the main reason why it is so useful in smoothing the documentation process. A wiki is a web application that allows visitors to create and modify its content. Let's expand on that slightly. As well as letting us view content, like a normal website, a wiki lets us create or edit content as we desire. This could take the form of creating new content, or simply touching up the spelling of something that already exists.
While the general idea with a wiki is that anyone can edit it, in practice this can lead to abuse, vandalism, or spam. The obvious solution is to require people to authenticate before they can edit.

Do we really need this security? Yes. Having these security requirements provides us with accountability: we will always be able to see when something was done, and by enforcing security we can also see who did it. While this does cause some administrative overhead to create and maintain authentication details for everyone involved with our development projects, the benefits outweigh the costs.

Accessing Trac

Before we look at how to modify and create pages, let's see how our Trac environment looks to a normal (i.e. unauthenticated) user. To do this, we open our web browser, enter the URL http://servername/projects/sandbox into the address bar, and press the Enter key. This takes us to the default page (which is actually called WikiStart). When we access our project as an unauthenticated (or anonymous, in Trac parlance) user, the majority of it looks and acts like a normal website, and the wiki in particular seems just like the usual collection of interlinked pages. However, as soon as we authenticate ourselves to Apache (which passes that information on to Trac), it all changes. If we click the Login link in the top right of the page, we are presented with our browser's usual authentication dialog box. Input the proper username and password and click OK. If we enter them correctly, we are taken back to the same page, but this time there are two differences. Firstly, instead of the Login link we see the text "logged in as" followed by the username we used, and a Logout link. Secondly, if we scroll to the bottom of the page, there are some buttons that allow us to modify the page in various ways. Anonymous users have permission only to view wiki pages, while authenticated users have full control. We should try that out now: click the Logout link and scroll down again, and you will see that the buttons are absent.


Change Control for Personal Projects - Subversion Style

Packt
21 Oct 2009
5 min read
Who Should Read This

Read on if you are new to change control, or believe that change control applies only to software, or that it is meant only for large projects. If you are a software pro working on large software projects, you can still read this if you want a gentle introduction to Subversion, or svn as it is called.

Introduction

We have all heard those trite remarks about change -- "... change is the only constant ..." or similar ones, especially before an unpleasant corporate announcement. These overused remarks about change are unfortunately true. During the course of a day, we make numerous (hopefully!) interrelated changes, updates, or transformations to our work products to reach specific project goals. Needless to say, these changes need to be tracked, along with the rationale behind each, if we are to prevent ourselves from repeating mistakes, or simply want to recall why we did what we did one month ago! Note that we are not talking about only code or documents here; your work products could be portfolios of photographs, animations, or some arbitrary binary format. A change control discipline also gives you additional advantages, such as being able to develop simultaneous versions of work products for different purposes or clients, rolling back to a previous arbitrary version, or setting up trial development in a so-called branch and bringing it back into the main work stream after due review. You also have a running history of how your work product has evolved over time and features. Fetching from a change-managed repository also prevents you from creating those fancifully named multiple copies of a file just to keep track of its versions. To reiterate: we use the words 'work product' and 'development' in the broadest sense, not just as applied to software. You might as well be creating a banner ad for your client as much as a Firefox plugin.
In the rest of this article we will see how to build a simple personal change control discipline for your day-to-day work using a version control tool. As you will note, 'control' and 'management' have been used interchangeably, though a little hair splitting will yield rich dividends in terms of how different these terms are.

Subversion

Subversion is a version control system available on Linux (and similar) platforms. If you are trapped in a proprietary world by choice, circumstance, or compulsion, you should try TortoiseSVN. Here, we confine ourselves to the Linux platform. Subversion works by creating a time line of your work products from their inception (or from the point they are brought under version control) to the present point in time, by capturing snapshots of your work products at discrete points that you decide. Each snapshot is a version. You can traverse this time line and extract specific versions for use. How does Subversion do it? It versions entire directories. A new version of your directory will be created even if you change only one file in it. Don't worry; this does not lead to an explosion of file size with each version. Explaining some terminology, albeit informally, should make the going easier from here. Subversion stores your project(s) in a repository. For the purpose of this article, our repository will stay on the local machine. A revision is nothing but a particular snapshot of the project directory. A working directory is your sandbox. This is where you check out a particular version of your project directory from the repository, make any modifications to it, and then check it back in to the repository. Revision numbers are bumped up with each check-in. You can revert a configuration item, which is like undoing any changes you made. If all this sounds a little abstruse, don't worry, because we will shortly set up our repository so that you can try things out.
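The repository, revision, and checkout terminology just introduced can be modeled in a few lines of Java. This is a toy sketch of the concepts only; real Subversion versions entire directory trees and stores them efficiently, not as full copies, and the class and method names here are invented for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of a version-controlled time line: each commit appends a
// snapshot and bumps the revision number; checkout retrieves any
// revision from the time line.
class ToyRepository {
    private final List<String> timeline = new ArrayList<>();

    int commit(String snapshot) {      // check in: a new revision is created
        timeline.add(snapshot);
        return timeline.size();        // revision numbers start at 1
    }

    String checkout(int revision) {    // extract a specific version for use
        return timeline.get(revision - 1);
    }

    int headRevision() {               // the present point on the time line
        return timeline.size();
    }
}
```

Committing twice and then checking out revision 1 recovers the earlier snapshot, which is exactly the "rolling back to a previous arbitrary version" advantage described above.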
A commit is when you, well, commit a change done to a file into the repository. Subversion is bundled with most Linux distributions. Find out if you have it with a 'man svn', 'svn -h', or 'whereis svn' command.

Setting up Your Repository

You can set up your repository in your home directory if you are working in a shared environment. If you have a machine to yourself, you might want to create an 'svn' account with /sbin/nologin (which politely refuses logins) as the shell. Your repository might then be '/home/svn/repos'. Subversion is a command line tool, but the only command you will ever issue for the purpose of this article is the one that sets up your repository:

$ svnadmin create /path/to/your/repository

The rest, as they say, is GUI!

Let Us Get Visual

A GUI for Subversion is a great tool for learning and working, even if you decide to settle for the command line once you get more proficient. eSvn (http://zoneit.free.fr/esvn/) is a Qt-based graphical front end for Subversion. Follow the instructions with the download to compile and install eSvn. Run esvn and open the File | Options... dialog. Make sure you enter the correct path to svn, if nothing else.


OSWorkflow and the Quartz Task Scheduler

Packt
21 Oct 2009
10 min read
Task Scheduling with Quartz

Both people-oriented and system-oriented BPM systems need a mechanism to execute tasks within an event or temporal constraint, for example every time a state change occurs, or every two weeks. BPM suites address these requirements with a job-scheduling component responsible for executing tasks at a given time. OSWorkflow, the core of our open-source BPM solution, doesn't include these temporal capabilities by default. Thus, we can enhance OSWorkflow by adding the features present in the Quartz open-source project.

What is Quartz?

Quartz is a Java job-scheduling system capable of scheduling and executing jobs in a very flexible manner. The latest stable Quartz version is 1.6. You can download Quartz from http://www.opensymphony.com/quartz/download.action.

Installing

The only file you need in order to use Quartz out of the box is quartz.jar. It contains everything you need for basic usage. Quartz configuration lives in the quartz.properties file, which you must put in your application's classpath.

Basic Concepts

The Quartz API is very simple and easy to use. The first concept that you need to be familiar with is the scheduler. The scheduler is the most important part of Quartz, managing, as the word implies, the scheduling and unscheduling of jobs and the firing of triggers. A job is a Java class containing the task to be executed, and the trigger is the temporal specification of when to execute the job. A job is associated with one or more triggers, and when a trigger fires, it executes all its related jobs. That's all you need to know to execute our Hello World job.

Integration with OSWorkflow

By complementing the features of OSWorkflow with the temporal capabilities of Quartz, our open-source BPM solution greatly enhances its usefulness. The Quartz-OSWorkflow integration can be done in two ways: Quartz calling OSWorkflow workflow instances, and OSWorkflow scheduling and unscheduling Quartz jobs.
We will cover the former first, by using trigger-functions, and the latter with the ScheduleJob function provider.

Creating a Custom Job

Jobs are built by implementing the org.quartz.Job interface, which declares a single method:

public void execute(JobExecutionContext context) throws JobExecutionException;

The interface is very simple and concise, with just one method to be implemented. The Scheduler will invoke the execute method when the trigger associated with the job fires. The JobExecutionContext object passed as an argument has all the context and environment data for the job, such as the JobDataMap. The JobDataMap is very similar to a Java Map, but provides strongly typed put and get methods. The JobDataMap is set in the JobDetail before scheduling the job, and can be retrieved later, during the execution of the job, via the JobExecutionContext's getJobDetail().getJobDataMap() method.

Trigger Functions

trigger-functions are a special type of OSWorkflow function designed specifically for job scheduling and external triggering. These functions are executed when the Quartz trigger fires, hence the name. trigger-functions are not associated with an action, and they have a unique ID. You shouldn't execute a trigger-function in your own code. To define a trigger-function in the workflow definition, put the trigger-functions declaration before the initial-actions element:

...
<trigger-functions>
  <trigger-function id="10">
    <function type="beanshell">
      <arg name="script">
        propertySet.setString("triggered", "true");
      </arg>
    </function>
  </trigger-function>
</trigger-functions>
<initial-actions>
...

This XML definition fragment declares a trigger-function (having an ID of 10) which executes a BeanShell script. This script puts a named property inside the PropertySet of the instance, but you can define a trigger-function just like any other Java- or BeanShell-based function.
To invoke this trigger-function, you will need an OSWorkflow built-in function provider that executes trigger-functions and schedules a custom job: the ScheduleJob FunctionProvider.

More about Triggers

Quartz's triggers are of two types: the SimpleTrigger and the CronTrigger. The former, as its name implies, serves very simple purposes, while the latter is more complex and powerful; it allows virtually unlimited flexibility in specifying time periods.

SimpleTrigger

SimpleTrigger is suited to firing a job at a specific point in time, such as Saturday the 1st at 3.00 PM, or at an exact point in time with the firing repeated at fixed intervals. The properties for this trigger are shown in the following table:

Start time: The fire time of the trigger.
End time: The end time of the trigger. If it is specified, it overrides the repeat count.
Repeat interval: The interval between repetitions. It can be 0 or a positive integer. If it is 0, the repetitions happen in parallel.
Repeat count: How many times the trigger will fire. It can be 0, a positive integer, or SimpleTrigger.REPEAT_INDEFINITELY.

CronTrigger

The CronTrigger is based on the concept of the UN*X cron utility. It lets you specify complex schedules, like every Wednesday at 5.00 AM, or every twenty minutes, or every 5 seconds on Mondays. Like the SimpleTrigger, the CronTrigger has a start time property and an optional end time. A CronExpression is made of seven parts, each representing a time component: 1 represents seconds, 2 represents minutes, 3 represents hours, 4 represents the day-of-month, 5 represents the month, 6 represents the day-of-week, and 7 represents the year (an optional field). Here are a couple of examples of cron expressions: 0 0 6 ? * MON means "every Monday at 6 AM", and 0 0 6 * * ? means "every day at 6 AM" (at least six fields are required; only the year field may be omitted).
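The positional layout of those fields can be made concrete with a few lines of plain Java that label each part of the first example above. This is a sketch for illustration only; it does not validate expressions the way Quartz's own CronExpression class does:

```java
// Labels the positional fields of a cron expression. The seventh field
// (year) is optional in Quartz and omitted in the example below.
public class CronFields {
    static final String[] FIELDS = {
        "seconds", "minutes", "hours", "day-of-month",
        "month", "day-of-week", "year"
    };

    static String describe(String expr) {
        String[] parts = expr.trim().split("\\s+");
        StringBuilder out = new StringBuilder();
        for (int i = 0; i < parts.length; i++) {
            if (i > 0) out.append(", ");
            out.append(FIELDS[i]).append('=').append(parts[i]);
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // "Every Monday at 6 AM"
        System.out.println(describe("0 0 6 ? * MON"));
        // prints: seconds=0, minutes=0, hours=6, day-of-month=?, month=*, day-of-week=MON
    }
}
```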
For more information about CronExpressions, refer to the following website: http://www.opensymphony.com/quartz/wikidocs/CronTriggers%20Tutorial.html.

Scheduling a Job

We will get a first taste of Quartz by executing a very simple job. The following snippet of code shows how easy it is to schedule a job:

SchedulerFactory schedFact = new org.quartz.impl.StdSchedulerFactory();
Scheduler sched = schedFact.getScheduler();
sched.start();
JobDetail jobDetail = new JobDetail("myJob", null, HelloJob.class);
Trigger trigger = TriggerUtils.makeHourlyTrigger(); // fire every hour
trigger.setStartTime(TriggerUtils.getEvenHourDate(new Date())); // start on the next even hour
trigger.setName("myTrigger");
sched.scheduleJob(jobDetail, trigger);

The scheduling snippet assumes a HelloJob class exists. It is a very simple class that implements the Job interface and just prints a message to the console:

package packtpub.osw;

import org.quartz.Job;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;

/**
 * Hello world job.
 */
public class HelloJob implements Job {
    public void execute(JobExecutionContext ctx) throws JobExecutionException {
        System.out.println("Hello Quartz world.");
    }
}

The first three lines of the scheduling snippet create a SchedulerFactory, an object that creates Schedulers, and then proceed to create and start a new Scheduler:

SchedulerFactory schedFact = new org.quartz.impl.StdSchedulerFactory();
Scheduler sched = schedFact.getScheduler();
sched.start();

This Scheduler will fire the trigger and subsequently the jobs associated with the trigger. After creating the Scheduler, we must create a JobDetail object that contains information about the job to be executed, the job group to which it belongs, and other administrative data.
JobDetail jobDetail = new JobDetail("myJob", null, HelloJob.class);

This JobDetail tells the Scheduler to instantiate a HelloJob object when appropriate; it has a null job group and a job name of "myJob". After defining the JobDetail, we must create and define the Trigger, that is, when the job will be executed, how many times, and so on:

Trigger trigger = TriggerUtils.makeHourlyTrigger(); // fire every hour
trigger.setStartTime(TriggerUtils.getEvenHourDate(new Date())); // start on the next even hour
trigger.setName("myTrigger");

TriggerUtils is a helper object used to simplify the trigger code. With its help, we create a trigger that will fire every hour, starting on the first even hour after the trigger is registered with the Scheduler. The last line of the snippet gives the trigger a name for housekeeping purposes. Finally, one last line of code associates the trigger with the job and puts them under the control of the Scheduler:

sched.scheduleJob(jobDetail, trigger);

When the next even hour arrives after this line of code is executed, the Scheduler will fire the trigger, which will execute the job by reading the JobDetail and instantiating HelloJob.class. This requires that the class implementing the Job interface has a no-arguments constructor. An alternative method is to use an XML file for declaring the jobs and triggers. This will not be covered in the book, but you can find more information about it in the Quartz documentation.

Scheduling from a Workflow Definition

The ScheduleJob FunctionProvider has two modes of operation, depending on whether you specify the jobClass parameter or not. If you declare the jobClass parameter, ScheduleJob will create a JobDetail with jobClass as the class implementing the Job interface.
<pre-functions>
  <function type="class">
    <arg name="class.name">com.opensymphony.workflow.util.ScheduleJob</arg>
    <arg name="jobName">Scheduler Test</arg>
    <arg name="triggerName">SchedulerTestTrigger</arg>
    <arg name="triggerId">10</arg>
    <arg name="jobClass">packtpub.osw.SendMailIfActive</arg>
    <arg name="schedulerStart">true</arg>
    <arg name="local">true</arg>
  </function>
</pre-functions>

This fragment will schedule a job based on the SendMailIfActive class, with the current time as the start time. ScheduleJob, like any FunctionProvider, can be declared as a pre- or a post-function. On the other hand, if you don't declare the jobClass, ScheduleJob will use WorkflowJob.class as the class implementing the Job interface. When fired, this job executes a trigger-function on the instance that scheduled it.

<pre-functions>
  <function type="class">
    <arg name="class.name">com.opensymphony.workflow.util.ScheduleJob</arg>
    <arg name="jobName">Scheduler Test</arg>
    <arg name="triggerName">SchedulerTestTrigger</arg>
    <arg name="triggerId">10</arg>
    <arg name="schedulerStart">true</arg>
    <arg name="local">true</arg>
  </function>
</pre-functions>

This definition fragment will execute the trigger-function with ID 10 as soon as possible, because no CronExpression or start time arguments have been specified. This FunctionProvider has the arguments shown in the following table:


Configuring JDBC in Oracle JDeveloper

Packt
21 Oct 2009
14 min read
Introduction

Unlike the Eclipse IDE, which requires a plug-in, JDeveloper has a built-in provision to establish a JDBC connection with a database. JDeveloper is the only Java IDE with an embedded application server, the Oracle Containers for J2EE (OC4J). A database-backed web application may therefore run in JDeveloper without requiring a third-party application server, although JDeveloper also supports third-party application servers. Starting with JDeveloper 11, application developers may point the IDE to an application server instance (or OC4J instance), including third-party application servers, that they want to use for testing during development. JDeveloper provides connection pooling for the efficient use of database connections. A database connection may be used in an ADF BC application or in a JavaEE application. A database connection in JDeveloper may be configured in the Connections Navigator. A Connections Navigator connection is available as a DataSource registered with a JNDI naming service. The database connection in JDeveloper is a reusable named connection that developers configure once and then use in as many of their projects as they want. Depending on the nature of the project and the database connection, the connection is configured in the bc4j.xcfg file or as a JavaEE data source. Here, it is necessary to distinguish between "data source" and "DataSource". A data source is a source of data; for example, an RDBMS database is a data source. A DataSource is an interface that represents a factory for JDBC Connection objects. JDeveloper uses the term Data Source or data source to refer to a factory for connections. We will also use the term data source to refer to a factory for connections, which in the javax.sql package is represented by the DataSource interface. A DataSource object may be created from a data source registered with the JNDI (Java Naming and Directory Interface) naming service using a JNDI lookup.
A JDBC Connection object may be obtained from a DataSource object using the getConnection method. As an alternative to configuring a connection in the Connections Navigator, a data source may also be specified directly in the data source configuration file, data-sources.xml. In this article we will discuss the procedure to configure a JDBC connection and a JDBC data source in the JDeveloper 10g IDE. We will use the MySQL 5.0 database server and the MySQL Connector/J 5.1 JDBC driver, which supports the JDBC 4.0 specification. In this article you will learn the following: creating a database connection in the JDeveloper Connections Navigator; configuring the data source and connection pool associated with that connection; and the common JDBC connection errors. Before we create a JDBC connection and a data source, we will discuss connection pooling and DataSource.

Connection Pooling and DataSource

The javax.sql package provides the API for server-side database access. The main interfaces in the javax.sql package are DataSource, ConnectionPoolDataSource, and PooledConnection. The DataSource interface represents a factory for connections to a database and is the preferred method of obtaining a JDBC connection. An object that implements the DataSource interface is typically registered with a Java Naming and Directory API-based naming service. The DataSource interface implementation is driver-vendor specific and comes in three types. Basic implementation: there is a 1:1 correspondence between a client's Connection object and the connection with the database. This implies that for every Connection object there is a connection with the database, so the overhead of opening, initiating, and closing a connection is incurred for each client session.
Connection pooling implementation: a pool of Connection objects is available, from which connections are assigned to the different client sessions. A connection pool manager implements the connection pooling. When a client session does not require a connection, the connection is returned to the connection pool and becomes available to other clients. Thus, the overheads of opening, initiating, and closing connections are reduced. Distributed transaction implementation: produces a Connection object that is mostly used for distributed transactions and is always connection pooled. A transaction manager implements the distributed transactions. An advantage of using a data source is that code accessing a data source does not have to be modified when an application is migrated to a different application server; only the data source properties need to be modified. A JDBC driver that is accessed through a DataSource does not register itself with a DriverManager. A DataSource object is created using a JNDI lookup, and subsequently a Connection object is created from the DataSource object. For example, if a data source's JNDI name is jdbc/OracleDS, a DataSource object may be created using a JNDI lookup. First, create an InitialContext object, and then create a DataSource object using the InitialContext lookup method (note that lookup returns Object, so a cast is required). From the DataSource object, create a Connection object using the getConnection() method:

InitialContext ctx = new InitialContext();
DataSource ds = (DataSource) ctx.lookup("jdbc/OracleDS");
Connection conn = ds.getConnection();

The JNDI naming service, which we used to create a DataSource object, is provided by J2EE application servers such as the Oracle Application Server Containers for J2EE (OC4J) embedded in the JDeveloper IDE. A connection in a pool of connections is represented by the PooledConnection interface, not the Connection interface.
The connection pool manager, typically the application server, maintains a pool of PooledConnection objects. When an application requests a connection using the DataSource.getConnection() method, as we did in the jdbc/OracleDS data source example, the connection pool manager returns a Connection object, which is actually a handle to an object that implements the PooledConnection interface.

A ConnectionPoolDataSource object, which is typically registered with a JNDI naming service, represents a collection of PooledConnection objects. The JDBC driver provides an implementation of the ConnectionPoolDataSource, which is used by the application server to build and manage a connection pool. When an application requests a connection, if a suitable PooledConnection object is available in the connection pool, the connection pool manager returns a handle to the PooledConnection object as a Connection object. If a suitable PooledConnection object is not available, the connection pool manager invokes the getPooledConnection() method of the ConnectionPoolDataSource to create a new PooledConnection object. For example, if connectionPoolDataSource is a ConnectionPoolDataSource object, a new PooledConnection gets created as follows:

PooledConnection pooledConnection = connectionPoolDataSource.getPooledConnection();

The application does not have to invoke the getPooledConnection() method itself, though; the connection pool manager invokes it, and the JDBC driver implementing the ConnectionPoolDataSource creates a new PooledConnection and returns a handle to it. The connection pool manager then returns a Connection object, which is a handle to a PooledConnection object, to the application requesting a connection. When an application closes a Connection object using the close() method, as follows, the connection does not actually get closed:

conn.close();

Instead, the connection handle gets deactivated when an application closes a Connection object with the close() method.
The connection pool manager does the deactivation. When an application closes a Connection object with the close() method, any client info properties that were set using the setClientInfo method are cleared. The connection pool manager registers itself with a PooledConnection object using the addConnectionEventListener() method. When a connection is closed, the connection pool manager is notified; it deactivates the handle to the PooledConnection object and returns the PooledConnection object to the connection pool, to be used by another application. The connection pool manager is also notified if a connection has an error. A PooledConnection object is not closed until the connection pool is reinitialized, the server is shut down, or the connection becomes unusable.

In addition to connections being pooled, PreparedStatement objects are also pooled by default if the database supports statement pooling. Whether a database supports statement pooling can be discovered using the supportsStatementPooling() method of the DatabaseMetaData interface. PreparedStatement pooling is also managed by the connection pool manager. To be notified of PreparedStatement events, such as a PreparedStatement getting closed or becoming unusable, a connection pool manager registers itself with a PooledConnection object using the addStatementEventListener() method, and deregisters itself using the removeStatementEventListener() method. The addStatementEventListener and removeStatementEventListener methods are new in the PooledConnection interface in JDBC 4.0. Pooling of Statement objects is another new feature in JDBC 4.0: the Statement interface has two new methods for statement pooling, isPoolable() and setPoolable(). The isPoolable method checks if a Statement object is poolable, and the setPoolable method sets the Statement object to poolable.
When an application closes a PreparedStatement object using the close() method, the PreparedStatement object is not actually closed; it is returned to the pool of PreparedStatements. When the connection pool manager closes a PooledConnection object by invoking PooledConnection's close() method, all the associated statements also get closed. Pooling of PreparedStatements provides significant optimization, but if a large number of statements are left open, it may not be an optimal use of resources.

Thus, the following procedure is followed to obtain a connection in an application server using a data source:

1. Create a data source with a JNDI name binding to the JNDI naming service.
2. Create an InitialContext object and look up the JNDI name of the data source using the lookup method to create a DataSource object. If the JDBC driver implements the DataSource as a connection pool, a connection pool becomes available.
3. Request a connection from the connection pool. The connection pool manager checks if a suitable PooledConnection object is available. If one is available, the connection pool manager returns a handle to it as a Connection object to the application requesting a connection.
4. If a PooledConnection object is not available, the connection pool manager invokes the getPooledConnection() method of the ConnectionPoolDataSource, which is implemented by the JDBC driver. The JDBC driver creates a PooledConnection object and returns a handle to it, and the connection pool manager returns that handle as a Connection object to the application.
5. When the application closes the connection, the connection pool manager deactivates the handle and returns the PooledConnection object to the connection pool.
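The checkout-and-return cycle in the steps above can be sketched with a toy pool in plain Java. This is illustrative only: the names ToyPool and physicalConnectionsCreated are invented here, and a plain String stands in for the physical connection rather than the real javax.sql machinery:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Supplier;

// Toy connection pool: getConnection() reuses an idle connection when one
// exists, otherwise asks the factory (playing the role of
// ConnectionPoolDataSource.getPooledConnection()) for a new one; close()
// returns the connection to the pool instead of physically closing it.
class ToyPool {
    private final Deque<String> idle = new ArrayDeque<>();
    private final Supplier<String> factory;
    private int physicalCreated = 0;

    ToyPool(Supplier<String> factory) { this.factory = factory; }

    synchronized String getConnection() {
        if (!idle.isEmpty()) return idle.pop(); // reuse a pooled connection
        physicalCreated++;
        return factory.get();                   // create a new physical connection
    }

    synchronized void close(String conn) {
        idle.push(conn);                        // deactivate the handle, keep the connection
    }

    synchronized int physicalConnectionsCreated() { return physicalCreated; }
}
```

Checking out a connection, closing it, and checking out again touches only one physical connection, which is exactly the saving the pooling steps above describe.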
ConnectionPoolDataSource provides some configuration properties to configure a connection pool. These properties are not set by the JDBC client; they are implemented or augmented by the connection pool, and can be set in a data source configuration. Therefore, it is not the application that changes these settings, but the administrator of the pool (who sometimes also happens to be the developer). The connection pool properties supported by ConnectionPoolDataSource are described in the following table:

maxStatements (int): The maximum number of statements the pool should keep open. 0 (zero) indicates that statement caching is not enabled.
initialPoolSize (int): The initial number of connections the pool should have at the time of creation.
minPoolSize (int): The minimum number of connections in the pool. 0 (zero) indicates that connections are created as required.
maxPoolSize (int): The maximum number of connections in the connection pool. 0 (zero) indicates that there is no maximum limit.
maxIdleTime (int): The maximum duration (in seconds) a connection can be kept open without being used before the connection is closed. 0 (zero) indicates that there is no limit.
propertyCycle (int): The interval, in seconds, the pool should wait before enforcing the current policy defined by the connection pool properties.

Setting the Environment

Before getting started, we have to install the JDeveloper 10.1.3 IDE and the MySQL 5.0 database. Download JDeveloper from http://www.oracle.com/technology/software/products/jdev/index.html. Download MySQL Connector/J 5.1, the MySQL JDBC driver that supports the JDBC 4.0 specification. To install JDeveloper, extract the JDeveloper ZIP file to a directory. Log in to the MySQL database and set the database to test.
Create a database table, Catalog, which we will use in a web application. The SQL script to create the database table is listed below:

CREATE TABLE Catalog(CatalogId VARCHAR(25) PRIMARY KEY, Journal VARCHAR(25), Publisher VARCHAR(25), Edition VARCHAR(25), Title VARCHAR(45), Author VARCHAR(25));
INSERT INTO Catalog VALUES('catalog1', 'Oracle Magazine', 'Oracle Publishing', 'Nov-Dec 2004', 'Database Resource Manager', 'Kimberly Floss');
INSERT INTO Catalog VALUES('catalog2', 'Oracle Magazine', 'Oracle Publishing', 'Nov-Dec 2004', 'From ADF UIX to JSF', 'Jonas Jacobi');

MySQL does not support ROWID, for which support has been added in JDBC 4.0.

Having installed the JDeveloper IDE, next we will configure a JDBC connection in the Connections Navigator:

1. Select the Connections tab and right-click on the Database node to select New Database Connection.
2. Click on Next in the Create Database Connection Wizard.
3. In the Create Database Connection Type window, specify a Connection Name (MySQLConnection, for example), set Connection Type to Third Party JDBC Driver (because MySQL is a third-party database for Oracle JDeveloper), and click on Next. If a connection is to be configured with an Oracle database, select Oracle (JDBC) as the Connection Type instead.
4. In the Authentication window, specify Username as root (a password is not required for the root user by default) and click on Next.
5. In the Connection window, we will specify the connection parameters, such as the driver name and connection URL. Click on New to specify a Driver Class.
6. In the Register JDBC Driver window, specify Driver Class as com.mysql.jdbc.Driver and click on Browse to select a Library for the Driver Class.
7. In the Select Library window, click on New to create a new library for the MySQL Connector/J 5.1 JAR file.
8. In the Create Library window, specify Library Name as MySQL and click on Add Entry to add a JAR file entry for the MySQL library.
9. In the Select Path Entry window, select mysql-connector-java-5.1.3-rc-bin.jar and click on Select.
10. In the Create Library window, after a Class Path entry gets added to the MySQL library, click on OK.
11. In the Select Library window, select the MySQL library and click on OK.
12. In the Register JDBC Driver window, the MySQL library gets specified in the Library field and mysql-connector-java-5.1.3-rc-bin.jar gets specified in the Classpath field. Now, click on OK. The Driver Class, Library, and Classpath fields get specified in the Connection window.
13. Specify URL as jdbc:mysql://localhost:3306/test, and click on Next.
14. In the Test window, click on Test Connection to test the connection that we have configured. A connection is established and a success message is output in the Status text area. Click on Finish in the Test window.

A connection configuration, MySQLConnection, gets added to the Connections navigator, and the connection parameters are displayed in the structure view. To modify any of the connection settings, double-click on the Connection node; the Edit Database Connection window is displayed, where the connection Username, Password, Driver Class, and URL may be modified.

A database connection configured in the Connections navigator has a JNDI name binding in the JNDI naming service provided by OC4J. Using the JNDI name binding, a DataSource object may be created in a J2EE application. To view or modify the configuration settings of the JDBC connection, select Tools | Embedded OC4J Server Preferences in JDeveloper. In the window displayed, select the Global | Data Sources node and, to update the data-sources.xml file with the connection defined in the Connections navigator, click on the Refresh Now button. Checkboxes may be selected to Create data-source elements where not defined, and to Update existing data-source elements.
The connection pool and data source associated with the connection configured in the Connections navigator get listed. Select the jdev-connection-pool-MySQLConnection node to list the connection pool properties as Property Set A and Property Set B. The tuning properties of the JDBC connection pool may be set in the Connection Pool window. The different tuning attributes are listed in the following table:
Packt
21 Oct 2009
11 min read

Integrating Zimbra Collaboration Suite with Microsoft Outlook

Introduction

Let's face it: in today's business environment, there is only one email client that truly matters. I am not saying it is the best client, or that it offers the most features, and I certainly am not saying it is the most secure. What I am saying is that you would be hard pressed to walk into an organization of, let's say, more than 10 desktops and not see users checking their email with Microsoft Outlook. Zimbra Collaboration Suite offers uncanny support for Outlook, including:

- Native sync with MAPI
- Support for both Online and Offline modes
- Cached mode operation
- Support for multiple calendars

Here, we will discuss these features and focus on configuring Outlook to work with the Zimbra server. As you will see, with a Zimbra back end and an Outlook client, it is transparent to your users whether you are using Microsoft Exchange or Zimbra Collaboration Suite as your back-end product. This transparency makes the migration from Exchange to Zimbra that much easier in the eyes of your users, especially when it comes to user training. The ability to seamlessly integrate Zimbra and Outlook is one of Zimbra's strongest assets and one of its strongest arguments for making the transition from Exchange to Zimbra. However, if you want to use the full power of Zimbra (not only the fancy look but great features such as the searches), you should use the Web Client.

We will take a detailed look at Zimbra integration with Outlook, including:

- The Zimbra Connector for Outlook (ZCO)
- A look at Zimbra integration
- Sharing Outlook folders

Outlook uses the Messaging Application Programming Interface (MAPI) to allow programs to communicate with messaging systems and stores. MAPI is proprietary to Microsoft and is key to Zimbra being able to synchronize and work with Outlook. Zimbra uses a connector to facilitate this communication, called the Zimbra Connector for Outlook.
The PST Import Wizard

One of the beauties of Outlook's integration with Zimbra is that you won't start from scratch: Zimbra gives you tools that are able to import your data (emails, calendars, contacts, and so on) either from a competing solution's server (Exchange or Domino) or directly from a PST file (the file used by Outlook to store all its data). We'll have a look at the PST Import Wizard. To download and run it:

1. Log in to the Administration Console at https://zimbra.emailcs.com:7071/.
2. Click on the Downloads tab on the left of the navigation pane.
3. In the Content Pane, click on the PST Import Wizard to download the executable file.
4. Save the file to the local computer, or a network-accessible shared folder.
5. Double-click the .exe file to launch the wizard (there's no installation process).
6. Click on Next on the presentation page.
7. In the Hostname field enter: zimbra.emailcs.com.
8. In the Port field you can leave the default (80) and leave the Use Secure Connection box unchecked.
9. In the Username field, enter your Zimbra user ([email protected]) and in the Password field your Zimbra password. Click on the Next button.
10. You will now have to select the PST file that you want to import into the Zimbra server. The Zimbra Import Wizard helps you here, as it opens the default Outlook PST directory when you click on the Browse button.
11. Once you have selected the PST you want to import and clicked on the Open button, click on the Next button of the initial window.
12. Now, you can choose how your data will be imported:

- Import Junk-Mail Folder: If you leave this box checked, all your spam will be imported to the server and marked as spam on the server.
- Import Deleted Items Folder: With this, the deleted mails will be imported to the server.
- Ignore previously imported items: This option is used when you're importing your data in several attempts. If you leave it checked, already imported mails won't be imported again.
- Import items received after: Checking this box and choosing a date in the calendar next to it allows you to do a partial import based on date.
- Import messages with some headers but no message body: You should leave this one checked, as it imports some badly formatted mails.
- Convert meetings organized by me from my old address to my new address: You need to check this box (and enter your previous email address) if you're getting a new email address on the Zimbra server. If you don't, the meetings will be imported but you won't be the owner.
- Migrate private appointments: This one is your own choice, as Zimbra does not handle private items (yet); all the imported appointments will be viewable by anyone you share your calendar with.

13. Once your choices are made, click on the Next button.
14. Click OK in the confirmation window.
15. The next window will show you the import progress (number of items and percentage). You can stop the import at any time and start again later. Once the import is finished, a window pops up with a summary of the import session.
16. Click on the OK button of the summary window, then Finish in the last window.

The Zimbra Connector for Outlook

The Zimbra Connector for Outlook (ZCO) is a downloadable .msi installable file that must be installed on the desktop in order for Outlook and Zimbra to communicate. To download and install the ZCO on the client:

1. Log in to the Administration Console at https://zimbra.emailcs.com:7071.
2. Click on the Downloads tab on the left of the navigation pane.
3. In the Content Pane, click on the Zimbra Connector for Outlook to download the .msi installable file.
4. Save the file to the local computer, or a network-accessible shared folder.
5. Double-click the .msi file to start the installation process.
6. The installation wizard will begin; accept the License Agreement and accept all of the defaults. Once complete, click FINISH.

The ZCO creates a brand new profile within Outlook, called Zimbra.
If you had previous profile(s) created on your computer, be sure to choose the Zimbra one. The ZCO is now installed, and the first time we run Outlook on this client, the connector will prompt us for configuration information. Zimbra currently supports Outlook 2003 only.

1. In the Server Name field, enter: zimbra.emailcs.com.
2. For the port, leave the default of 80.
3. Email address will be the email address for this user. In our case, we will use the Worker Bee with an email address of [email protected].
4. For the password, enter the same password Worker would use to log in to the AJAX web client.

Once completed, click OK and Outlook will open Worker's email box. The ZCO will now sync the Global Address List and will pull all the emails from the server locally. This means that if you imported lots of items previously, you might need some time to get them back into Outlook from the server. Luckily, the sync process happens in the background. As you can see in the following screenshot, the folders we created in the Web Client are now configured in Outlook.

The first time Outlook is opened, it automatically performs a send/receive with the Zimbra server. After this initial synchronization, there is nothing a user needs to do to initiate synchronization with the server. There is also nothing the user needs to set up to let Outlook know whether the user is Online (connected to the server) or Offline (disconnected from the server); Outlook automatically checks the status and acts accordingly. Therefore, a user need not be connected to the server to work with email that has already been received, check the address book, or work with the Calendar. All changes that the user makes in Offline mode will synchronize with the server the next time the server is connected and Outlook is online.

At this point, we should take a quick look around Outlook and see how integrated Zimbra really is with Outlook.
A Look at Zimbra Integration

The integration of Zimbra is more than just the ability to send and receive email. Outlook is now acting as the front end to create contacts, appointments, and tasks that will be stored on the server. Let's take a moment to look at each one individually.

Contacts

The easiest way to see the integration of contacts between Outlook and Zimbra is to compose a new mail message and, instead of typing in an email address, click on the TO: button and select Global Address List from the Address Book drop-down menu. As you can see, this feature looks exactly the same whether you are using Exchange or Zimbra as your back-end collaboration server. Users are familiar with this look and feel, and with the ability to select users that are within the organization's Global Address List. This list comes directly from the Zimbra server and is maintained there as well. The user also has the option to use their own personal contact list. This list could be created and maintained via the web client, through Outlook directly, or both, as they will synchronize together.

Appointments

In most work organizations, the ability to create appointments, invite people to attend, and check invitees' schedules is a key function of Exchange and Outlook. Luckily for us, the same functionality can be used with Zimbra and Outlook. As seen in the following figure, the process for creating an appointment is exactly the same:

1. In the Calendar application, click New --> Appointment.
2. Click on the Invite Attendees button.
3. Click the To button and select the Global Address List from the drop-down menu.
4. Select the CEO from the address list and click OK.
5. Click on the Scheduling tab.

The Calendar is synced with the Zimbra server and is able to check the availability of the users within the organization, a key feature of any collaboration suite.
Once you have found a time when all the attendees are free, go back to the Appointment tab, type in a Subject for your appointment, and then click on the Send button. The last feature we will look at is sharing Outlook folders.

Sharing Outlook Folders

Users have the option to share any Outlook folder with users in the Global Address List. Essentially, this is the same ability we covered in an earlier chapter with the Web Client for the Contacts or Calendar; however, here the process is different. Users can be delegated different levels of access to Outlook folders. These levels include:

- Read: View items in the folder only
- Edit: Edit any contents in the folder
- Create: Create/add items to the folder
- Delete: Delete/modify items in the folder
- Act on workflow: Respond to meeting and task requests
- Administer Folder: Modify the permissions on the folder

There are also predefined roles that users can assign to other users in the Global Address List, including:

- Administrator: Has all of the rights to the folder listed above
- Delegate: Has all rights except Administer Folder
- Editor: Access to Read, Edit, Create, and Delete
- Author: Access to Read and Create
- Reviewer: Read only

To assign roles and rights to the folder:

1. Right-click the folder and click Properties.
2. Click on the Sharing tab.
3. Click Add and select CEO from the Global Address List.
4. With CEO highlighted, select Administrator from the Permission Level drop-down box.
5. With Administrator selected, you should be able to see all of the Permissions selected.
6. Change the Permission Level to Reviewer and you will see that only Read items is selected.
7. Go ahead and play with the various levels so you can get a feel for the different permissions associated with them.
8. Once complete, click OK.

An email will be sent to the CEO informing him that he now has Administrator access to the Inbox of the Worker Bee.
In order for the CEO to work with the new shared folder (the Worker's Inbox in this case), the CEO would simply:

1. Click on File --> Open --> Other User's Mailbox in the Outlook Menu bar.
2. Select Worker from the Global Address List.
3. Once the folder is added to Outlook, click on the Send/Receive button to synchronize the folder.

Summary

The goal of this article was to take a brief look at using the Microsoft Outlook client as a front end to the Zimbra Collaboration Suite. In my experience, users do not like change, and they tend to be comfortable with applications they are familiar with. One of the most common objections to changing email systems is that users rely so heavily on their email and contacts that they do not want to have to learn a whole new system to access them. Hopefully, if I have done my job, you can now see that users need not be afraid of moving to a Zimbra system, because in the end their everyday life and functionality are not going to change much. They can still use the tool that they are most familiar with, and still have the added benefit of using the AJAX Web Client when they are on the road or away from their desks.
Packt
20 Oct 2009
25 min read

Data Migration Scenarios in the SAP Business ONE Application - Part 1

Just recently, I found myself in a data migration project that served as an eye-opener. Our team had to migrate a customer system that utilized Act! and Peachtree. Neither system is famous for good accessibility to its data. In fact, Peachtree is a non-SQL database that does not enforce data consistency, and Act! also uses a proprietary table system based on a non-SQL database. The general migration logic was rather straightforward. However, our team found that the migration and consolidation of data into the new system posed multiple challenges, not only on the technical front, but also for the customer when it came to verifying the data.

We used the cutting-edge tool xFusion Studio for data migration. This tool allows migrating and synchronizing data using simple and advanced SQL data-massaging techniques, and it has a graphical representation of how the data flows from the source to the target. When I looked at one section of this graphical representation, I started humming the song Welcome to the Jungle. Take a look at the following screenshot and find out why Guns N' Roses may have provided the soundtrack for this data migration project.

What we learned from the above screenshot is quite obvious, and I have dedicated this article to helping you overcome these potential issues. Keep it simple and focus on information rather than data. You know that just having more data does not always mean you've added more information; sometimes, it just means a data jungle has been created. Making the right decisions at key milestones during the migration can keep the project simple and guarantee its success. Your goal should be to consolidate the islands of data into a more efficient and consistent database that provides real-time information.
What you will learn about data migration

In order to accomplish the task of migrating data from different sources into the SAP Business ONE application, a strategy must be designed that addresses the individual needs of the project at hand. The data migration strategy uses proven processes and templates, and the data migration itself can be managed as a mini project, depending on the complexity. During the course of this article, the following key topics will be covered. The goal is to help you make crucial decisions that will keep a project simple and manageable:

- Position the data migration tasks in the project plan: We will start by positioning the data migration tasks in the project plan. I will further define the required tasks that you need to complete as a part of the data migration.
- Data types and scenarios: With the general project plan structure in place, it is time to cover the common terms related to data migration. I will introduce you to the main aspects, such as master data and transactional data, as well as the impact they have on the complexity of data migration.
- SAP tools available for migration: During the course of our case study, I will introduce you to the data migration tools that come with SAP. However, there are also more advanced tools for complex migrations. You will learn about the main player in this area and how to use it.
- Process of migration: To avoid problems and guarantee success, the data migration project must follow a proven procedure. We will update the project plan to include the procedure and will also use the process during our case study.
- Making decisions about what data to bring: I mentioned that it is important to focus on information versus data. With the knowledge of the right tools and procedures, it is a good time to summarize the primary known issues and explain how to tackle them.

The project plan

We are still progressing in Phase 2 - Analysis and Design.
The data migration is positioned in the Solution Architecture section and is called Review Data Conversion Needs (Amount and Type of Data). A thorough evaluation of the data conversion needs will also cover the next task in the project plan, called Review Integration Points with any 3rd Party Solution. As you can see, the data migration task stands as a small task in the project plan, but as mentioned earlier, it can wind up being a large project depending on the number and size of data sources that need to be migrated. To honor this, we will add some more details to this task.

As the task name suggests, we must review data conversion needs and identify the amount and type of data. This simple task must be structured in phases, just like the entire project. Therefore, data migration needs to go through the following phases to be successful:

1. Design: identify all of the data sources
2. Extraction of data into Excel or SQL for review and consistency
3. Review of data and verification (via customer feedback)
4. Load into the SAP system and verification

Note that the validation process and the consequential load can be iterative. For example, if the validated data has many issues, it only makes sense to perform a load into SAP if an additional verification takes place before the load. You only want to load data into an SAP system for testing if you know the quality of the records to be loaded is good. Therefore, new phases were added in the project plan (seen below). Please do this in your project too, based on the actual complexity and the number of data sources you have. A thorough look at the tasks above will be provided when we talk about the process of migration. Before we do that, the basic terms related to data migration will be covered.

Data sources - where is my data?

There is great variety in the potential types of data sources. We will now identify the most common sources and explain their key characteristics.
However, if there is a source that is not mentioned here, you can still migrate the data easily by transitioning it into one of the following formats.

Microsoft Excel and text data migration

The most common format for data migration is Excel, or text-based files. Text-based files are formatted using commas or tabs as field separators. When a comma is used as the field separator, the file format is referred to as Comma Separated Values (CSV). Most of the migration templates and strategies are based on Excel files that have specific columns where you can manually enter data, or copy and paste larger chunks. Therefore, if there is any way for you to extract data from your current system and present it in Excel, you have already done a great deal of the data migration work.

Microsoft Access

An Access database is essentially an Excel sheet on a larger scale with added data consistency capability. It is a good idea to consider extracting Access tables to Excel in order to prepare for data migration.

SQL

If you have very large sets of data, then instead of using Excel, we usually employ an SQL database. The database then has a set of tables instead of Excel sheets. Using SQL tables, we can create SQL statements that verify data and analyze result sets. Please note that you can use any SQL database, such as Microsoft SQL Server, Oracle, IBM DB2, and so on.

SaaS (NetSuite, Salesforce)

SaaS stands for Software as a Service. Essentially, it means you can use software functionality based on a subscription; however, you don't own the solution. All of the hardware and software is installed at the service center, so you don't need to worry about hardware and software maintenance. Keep in mind, though, that these services don't allow you to manage the service packs according to your requirements; you need to adjust your business to the schedule of the SaaS company.
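To illustrate the CSV format described above, here is a minimal Java sketch that splits an export line into fields for a staging step. The sample record is invented, and real-world CSV with quoted fields containing commas would need a proper CSV parser:

```java
import java.util.Arrays;
import java.util.List;

public class CsvSketch {
    // Split one CSV line into fields. This naive version assumes no
    // quoted fields containing commas; real exports may need a CSV library.
    static List<String> parseLine(String line) {
        return Arrays.asList(line.split(",", -1)); // -1 keeps trailing empty fields
    }

    public static void main(String[] args) {
        // A hypothetical export line: customer code, name, phone
        String line = "C1001,Lemonade Stand Inc.,555-0101";
        List<String> fields = parseLine(line);
        System.out.println(fields.size() + " fields: " + fields);
    }
}
```

Each parsed field would then map to one column of the staging Excel sheet or SQL table, which is exactly why one field per column matters for verification.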
If you are migrating from a modern SaaS solution, such as Salesforce or NetSuite, you will probably know that the data is not at your site, but rather stored at your solution hosting provider. Getting the data out to migrate to another solution may be done by obtaining reports, which can then be saved in an Excel format.

Other legacy data

The term legacy data is often mentioned when evaluating larger, older systems. Legacy data basically comprises a large set of data that a company is using on mostly obsolete systems.

AS/400 or mainframe

The IBM AS/400 is a good example of a legacy data source. Experts who are capable of extracting data from these systems are highly sought after, so the budget must be on a higher scale. AS/400 data can often be extracted into a text or an Excel format. However, the data may come without headings; the headings are usually documented in a file that describes the data. You need to make sure that you get the file definitions, without which the pure text files may be meaningless. In addition, the media format is worth considering: an older AS/400 system may utilize a backup tape format which is not available on your Intel server.

Peachtree, QuickBooks, and Act!

Another potential source for data migration may be a smaller PC-based system, such as Peachtree, QuickBooks, or Act!. These systems have a different data format and are based on non-SQL databases, which means the data cannot be accessed via SQL. In order to extract data from those systems, the proprietary API must be used. For example, when Peachtree displays data in the application's forms, it uses the program logic to put the pieces together from different text files. Getting data out of these types of systems is difficult and sometimes impossible. It is recommended to employ the relevant API to access the data in a structured way. You may want to run reports and export the results to text or Excel.
Data classification in SAP Business ONE

There are two main groups of data that we will migrate to the SAP Business ONE application: master data and transaction data.

Master data

Master data is the basic information that SAP Business ONE uses to record transactions (for example, business partner information). In addition, information about your products, such as items, finished goods, and raw materials, is considered master data. Master data should always be migrated if possible. It can easily be verified and structured in an Excel or SQL format. For example, the data could be displayed using Excel sheets; you can then quickly verify that the data is showing up in the correct columns, and see whether the data is broken down into its required components. Each Excel column should represent a target field in SAP Business ONE, and you should avoid having a single column in Excel that provides data for more than one target in SAP Business ONE.

Transaction data

Transaction data comprises proposals, orders, invoices, deliveries, and other similar information that combines master data to create a unique business document. Customers often want to migrate historical transactions from older systems. However, the consequences of doing this may have a landslide effect. For example, inventory is valuated based on specific settings in the finance section of a system; if these settings are not identical in the new system, transactions may look different in the old and the new system. This makes the migration very risky, as data verification becomes difficult on the customer side. I recommend making historical transactions available via a reporting database. For example, sales history must often be available when migrating data. You can create a reporting database that provides sales history information, and the user can use this data via reports within the SAP Business ONE application.
Therefore, closed transactions should be migrated via a reporting database. Closed transactions are all of the business-related activities that were fully completed in the old system. Open transactions, on the other hand, are all of the business-related activities that are currently not completed. It makes sense that the open transactions be migrated directly to SAP, and not to a history database, because they will be completed within the new SAP system. As a result of the data migration, you will be able to access sales history information from within SAP by accessing a reporting database, while open transactions will be completed within SAP and consequently lead to new transactions in SAP. In short: create a history database for sales history and manually enter open transactions.

SAP DI-API

Now that we know the main data types for an SAP migration, and the most common sources, we can take a brief look at the way data is inserted into the SAP system. Based on the SAP guidelines, you are not allowed to insert data directly into the underlying SQL tables, because doing so can cause inconsistencies. When SAP works with the database, multiple tables are often updated; if you manually update a table to insert data, there is a good chance that another table has a link that also requires updating. Therefore, unless you know the exact table structure for the data you are trying to update, don't mess with the SAP SQL tables. Even if you understand the table structure and decide to access the tables directly anyway, be aware that inserting data directly into the SAP database tables puts you at risk of losing your warranty.

Migration scenarios and key decisions

Data migration not only takes place as a part of a new SAP implementation, but also when you have a running system and you want to import leads or a list of new items.
Therefore, it is a good idea to learn about the scenarios that you may come across, and to be able to select the right migration and integration tools. As outlined before, data can be divided into two groups: master data and transaction data. It is important that you separate the two, and structure each data migration accordingly. Master data is an essential component for manifesting transactions. Therefore, even if you need to bring over transactional data, the master data must already be in place. Always start with the master data alongside a verification procedure, and then continue with the relevant transaction data. Let's now briefly look at the most common situations where you may need to evaluate potential data migration options.

New company (start-up)

In this setup, you may not have extensive amounts of existing data to migrate. However, you may want to bring over lead lists or lists of items. During the course of this article, we will import a list of leads into SAP using the Excel Import functionality. Many new companies require the capability to easily import data into SAP. As you already know by now, the import of lead and item information is considered importing master data. Working with this master data by entering sales orders and so forth constitutes transaction data. Transaction data is considered closed if all of the relevant actions have been performed. For example, a sales order is considered closed if the items are delivered, invoiced, and paid for. If the chain of events is not complete, the transaction is open.

Islands of data scenario

This is the classic situation for an SAP implementation. You will first need to identify the available data sources and their formats. Then, you select the master data you want to bring over. With multiple islands of data, an SAP master record may result from more than one source. A business partner record may come, in part, from an existing accounting system, such as QuickBooks or Peachtree.
Other parts may come from a CRM system, such as Act!. For example, the billing information may be retrieved from the finance system, and the relevant lead and sales information, such as specific contacts and notes, may come from the CRM system. In such a case, you need to merge this information into a new, consistent master record in SAP. For this situation, first manually put the pieces together. Once the manual process works, you can attempt to automate it. Don't try to directly import all of the data. You should always establish an intermediary level that allows for data verification. Only then import the data into SAP. For example, if you have QuickBooks and Act!, first merge the information into Excel for verification, and then import it into SAP. If the amount of data is large, you can also establish an SQL database. In that case, the Excel sheets would be replaced by SQL tables.

IBM legacy data migration

The migration of IBM legacy data is potentially the most challenging, because the IBM systems are not directly compatible with Windows-based systems. Therefore, almost naturally, you will establish a text-based, or an Excel-formatted, representation of the IBM data. You can then proceed with verifying the information.

SQL migration

The easiest migration type is obviously the one where all of the data is already structured and consistent. However, you will not always have documentation of the table structure where the data resides. In this case, you need to create queries against the SQL tables to verify the data. The queries can then be saved as views. The views you create should always represent a consistent set of information that you can migrate. For example, if you have one table with address information, and another table with customer ID fields, you can create a view that consolidates this information into a single consistent set.
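The consolidation-view idea can be sketched with a small, self-contained example. The table and column names below are invented for illustration; a real legacy schema will differ and usually has to be discovered first. The view is the "single consistent set" that gets verified and later exported for the SAP import.

```python
import sqlite3

# In-memory stand-in for the legacy SQL database.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customer_ids (cust_id TEXT PRIMARY KEY, name TEXT);
    CREATE TABLE addresses   (cust_id TEXT, street TEXT, city TEXT);

    INSERT INTO customer_ids VALUES ('C001', 'Lemonade Inc');
    INSERT INTO addresses    VALUES ('C001', '1 Main St', 'Chicago');

    -- The view consolidates both tables into one migratable set.
    CREATE VIEW migration_customers AS
        SELECT c.cust_id, c.name, a.street, a.city
        FROM customer_ids c
        LEFT JOIN addresses a ON a.cust_id = c.cust_id;
""")

for row in conn.execute("SELECT * FROM migration_customers"):
    print(row)
```

A LEFT JOIN is used deliberately: customers without an address still appear in the view, which makes the gaps visible during verification instead of silently dropping records.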
Process of migration for your project

I briefly touched upon the most common data migration scenarios so you can get a feel for the process. As you can see, whatever the source of data is, we always attempt to create an intermediary platform that allows the data to be verified. This intermediary platform is most commonly Excel or an SQL database. The process of data migration has the following subtasks:

1. Identify available data sources
2. Structure data into master data and transaction data
3. Establish an intermediary platform with Excel or SQL
4. Verify data
5. Match data columns with Excel templates
6. Run migration based on templates and verify data

Based on this procedure, I have added more detail to the project plan. As you can see in this example, based on the required level of detail, we can make adjustments to the project plan to address the requirements.

SAP standard import features

Let's take a look at the available data exchange features in SAP. SAP provides two main tools for data migration. The first option is to use the available menu in the SAP Business ONE client interface to exchange data. The other option is to use the Data Transfer Workbench (DTW).

Standard import/export features: walk-through

You can reach the Import from Excel form via Administration | Data Import/Export. As you can see in the following screenshot, in the top right section of the form, the type of import is a drop-down selection. The options are BP and Items. In the screenshot, we have selected BP, which allows business partner information to be imported. There are drop-down fields that you can select based on the data you want to import. However, keep in mind that certain fields are mandatory, such as the BP Code field, whereas others are optional. The fields you select are associated with a column, as you can see here. If you want to find out whether a field is mandatory or not, simply open SAP and attempt to enter the data directly in the relevant SAP form.
For example, if you are trying to import business partner information, enter the fields you want to import and see if the record can be saved. If you are missing any mandatory fields, SAP will provide an error message. You can modify the data that you are planning to import based on that.

When you click on the OK button in the Import from Excel form (seen above), the Excel sheet with all of the data needs to be selected. In the following screenshot, you can see how the Excel sheet in our example looks. For example, column A has all of the BP Codes. This is in line with the mapping of columns to fields that we can see on the Import from Excel form. Please note that the file we select must be in .txt format. For this example, I used the Save As feature in Excel (seen in the following screenshot) to save the file in the Text MS-DOS (*.txt) format. I was then able to select the BP Migration.txt file. This is actually a good thing, because it points to the fact that you can use any application that can save data in the .txt format as the data source. The following screenshot shows the Save As screen.

I imported the file, and a success message confirmed that the records were imported into SAP. A subsequent check in SAP confirms that the BP records that I had in the text file are now available in SAP.

In the example, we only used two records. It is recommended to start out with a limited number of records to verify that the import is working. For example, you may start by reducing your import file to five records. This has the advantage that the import does not take a long time, and you can immediately verify the result. See the following screenshot.

Sometimes, it is not clear what kind of information SAP expects when importing. For example, at first Lead, Customer, and Vendor were used in column C to indicate the type of BP that was to be imported. However, this resulted in an error message upon completion of the import.
Therefore, system information was activated to check what information SAP requires for the BP Type representation. As you can see in the screenshot of the Excel sheet that you get when you click on the OK button in the Import from Excel form, the BP Type information is indicated by only one letter: L, C, or V. In the example screenshot above, you can clearly see L in the lower left section. The same check was done for Country in the Addresses section. You can try that by navigating to Administration | Sales | Countries, and then hovering over the country you will be importing. In my example, USA was internally represented by SAP as US. It is a minor issue; however, when importing data, all of these issues need to be addressed. Please note that the file you are trying to import should not be open in Excel at the same time, as this may trigger an error. Also note that the Excel or text file does not have a header with a description of the data.

Standard import/export features for your own project

SAP's standard import functionality for business partners and items is very straightforward. For your own project, you can prepare an Excel sheet for business partners and items. If you need to import BP or item information from another system, you can get this done quickly. If you get an error during the import process, try to manually enter the data in SAP. In addition, you can use the System Information feature to identify how SAP stores information in the database. I recommend that you first create an Excel sheet with a maximum of two records to see if the basic information and data format are correct. Once you have this running, you can add all of the data you want to import. Overall, this functionality is a quick way to get your own data into the system.

This feature can also be used if you regularly receive address information. For example, if you have salespeople visiting trade fairs, you can provide them with the Excel sheet that you may have prepared for BP import.
The salespeople can directly add their information there. Once they return from the trade fair with the Excel files, you can easily import the information into SAP and schedule follow-up activities using the Opportunity Management System. The item import is useful if you work with a vendor who updates his or her price lists and item information on a monthly basis. You can prepare an Excel template where the item information will regularly be entered, and you can easily import the updates into SAP.

Data Transfer Workbench (DTW)

The SAP standard import/export features are straightforward, but may not address the full complexity of the data that you need to import. For this situation, you may want to evaluate the SAP Data Transfer Workbench (DTW). This tool provides a greater level of detail to address the potential data structures that you want to import. To understand the basic concept of the DTW, it is a good idea to look at the different master data sections in SAP as business objects. A business object groups related information together. For example, BP information can have much more detail than what was previously shown in the standard import.

The DTW templates and business objects

To better understand the business object metaphor, navigate to the DTW directory and evaluate the Templates folder. The templates are organized by business objects. The oBusinessPartners business object is represented by the folder with the same name (seen below). In this folder, you can find Excel template files that can be used to provide information for this type of business object. The following objects are available as Excel templates:

BPAccountReceivables
BPAddresses
BPBankAccounts
BPPaymentDates
BPPaymentMethods
BPWithholdingTax
BusinessPartners
ContactEmployees

Please note that these templates are Excel .xlt files, which is the Excel template extension.
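The business-object grouping above means that one flat legacy record usually has to be split into several template sheets: a parent row (BusinessPartners) plus child rows (for example, BPAddresses) linked back to the parent by its key. The sketch below illustrates that split in the abstract; the field names (CardCode, ParentKey, and so on) are illustrative assumptions, not the exact DTW template headers, which you should take from the actual template files.

```python
def split_into_objects(flat_records):
    """Split flat legacy records into parent and child row sets.

    Returns one list of parent (business partner) rows and one list
    of child (address) rows, with each child linked to its parent key.
    """
    partners, addresses = [], []
    for rec in flat_records:
        partners.append({"CardCode": rec["code"], "CardName": rec["name"]})
        for addr in rec["addresses"]:
            addresses.append({"ParentKey": rec["code"],
                              "Street": addr["street"],
                              "City": addr["city"]})
    return partners, addresses

legacy = [{"code": "C001", "name": "Lemonade Inc",
           "addresses": [{"street": "1 Main St", "city": "Chicago"},
                         {"street": "9 Pier Rd", "city": "Boston"}]}]
partners, addresses = split_into_objects(legacy)
print(partners)
print(addresses)
```

The point of the exercise is the one-to-many shape: one partner row, two address rows, each address carrying the parent's key so the tool can reassemble the object on import.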
It is a good idea to browse through the list of templates and identify the relevant ones. In a nutshell, you essentially add your own data to the templates and use DTW to import the data.

Connecting to DTW

In order to work with DTW, you need to connect to your SAP system using the DTW interface. The following screenshot shows the parameters I used to connect to the Lemonade Stand database. Once you are connected, a wizard-type interface walks you through the required steps to get started, as shown in the next screenshot.

The DTW examples and templates

There is also an example folder in the DTW installation location on your system. This example folder has information about how to add data to your Excel templates. The following screenshot shows an example for business partner migration. You can see that the Excel template does have a header line on top that explains the content of the particular column. The actual template files also have comments in the header line, which provide information about the expected data format, such as String, Date, and so on. See the example in this screenshot. The actual template itself is empty, and you need to add your information as shown here.

DTW for your own project

If you realize that the basic import features in SAP are not sufficient, and your requirements are more challenging, evaluate DTW. Think of the data you want to import as business objects in which information is logically grouped. If you are able to group your data together, you can modify the Excel templates with your own information. The DTW example folder provides working examples that you can use to get started. Please note that you should establish a test database before you start importing data this way. This is because once new data arrives in SAP, you need to verify the results based on the procedure discussed earlier. In addition, be prepared to fine-tune the import.
Often, an import and data verification process takes several rounds of importing and verification.

Summary

In this article, we covered the tasks related to data migration. This included some practical examples of simple data imports related to business partner and item information. In addition, more advanced topics were covered by introducing the SAP Data Transfer Workbench (DTW) and the related aspects to get you started.

During the course of this article, we positioned the data migration task in the project plan. The project plan was then fine-tuned with more detail to do justice to the potential complexity of a data migration project. The data migration tasks established a process, from design to data mapping and verification of the data. Notably, the establishment of an intermediary data platform was recommended for your projects. This will help you verify data at each step of the migration. The key message of keeping it simple will be the basis for every migration project. The data verification task ensures simplicity and the quality of your data.

If you have read this article, you may be interested to view:

Competitive Service and Contract Management in SAP Business ONE Implementation: Part 1
Competitive Service and Contract Management in SAP Business ONE Implementation: Part 2
Data Migration Scenarios in SAP Business ONE Application: Part 2
Primitive Data Types, Variables, and Operators in Object-Oriented JavaScript
Packt
20 Oct 2009
10 min read
Let's get started.

Variables

Variables are used to store data. When writing programs, it is convenient to use variables instead of the actual data, as it's much easier to write pi instead of 3.141592653589793, especially when it happens several times inside your program. The data stored in a variable can be changed after it was initially assigned, hence the name "variable". Variables are also useful for storing data that is unknown to the programmer when the code is written, such as the result of later operations. There are two steps required in order to use a variable. You need to:

Declare the variable
Initialize it, that is, give it a value

In order to declare a variable, you use the var statement, like this:

var a;
var thisIsAVariable;
var _and_this_too;
var mix12three;

For the names of the variables, you can use any combination of letters, numbers, and the underscore character. However, you can't start with a number, which means that this is invalid:

var 2three4five;

To initialize a variable means to give it a value for the first (initial) time. You have two ways to do so:

Declare the variable first, then initialize it, or
Declare and initialize it with a single statement

An example of the latter is:

var a = 1;

Now the variable named a contains the value 1. You can declare (and optionally initialize) several variables with a single var statement; just separate the declarations with a comma:

var v1, v2, v3 = 'hello', v4 = 4, v5;

Variables are Case Sensitive

Variable names are case-sensitive. You can verify this statement using the Firebug console. Try typing this, pressing Enter after each line:

var case_matters = 'lower';
var CASE_MATTERS = 'upper';
case_matters
CASE_MATTERS

To save keystrokes, when you enter the third line, you can type only ca and press the Tab key. The console will auto-complete the variable name to case_matters. Similarly, for the last line, type CA and press Tab. The end result is shown in the following figure.
Throughout the rest of this article series, only the code for the examples will be given, instead of a screenshot:

>>> var case_matters = 'lower';
>>> var CASE_MATTERS = 'upper';
>>> case_matters
"lower"
>>> CASE_MATTERS
"upper"

The three consecutive greater-than signs (>>>) show the code that you type; the rest is the result, as printed in the console. Again, remember that when you see such code examples, you're strongly encouraged to type in the code yourself and experiment with tweaking it a little here and there, so that you get a better feeling of how it works exactly.

Operators

Operators take one or two values (or variables), perform an operation, and return a value. Let's check out a simple example of using an operator, just to clarify the terminology.

>>> 1 + 2
3

In this code:

+ is the operator
The operation is addition
The input values are 1 and 2 (the input values are also called operands)
The result value is 3

Instead of using the values 1 and 2 directly in the operation, you can use variables. You can also use a variable to store the result of the operation, as the following example demonstrates:

>>> var a = 1;
>>> var b = 2;
>>> a + 1
2
>>> b + 2
4
>>> a + b
3
>>> var c = a + b;
>>> c
3

The basic arithmetic operators are:

+ (addition): >>> 1 + 2 returns 3
- (subtraction): >>> 99.99 - 11 returns 88.99
* (multiplication): >>> 2 * 3 returns 6
/ (division): >>> 6 / 4 returns 1.5
% (modulo, the remainder of a division): >>> 6 % 3 returns 0, while >>> 5 % 3 returns 2

It's sometimes useful to test if a number is even or odd. Using the modulo operator, this is easy. All odd numbers return 1 when divided by 2, while all even numbers return 0.

>>> 4 % 2
0
>>> 5 % 2
1

++ increments a value by 1. Post-increment is when the input value is incremented after it's returned.

>>> var a = 123; var b = a++;
>>> b
123
>>> a
124

The opposite is pre-increment; the input value is first incremented by 1 and then returned.
>>> var a = 123; var b = ++a;
>>> b
124
>>> a
124

-- decrements a value by 1. Post-decrement:

>>> var a = 123; var b = a--;
>>> b
123
>>> a
122

Pre-decrement:

>>> var a = 123; var b = --a;
>>> b
122
>>> a
122

When you type var a = 1; this is also an operation; it's the simple assignment operation, and = is the simple assignment operator. There is also a family of operators that are a combination of an assignment and an arithmetic operator. These are called compound operators. They can make your code more compact. Let's see some of them with examples.

>>> var a = 5;
>>> a += 3;
8

In this example, a += 3; is just a shorter way of writing a = a + 3;

>>> a -= 3;
5

Here a -= 3; is the same as a = a - 3; Similarly:

>>> a *= 2;
10
>>> a /= 5;
2
>>> a %= 2;
0

In addition to the arithmetic and assignment operators discussed above, there are other types of operators, as you'll see later in this article series.

Primitive Data Types

Any value that you use is of a certain type. In JavaScript, there are the following primitive data types:

Number: this includes floating point numbers as well as integers, for example 1, 100, 3.14.
String: any number of characters, for example "a", "one", "one 2 three".
Boolean: can be either true or false.
Undefined: when you try to access a variable that doesn't exist, you get the special value undefined. The same happens when you have declared a variable but not yet given it a value. JavaScript will initialize it behind the scenes with the value undefined.
Null: this is another special data type that can have only one value, the null value. It means no value, an empty value, nothing. The difference from undefined is that if a variable has the value null, it is still defined; it just happens that its value is nothing. You'll see some examples shortly.

Any value that doesn't belong to one of the five primitive types listed above is an object. Even null is considered an object, which is a little awkward: having an object (something) that is actually nothing.
The data types in JavaScript are either:

Primitive (the five types listed above), or
Non-primitive (objects)

Finding out the Value Type: the typeof Operator

If you want to know the data type of a variable or a value, you can use the special typeof operator. This operator returns a string that represents the data type. The return value of typeof can be one of the following: "number", "string", "boolean", "undefined", "object", or "function". In the next few sections, you'll see typeof in action, using examples of each of the five primitive data types.

Numbers

The simplest number is an integer. If you assign 1 to a variable and then use the typeof operator, it returns the string "number". In the following example, you can also see that the second time we set a variable's value, we don't need the var statement.

>>> var n = 1;
>>> typeof n;
"number"
>>> n = 1234;
>>> typeof n;
"number"

Numbers can also be floating point (decimals):

>>> var n2 = 1.23;
>>> typeof n2;
"number"

You can call typeof directly on the value, without assigning it to a variable first:

>>> typeof 123;
"number"

Octal and Hexadecimal Numbers

When a number starts with a 0, it's considered an octal number. For example, the octal 0377 is the decimal 255.

>>> var n3 = 0377;
>>> typeof n3;
"number"
>>> n3;
255

The last line in the example above prints the decimal representation of the octal value. While you may not be very familiar with octal numbers, you've probably used hexadecimal values to define, for example, colors in CSS stylesheets. In CSS, you have several options to define a color, two of them being:

Using decimal values to specify the amount of R (red), G (green), and B (blue), ranging from 0 to 255. For example, rgb(0, 0, 0) is black and rgb(255, 0, 0) is red (the maximum amount of red and no green or blue).
Using hexadecimals, specifying two characters for each of R, G, and B. For example, #000000 is black and #ff0000 is red. This is because ff is the hexadecimal for 255.
In JavaScript, you put 0x before a hexadecimal value (also called hex for short).

>>> var n4 = 0x00;
>>> typeof n4;
"number"
>>> n4;
0
>>> var n5 = 0xff;
>>> typeof n5;
"number"
>>> n5;
255

Exponent Literals

1e1 (which can also be written as 1e+1, 1E1, or 1E+1) represents the number one with one zero after it, or in other words, 10. Similarly, 2e+3 means the number 2 with 3 zeros after it, or 2000.

>>> 1e1
10
>>> 1e+1
10
>>> 2e+3
2000
>>> typeof 2e+3;
"number"

2e+3 means moving the decimal point 3 digits to the right of the number 2. There is also 2e-3, meaning you move the decimal point 3 digits to the left of the number 2.

>>> 2e-3
0.002
>>> 123.456E-3
0.123456
>>> typeof 2e-3
"number"

Infinity

There is a special value in JavaScript called Infinity. It represents a number too big for JavaScript to handle. Infinity is indeed a number, as typing typeof Infinity in the console will confirm. You can also quickly check that a number with 308 zeros is OK, but 309 zeros is too much. To be precise, the biggest number JavaScript can handle is 1.7976931348623157e+308, while the smallest is 5e-324.

>>> Infinity
Infinity
>>> typeof Infinity
"number"
>>> 1e309
Infinity
>>> 1e308
1e+308

Dividing by 0 will give you Infinity.

>>> var a = 6 / 0;
>>> a
Infinity

Infinity is the biggest number (or rather, a little bigger than the biggest), but how about the smallest? It's infinity with a minus sign in front of it: minus infinity.

>>> var i = -Infinity;
>>> i
-Infinity
>>> typeof i
"number"

Does this mean you can have something that's exactly twice as big as Infinity, from 0 up to infinity and then from 0 down to minus infinity? Well, this is purely for amusement and there's no practical value to it. When you sum infinity and minus infinity, you don't get 0, but something that is called NaN (Not A Number).
>>> Infinity - Infinity
NaN
>>> -Infinity + Infinity
NaN

Any other arithmetic operation with Infinity as one of the operands will give you Infinity:

>>> Infinity - 20
Infinity
>>> -Infinity * 3
-Infinity
>>> Infinity / 2
Infinity
>>> Infinity - 99999999999999999
Infinity

NaN

What was this NaN you saw in the example above? It turns out that despite its name, "Not A Number", NaN is a special value that is also a number.

>>> typeof NaN
"number"
>>> var a = NaN;
>>> a
NaN

You get NaN when you try to perform an operation that assumes numbers, but the operation fails. For example, if you try to multiply 10 by the character "f", the result is NaN, because "f" is obviously not a valid operand for a multiplication.

>>> var a = 10 * "f";
>>> a
NaN

NaN is contagious, so if you have even one NaN in your arithmetic operation, the whole result goes down the drain.

>>> 1 + 2 + NaN
NaN
Human-readable Rules with Drools JBoss Rules 5.0 (Part 1)
Packt
20 Oct 2009
6 min read
Domain Specific Language

The domain in this sense represents the business area (for example, life insurance or billing). Rules are expressed with the terminology of the problem domain. This means that domain experts can understand, validate, and modify these rules more easily. You can think of a DSL as a translator. It defines how to translate sentences from the problem-specific terminology into rules. The translation process is defined in a .dsl file. The sentences themselves are stored in a .dslr file. The result of this process must be a valid .drl file. Building a simple DSL might look like:

[condition][]There is a Customer with firstName {name}=$customer : Customer(firstName == {name})
[consequence][]Greet Customer=System.out.println("Hello " + $customer.getFirstName());

Code listing 1: Simple DSL file, simple.dsl.

The code listing above contains only two lines (each begins with [). However, because the lines are too long, they may be wrapped, effectively creating four lines. This will be the case in most of the code listings. When you are using the Drools Eclipse plugin to write this DSL, enter the text before the first equals sign into the field called Language expression, the text after the equals sign into Rule mapping, leave the object field blank, and select the correct scope.

The previous DSL defines two DSL mappings. They map a DSLR sentence to a DRL rule. The first one translates to a condition that matches a Customer object with the specified first name. The first name is captured into a variable called name. This variable is then used in the rule condition. The second line translates to a greeting message that is printed on the console. The following .dslr file can be written based on the previous DSL:

package droolsbook.dsl;
import droolsbook.bank.model.*;

expander simple.dsl

rule "hello rule"
  when
    There is a Customer with firstName "David"
  then
    Greet Customer
end

Code listing 2: Simple .dslr file (simple.dslr) with a rule that greets a customer with the name David.
As can be seen, the structure of a .dslr file is the same as the structure of a .drl file. Only the rule conditions and consequences are different. Another thing to note is the line containing expander simple.dsl. It informs Drools how to translate sentences in this file into valid rules. Drools reads the simple.dslr file and tries to translate/expand each line by applying all mappings from the simple.dsl file (it does this in a single-pass process, line by line, from top to bottom). The order of lines is important in a .dsl file.

Please note that one condition/consequence must be written on one line, otherwise the expansion won't work (for example, the condition after the when clause, from the rule above, must be on one line). When you are writing .dslr files, consider using the Drools Eclipse plugin. It provides a special editor for .dslr files that has an editing mode and a read-only mode for viewing the resulting .drl file. A simple DSL editor is provided as well. The result of the translation process will look like the following screenshot.

This translation process happens in memory and no .drl file is physically stored. We can now run this example. First of all, a knowledge base must be created from the simple.dsl and simple.dslr files. The process of creating a package using a DSL is as follows (only the package creation is shown). KnowledgeBuilder acts as the translator. It takes the .dslr file and, based on the .dsl file, creates the DRL. This DRL is then used as normal (we don't see it; it's internal to KnowledgeBuilder).
The implementation is as follows:

private KnowledgeBase createKnowledgeBaseFromDSL() throws Exception {
  KnowledgeBuilder builder =
      KnowledgeBuilderFactory.newKnowledgeBuilder();
  builder.add(ResourceFactory.newClassPathResource("simple.dsl"),
      ResourceType.DSL);
  builder.add(ResourceFactory.newClassPathResource("simple.dslr"),
      ResourceType.DSLR);
  if (builder.hasErrors()) {
    throw new RuntimeException(builder.getErrors().toString());
  }
  KnowledgeBase knowledgeBase = KnowledgeBaseFactory.newKnowledgeBase();
  knowledgeBase.addKnowledgePackages(builder.getKnowledgePackages());
  return knowledgeBase;
}

Code listing 3: Creating a knowledge base from .dsl and .dslr files.

The .dsl and subsequently the .dslr files are passed into KnowledgeBuilder. The rest is similar to what we've seen before.

DSL as an interface

DSLs can also be looked at as another level of indirection between your .drl files and business requirements. It works as shown in the following figure, which presents the DSL as an interface (dependency diagram). At the top are the business requirements as defined by the business analyst. These requirements are represented as DSL sentences (a .dslr file). The DSL then represents the interface between the DSL sentences on one side and the rule implementation (.drl file) and the domain model on the other. For example, we can change the transformation to make the resulting rules more efficient without changing the language. Further, we can change the language, for example to make it more user-friendly, without changing the rules. All of this can be done just by changing the .dsl file.
DSL for validation rules

We'll rewrite the three usually implemented object/field required rules as follows:

If the Customer does not have an address, then display a warning message
If the Customer does not have a phone number, or it is blank, then display an error message
If the Account does not have an owner, then display an error message for the Account

We can clearly see that all of them operate on some object (Customer/Account), test its property (address/phone/owner), and display a message (warning/error), possibly with some context (account). Our validation.dslr file might look like the following code:

expander validation.dsl

rule "address is required"
  when
    The Customer does not have address
  then
    Display warning
end

rule "phone number is required"
  when
    The Customer does not have phone number or it is blank
  then
    Display error
end

rule "account owner is required"
  when
    The Account does not have owner
  then
    Display error for Account
end

Code listing 4: First DSL approach at defining the required object/field rules (validation.dslr file).

The conditions could be mapped like this:

[condition][]The {object} does not have {field}=${object} : {object}( {field} == null )

Code listing 5: validation.dsl.

This covers the address and account conditions completely. For the phone number rule, we have to add the following mapping at the beginning of the validation.dsl file:

[condition][] or it is blank = == "" ||

Code listing 6: Mapping that checks for a blank phone number.

As it stands, the phone number condition will be expanded to:

$Customer : Customer( phone number == "" || == null )

Code listing 7: Unfinished phone number condition.

To correct it, phone number has to be mapped to phoneNumber. This can be done by adding the following at the end of the validation.dsl file:

[condition][]phone number=phoneNumber

Code listing 8: Phone number mapping.

The conditions are working. Now, let's focus on the consequences.
The following mapping will do the job:

[consequence][]Display {message_type} for {object}={message_type}( kcontext, ${object} );
[consequence][]Display {message_type}={message_type}( kcontext );

Code listing 9: Consequence mappings.

The three validation rules are now being expanded to their .drl representation.
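As a sketch of what the expander produces (assuming the mappings in code listings 5 through 9; the exact output is not shown in the text), the first rule would expand to something like the following .drl:

```drl
rule "address is required"
when
    $Customer : Customer( address == null )
then
    warning( kcontext );
end
```

Each DSL sentence is replaced by the right-hand side of its mapping, with the `{object}`, `{field}`, and `{message_type}` placeholders substituted.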
DWR Java AJAX User Interface: Basic Elements (Part 2)

Packt
20 Oct 2009
21 min read
Implementing Tables and Lists

The first actual sample is very common in applications: tables and lists. In this sample, the table is populated using the DWR utility functions and a remoted Java class. The sample code also shows how DWR is used to do inline table editing. When a table cell is double-clicked, an edit box opens, and it is used to save new cell data. The sample will have country data in a CSV file: country Name, Long Name, two-letter Code, Capital, and user-defined Notes. The user interface for the table sample appears as shown in the following screenshot:

Server Code for Tables and Lists

The first thing to do is to get the country data. Country data is in a CSV file (named countries.csv and located in the samples Java package). The following is an excerpt of the content of the CSV file (data is from http://www.state.gov).

Short-form name,Long-form name,FIPS Code,Capital
Afghanistan,Islamic Republic of Afghanistan,AF,Kabul
Albania,Republic of Albania,AL,Tirana
Algeria,People's Democratic Republic of Algeria,AG,Algiers
Andorra,Principality of Andorra,AN,Andorra la Vella
Angola,Republic of Angola,AO,Luanda
Antigua and Barbuda,(no long-form name),AC,Saint John's
Argentina,Argentine Republic,AR,Buenos Aires
Armenia,Republic of Armenia,AM,Yerevan

The CSV file is read each time a client requests country data. Although this is not very efficient, it is good enough here. Other alternatives include an in-memory cache or a real database such as Apache Derby or IBM DB2. As an example, we have created a CountryDB class that is used to read and write the country CSV. We also have another class, DBUtils, which has some helper methods.
The DBUtils code is as follows:

package samples;

import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.util.List;
import java.util.Vector;

public class DBUtils {

    private String fileName = null;

    public void initFileDB(String fileName) {
        this.fileName = fileName;
        // copy csv file to bin-directory, for easy
        // file access
        File countriesFile = new File(fileName);
        if (!countriesFile.exists()) {
            try {
                List<String> countries = getCSVStrings(null);
                PrintWriter pw;
                pw = new PrintWriter(new FileWriter(countriesFile));
                for (String country : countries) {
                    pw.println(country);
                }
                pw.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }

    protected List<String> getCSVStrings(String letter) {
        List<String> csvData = new Vector<String>();
        try {
            File csvFile = new File(fileName);
            BufferedReader br = null;
            if (csvFile.exists()) {
                br = new BufferedReader(new FileReader(csvFile));
            } else {
                InputStream is = this.getClass().getClassLoader()
                        .getResourceAsStream("samples/" + fileName);
                br = new BufferedReader(new InputStreamReader(is));
                br.readLine();
            }
            for (String line = br.readLine(); line != null; line = br.readLine()) {
                if (letter == null || (letter != null && line.startsWith(letter))) {
                    csvData.add(line);
                }
            }
            br.close();
        } catch (IOException ioe) {
            ioe.printStackTrace();
        }
        return csvData;
    }
}

The DBUtils class is a straightforward utility class that returns CSV content as a List of Strings. It also copies the original CSV file to the runtime directory of any application server we might be running. This may not be the best practice, but it makes it easier to manipulate the CSV file, and we always have the original CSV file untouched if and when we need to go back to the original version.
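If the read-on-every-request approach ever became a bottleneck, the in-memory cache alternative mentioned earlier could be sketched as follows. This is a hypothetical illustration (the class and field names are ours, not part of the sample; the Supplier stands in for DBUtils.getCSVStrings(null)):

```java
import java.util.List;
import java.util.Vector;
import java.util.function.Supplier;

// Hypothetical cache wrapper: loads the CSV rows once and reuses the result.
public class CsvCache {
    private final Supplier<List<String>> loader;
    private List<String> cached;
    int loadCount = 0; // exposed for demonstration only

    public CsvCache(Supplier<List<String>> loader) {
        this.loader = loader;
    }

    public synchronized List<String> getRows() {
        if (cached == null) {
            cached = loader.get(); // only hits the "file" on the first call
            loadCount++;
        }
        return cached;
    }

    public static void main(String[] args) {
        List<String> fake = new Vector<>();
        fake.add("Afghanistan,Islamic Republic of Afghanistan,AF,Kabul");
        CsvCache cache = new CsvCache(() -> fake);
        cache.getRows();
        cache.getRows();
        System.out.println(cache.loadCount); // prints 1
    }
}
```

A real version would also need an invalidation step after saveCountryNotes() writes the file, which is exactly the complexity the book sidesteps by re-reading each time.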
The code for CountryDB is given here:

package samples;

import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;
import java.util.Arrays;
import java.util.List;
import java.util.Vector;

public class CountryDB {

    private DBUtils dbUtils = new DBUtils();
    private String fileName = "countries.csv";

    public CountryDB() {
        dbUtils.initFileDB(fileName);
    }

    public String[] getCountryData(String ccode) {
        List<String> countries = dbUtils.getCSVStrings(null);
        for (String country : countries) {
            if (country.indexOf("," + ccode + ",") > -1) {
                return country.split(",");
            }
        }
        return new String[0];
    }

    public List<List<String>> getCountries(String startLetter) {
        List<List<String>> allCountryData = new Vector<List<String>>();
        List<String> countryData = dbUtils.getCSVStrings(startLetter);
        for (String country : countryData) {
            String[] data = country.split(",");
            allCountryData.add(Arrays.asList(data));
        }
        return allCountryData;
    }

    public String[] saveCountryNotes(String ccode, String notes) {
        List<String> countries = dbUtils.getCSVStrings(null);
        try {
            PrintWriter pw = new PrintWriter(new FileWriter(fileName));
            for (String country : countries) {
                if (country.indexOf("," + ccode + ",") > -1) {
                    if (country.split(",").length == 4) {
                        // no existing notes
                        country = country + "," + notes;
                    } else {
                        if (notes.length() == 0) {
                            country = country.substring(0,
                                country.lastIndexOf(","));
                        } else {
                            country = country.substring(0,
                                country.lastIndexOf(",")) + "," + notes;
                        }
                    }
                }
                pw.println(country);
            }
            pw.close();
        } catch (IOException ioe) {
            ioe.printStackTrace();
        }
        String[] rv = new String[2];
        rv[0] = ccode;
        rv[1] = notes;
        return rv;
    }
}

The CountryDB class is a remoted class. The getCountryData() method returns country data as an array of strings based on the country code. The getCountries() method returns all the countries that start with the specified parameter, and saveCountryNotes() saves user-added notes to the country specified by the country code.
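The comma-delimited lookup in getCountryData() works because the two-letter code column is always surrounded by commas within a row. This self-contained sketch (our own cut-down copy for demonstration, not additional book code) shows the matching in isolation:

```java
import java.util.Arrays;
import java.util.List;

// Cut-down demonstration of the ",code," matching used by
// CountryDB.getCountryData().
public class CountryLookupDemo {
    static String[] findByCode(List<String> rows, String ccode) {
        for (String row : rows) {
            // The code column is delimited by commas on both sides, so a
            // search for ",AL," cannot accidentally match a country name.
            if (row.indexOf("," + ccode + ",") > -1) {
                return row.split(",");
            }
        }
        return new String[0];
    }

    public static void main(String[] args) {
        List<String> rows = Arrays.asList(
            "Afghanistan,Islamic Republic of Afghanistan,AF,Kabul",
            "Albania,Republic of Albania,AL,Tirana");
        System.out.println(findByCode(rows, "AL")[3]); // prints Tirana
    }
}
```

The delimiter trick is simple but fragile: it would misfire if a long-form name itself contained a comma-wrapped two-letter token, which is why a real application would parse the CSV into fields first.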
In order to use CountryDB, the following script element must be added to the index.jsp file together with the other JavaScript elements:

<script type='text/javascript' src='/DWREasyAjax/dwr/interface/CountryDB.js'></script>

There is one other Java class that we need to create and remote: the AppContent class that was already present in the JavaScript functions of the home page. The AppContent class is responsible for reading the content of an HTML file and parsing any JavaScript functions out of it, so that they can be used by the existing JavaScript functions in the index.jsp file.

package samples;

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.List;
import java.util.Vector;

public class AppContent {

    public AppContent() {
    }

    public List<String> getContent(String contentId) {
        InputStream is = this.getClass().getClassLoader().getResourceAsStream(
            "samples/" + contentId + ".html");
        String content = streamToString(is);
        List<String> contentList = new Vector<String>();
        // Javascript within script tag will be extracted
        // and sent separately to client
        for (String script = getScript(content); !script.equals("");
                script = getScript(content)) {
            contentList.add(script);
            content = removeScript(content);
        }
        // content list will have all the javascript
        // functions, last element is executed last
        // and all other before html content
        if (contentList.size() > 1) {
            contentList.add(contentList.size() - 1, content);
        } else {
            contentList.add(content);
        }
        return contentList;
    }

    public List<String> getLetters() {
        List<String> letters = new Vector<String>();
        char[] l = new char[1];
        for (int i = 65; i < 91; i++) {
            l[0] = (char) i;
            letters.add(new String(l));
        }
        return letters;
    }

    public String removeScript(String html) {
        // removes first script element
        int sIndex = html.toLowerCase().indexOf("<script ");
        if (sIndex == -1) {
            return html;
        }
        int eIndex = html.toLowerCase().indexOf("</script>") + 9;
        return html.substring(0, sIndex) + html.substring(eIndex);
    }

    public String getScript(String html) {
        // returns first script element
        int sIndex = html.toLowerCase().indexOf("<script ");
        if (sIndex == -1) {
            return "";
        }
        int eIndex = html.toLowerCase().indexOf("</script>") + 9;
        return html.substring(sIndex, eIndex);
    }

    public String streamToString(InputStream is) {
        String content = "";
        try {
            ByteArrayOutputStream baos = new ByteArrayOutputStream();
            for (int b = is.read(); b != -1; b = is.read()) {
                baos.write(b);
            }
            content = baos.toString();
        } catch (IOException ioe) {
            content = ioe.toString();
        }
        return content;
    }
}

The getContent() method reads the HTML code from a file based on the contentId. The contentId was specified in the dwrapplication.properties file, and the HTML file name is just the contentId plus the extension .html in the package directory. There is also a getLetters() method that simply lists letters from A to Z and returns a list of letters to the browser. If we test the application now, we will get an error as shown in the following screenshot:

We know why the "AppContent is not defined" error occurs, so let's fix it by adding AppContent to the allow element in the dwr.xml file. We also add CountryDB there. We add the following creators within the allow element in the dwr.xml file:

<create creator="new" javascript="AppContent">
  <param name="class" value="samples.AppContent" />
  <include method="getContent" />
  <include method="getLetters" />
</create>
<create creator="new" javascript="CountryDB">
  <param name="class" value="samples.CountryDB" />
  <include method="getCountries" />
  <include method="saveCountryNotes" />
  <include method="getCountryData" />
</create>

We explicitly define the methods we are remoting using the include elements. This is a good practice, as we don't accidentally allow access to any methods that are not meant to be remoted.

Client Code for Tables and Lists

We also need to add a JavaScript interface to the index.jsp page.
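The script-extraction logic in AppContent is easy to verify in isolation. This standalone sketch (our own trimmed copy of getScript()/removeScript() for demonstration; the originals live in samples.AppContent) shows one round of extraction:

```java
// Trimmed, standalone copies of AppContent.getScript()/removeScript().
public class ScriptExtractDemo {
    static String getScript(String html) {
        int sIndex = html.toLowerCase().indexOf("<script ");
        if (sIndex == -1) {
            return "";
        }
        // "</script>" is 9 characters long, hence the + 9
        int eIndex = html.toLowerCase().indexOf("</script>") + 9;
        return html.substring(sIndex, eIndex);
    }

    static String removeScript(String html) {
        int sIndex = html.toLowerCase().indexOf("<script ");
        if (sIndex == -1) {
            return html;
        }
        int eIndex = html.toLowerCase().indexOf("</script>") + 9;
        return html.substring(0, sIndex) + html.substring(eIndex);
    }

    public static void main(String[] args) {
        String html = "<p>Hi</p><script type='text/javascript'>doIt();</script>";
        System.out.println(getScript(html));    // the whole script element
        System.out.println(removeScript(html)); // just the remaining HTML
    }
}
```

Note that the search pattern "<script " includes a trailing space, so a bare <script> tag without attributes would not be matched; the sample pages always carry a type attribute, so this works for them.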
Add the following with the rest of the scripts in the index.jsp file:

<script type='text/javascript' src='/DWREasyAjax/dwr/interface/AppContent.js'></script>

Before testing, we need the sample HTML for the content area. The following HTML is in the TablesAndLists.html file under the samples directory:

<h3>Countries</h3>
<p>Show countries starting with
<select id="letters" onchange="selectLetter(this);return false;">
</select><br/>
Doubleclick "Notes"-cell to add notes to country.</p>
<table border="1">
  <thead>
    <tr>
      <th>Name</th>
      <th>Long name</th>
      <th>Code</th>
      <th>Capital</th>
      <th>Notes</th>
    </tr>
  </thead>
  <tbody id="countryData">
  </tbody>
</table>
<script type='text/javascript'>
//TO BE EVALED
AppContent.getLetters(addLetters);
</script>

The script element at the end is extracted by our Java class, and it is then evaluated by the browser when the client-side JavaScript receives the HTML. There is the select element, and its onchange event calls the selectLetter() JavaScript function. We will implement the selectLetter() function shortly. JavaScript functions are added in the index.jsp file, within the head element. Functions could be in separate JavaScript files, but the embedded script is just fine here.
function selectLetter(selectElement) {
  var selectedIndex = selectElement.selectedIndex;
  var selectedLetter = selectElement.options[selectedIndex].value;
  CountryDB.getCountries(selectedLetter, setCountryRows);
}

function addLetters(letters) {
  dwr.util.addOptions('letters', ['letter...']);
  dwr.util.addOptions('letters', letters);
}

function setCountryRows(countryData) {
  var cellFuncs = [
    function(data) { return data[0]; },
    function(data) { return data[1]; },
    function(data) { return data[2]; },
    function(data) { return data[3]; },
    function(data) { return data[4]; }
  ];
  dwr.util.removeAllRows('countryData');
  dwr.util.addRows('countryData', countryData, cellFuncs, {
    cellCreator: function(options) {
      var td = document.createElement("td");
      if (options.cellNum == 4) {
        var notes = options.rowData[4];
        if (notes == undefined) {
          notes = '&nbsp;'; // + options.rowData[2]+'notes';
        }
        var ccode = options.rowData[2];
        var divId = ccode + '_Notes';
        var tdId = divId + 'Cell';
        td.setAttribute('id', tdId);
        var html = getNotesHtml(ccode, notes);
        td.innerHTML = html;
        options.data = html;
      }
      return td;
    },
    escapeHtml: false
  });
}

function getNotesHtml(ccode, notes) {
  var divId = ccode + '_Notes';
  return "<div onDblClick=\"editCountryNotes('" + divId + "','" + ccode +
      "');\" id=\"" + divId + "\">" + notes + "</div>";
}

function editCountryNotes(id, ccode) {
  var notesElement = dwr.util.byId(id);
  var tdId = id + 'Cell';
  var notes = notesElement.innerHTML;
  if (notes == '&nbsp;') {
    notes = '';
  }
  var editBox = '<input id="' + ccode +
      'NotesEditBox" type="text" value="' + notes + '"/><br/>';
  editBox += "<input type='button' id='" + ccode +
      "SaveNotesButton' value='Save' onclick='saveCountryNotes(\"" +
      ccode + "\");'/>";
  editBox += "<input type='button' id='" + ccode +
      "CancelNotesButton' value='Cancel' onclick='cancelEditNotes(\"" +
      ccode + "\");'/>";
  tdElement = dwr.util.byId(tdId);
  tdElement.innerHTML = editBox;
  dwr.util.byId(ccode + 'NotesEditBox').focus();
}

function cancelEditNotes(ccode) {
  var countryData = CountryDB.getCountryData(ccode, {
    callback: function(data) {
      var notes = data[4];
      if (notes == undefined) {
        notes = '&nbsp;';
      }
      var html = getNotesHtml(ccode, notes);
      var tdId = ccode + '_NotesCell';
      var td = dwr.util.byId(tdId);
      td.innerHTML = html;
    }
  });
}

function saveCountryNotes(ccode) {
  var editBox = dwr.util.byId(ccode + 'NotesEditBox');
  var newNotes = editBox.value;
  CountryDB.saveCountryNotes(ccode, newNotes, {
    callback: function(newNotes) {
      var ccode = newNotes[0];
      var notes = newNotes[1];
      var notesHtml = getNotesHtml(ccode, notes);
      var td = dwr.util.byId(ccode + "_NotesCell");
      td.innerHTML = notesHtml;
    }
  });
}

There are quite a few functions in the table sample, and we will go through each one of them. The first is the selectLetter() function. This function gets the selected letter from the select element and calls the CountryDB.getCountries() remoted Java method. The callback function is setCountryRows. This function receives the return value from the Java getCountries() method, that is List<List<String>>, a List of Lists of Strings. The second function is addLetters(letters), and it is a callback function for the AppContent.getLetters() method, which simply returns letters from A to Z. The addLetters() function uses the DWR utility functions to populate the letter list. Then there is setCountryRows(), the callback function for the CountryDB.getCountries() method. The parameter for the function is an array of countries that begin with the specified letter. Each array element has the format: Name, Long name, Code (country code), Capital, Notes. The purpose of this function is to populate the table with country data; let's see how it is done. The variable cellFuncs holds functions for retrieving data for each cell in a column. The parameter named data is an array of country data that was returned from the Java class. The table is populated using the DWR utility function addRows(). The cellFuncs variable is used to get the correct data for the table cell. The cellCreator function is used to create custom HTML for the table cell.
The default implementation generates just a td element, but our custom implementation generates the td element with a div placeholder for user notes. The getNotesHtml() function is used to generate the div element with an event listener for double-click. The editCountryNotes() function is called when the table cell is double-clicked. The function creates input fields for editing notes, with Save and Cancel buttons. The cancelEditNotes() and saveCountryNotes() functions cancel the editing of notes, or save them by calling the CountryDB.saveCountryNotes() Java method. The following screenshot shows what the sample looks like with the populated table:

Now that we have added the necessary functions to the web page, we can test the application.

Testing Tables and Lists

The application should be ready for testing if we have had the test environment running during development. Eclipse automatically deploys our new code to the server whenever something changes. So we can go right away to the test page http://127.0.0.1:8080/DWREasyAjax. On clicking Tables and lists, we see the page we have developed. By selecting some letter, for example "I", we get a list of all the countries that start with the letter "I" (as shown in the previous screenshot). Now we can add notes to countries. We can double-click any table cell under Notes. For example, if we want to enter notes for Iceland, we double-click the Notes cell in Iceland's table row, and we get the edit box for the notes as shown in the following screenshot:

The edit box is a simple text input field. We didn't use any forms. Saving and canceling editing is done using JavaScript and DWR. If we press Cancel, we get the original notes from the CountryDB Java class using DWR, and saving also uses DWR: CountryDB.saveCountryNotes() takes the country code and the notes that the user entered in the edit box and saves them to the CSV file.
When notes are available, the application will show them in the country table together with other country information, as shown in the following screenshot:

Afterword

The sample in this section uses DWR features to get data for the table and list from the server. We developed the application so that most of the application logic is written in JavaScript and in Java beans that are remoted. In principle, the application logic can be thought of as being fully browser based, with some extensions in the server.

Implementing Field Completion

Nowadays, field completion is typical of many web pages. A typical use case is getting a stock quote, where field completion shows matching symbols as users type letters. Many Internet sites use this feature. Our sample here is a simple license text finder. We enter the license name in the input text field, and we use DWR to show the license names that start with the typed text. A list of possible completions is shown below the input field. The following is a screenshot of the field completion in action:

The selected license content is shown in an iframe element from http://www.opensource.org.

Server Code for Field Completion

We will re-use some of the classes we developed in the last section. AppContent is used to load the sample page, and the DBUtils class is used in the LicenseDB class.
The LicenseDB class is shown here:

package samples;

import java.util.List;
import java.util.Vector;

public class LicenseDB {

    private DBUtils dbUtils = new DBUtils();

    public LicenseDB() {
        dbUtils.initFileDB("licenses.csv");
    }

    public List<String> getLicensesStartingWith(String startLetters) {
        List<String> list = new Vector<String>();
        List<String> licenses = dbUtils.getCSVStrings(startLetters);
        for (String license : licenses) {
            list.add(license.split(",")[0]);
        }
        return list;
    }

    public String getLicenseContentUrl(String licenseName) {
        List<String> licenses = dbUtils.getCSVStrings(licenseName);
        if (licenses.size() > 0) {
            return licenses.get(0).split(",")[1];
        }
        return "";
    }
}

The getLicensesStartingWith() method goes through the license rows and returns the names of the licenses that start with the given letters; getLicenseContentUrl() returns the URL for the first matching license. Similar to the data in the previous section, license data is in a CSV file named licenses.csv in the package directory. The following is an excerpt of the file content:

Academic Free License, http://opensource.org/licenses/afl-3.0.php
Adaptive Public License, http://opensource.org/licenses/apl1.0.php
Apache Software License, http://opensource.org/licenses/apachepl-1.1.php
Apache License, http://opensource.org/licenses/apache2.0.php
Apple Public Source License, http://opensource.org/licenses/apsl-2.0.php
Artistic license, http://opensource.org/licenses/artistic-license-1.0.php
...

There are quite a few open-source licenses. Some are more popular than others (like the Apache Software License) and some cannot be re-used (like the IBM Public License). We want to remote the LicenseDB class, so we add the following to the dwr.xml file:

<create creator="new" javascript="LicenseDB">
  <param name="class" value="samples.LicenseDB" />
  <include method="getLicensesStartingWith" />
  <include method="getLicenseContentUrl" />
</create>

Client Code for Field Completion

The following script element will go in the index.jsp page.
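Both LicenseDB methods lean on DBUtils.getCSVStrings(letter), which filters rows by prefix. A standalone reduction of that filtering (our own demonstration code, not more of the book's listing) makes the behavior easy to see:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Vector;

// Reduced version of the prefix filter inside DBUtils.getCSVStrings():
// a null letter returns every row, otherwise only rows starting with it.
public class PrefixFilterDemo {
    static List<String> filter(List<String> rows, String prefix) {
        List<String> out = new Vector<String>();
        for (String row : rows) {
            if (prefix == null || row.startsWith(prefix)) {
                out.add(row);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> rows = Arrays.asList(
            "Academic Free License,http://opensource.org/licenses/afl-3.0.php",
            "Apache License,http://opensource.org/licenses/apache2.0.php",
            "BSD-ish Example License,http://example.org/bsd"); // made-up row
        System.out.println(filter(rows, "A").size());  // prints 2
        System.out.println(filter(rows, null).size()); // prints 3
    }
}
```

Because the filter is a plain startsWith(), getLicensesStartingWith() works equally well with one typed letter or several, which is exactly what the completion pop-up needs.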
<script type='text/javascript' src='/DWREasyAjax/dwr/interface/LicenseDB.js'></script>

The HTML for the field completion is as follows:

<h3>Field completion</h3>
<p>Enter Open Source license name to see its contents.</p>
<input type="text" id="licenseNameEditBox" value="" onkeyup="showPopupMenu()" size="40"/>
<input type="button" id="showLicenseTextButton" value="Show license text" onclick="showLicenseText()"/>
<div id="completionMenuPopup"></div>
<div id="licenseContent"></div>

The input element, where we enter the license name, listens to the onkeyup event, which calls the showPopupMenu() JavaScript function. Clicking the input button calls the showLicenseText() function (the JavaScript functions are explained shortly). Finally, the two div elements are placeholders for the pop-up menu and the iframe element that shows license content. For the pop-up box functionality, we use existing code and modify it for our purpose (many thanks to http://www.jtricks.com). The following is the popup.js file, which is located under the WebContent | js directory:

//<script type="text/javascript"><!--
/* Original script by: www.jtricks.com
 * Version: 20070301
 * Latest version:
 * www.jtricks.com/javascript/window/box.html
 *
 * Modified by Sami Salkosuo.
 */
// Moves the box object to be directly beneath an object.
function move_box(an, box) {
  var cleft = 0;
  var ctop = 0;
  var obj = an;
  while (obj.offsetParent) {
    cleft += obj.offsetLeft;
    ctop += obj.offsetTop;
    obj = obj.offsetParent;
  }
  box.style.left = cleft + 'px';
  ctop += an.offsetHeight + 8;
  // Handle Internet Explorer body margins,
  // which affect normal document, but not
  // absolute-positioned stuff.
  if (document.body.currentStyle
      && document.body.currentStyle['marginTop']) {
    ctop += parseInt(document.body.currentStyle['marginTop']);
  }
  box.style.top = ctop + 'px';
}

var popupMenuInitialised = false;

// Shows a box if it wasn't shown yet or is hidden
// or hides it if it is currently shown
function show_box(html, width, height, borderStyle, id) {
  // Create box object through DOM
  var boxdiv = document.getElementById(id);
  boxdiv.style.display = 'block';
  if (popupMenuInitialised == false) {
    //boxdiv = document.createElement('div');
    boxdiv.setAttribute('id', id);
    boxdiv.style.display = 'block';
    boxdiv.style.position = 'absolute';
    boxdiv.style.width = width + 'px';
    boxdiv.style.height = height + 'px';
    boxdiv.style.border = borderStyle;
    boxdiv.style.textAlign = 'right';
    boxdiv.style.padding = '4px';
    boxdiv.style.background = '#FFFFFF';
    boxdiv.style.zIndex = '99';
    popupMenuInitialised = true;
    //document.body.appendChild(boxdiv);
  }
  var contentId = id + 'Content';
  var contents = document.getElementById(contentId);
  if (contents == null) {
    contents = document.createElement('div');
    contents.setAttribute('id', id + 'Content');
    contents.style.textAlign = 'left';
    boxdiv.contents = contents;
    boxdiv.appendChild(contents);
  }
  move_box(html, boxdiv);
  contents.innerHTML = html;
  return false;
}

function hide_box(id) {
  document.getElementById(id).style.display = 'none';
  var boxdiv = document.getElementById(id + 'Content');
  if (boxdiv != null) {
    boxdiv.parentNode.removeChild(boxdiv);
  }
  return false;
}
//--></script>

The functions in the popup.js file are used to show the menu options directly below the edit box. The show_box() function takes the following arguments: the HTML code for the pop-up, the position of the pop-up window, and the "parent" element (to which the pop-up box is related). The function then creates a pop-up window using DOM. The move_box() function is used to move the pop-up window to its correct place under the edit box, and the hide_box() function hides the pop-up window by removing it from the DOM tree.
In order to use the functions in popup.js, we need to add the following script element to the index.jsp file:

<script type='text/javascript' src='js/popup.js'></script>

Our own JavaScript code for the field completion is in the index.jsp file. The following are the JavaScript functions; an explanation follows the code:

function showPopupMenu() {
  var licenseNameEditBox = dwr.util.byId('licenseNameEditBox');
  var startLetters = licenseNameEditBox.value;
  LicenseDB.getLicensesStartingWith(startLetters, {
    callback: function(licenses) {
      var html = "";
      if (licenses.length == 0) {
        return;
      }
      if (licenses.length == 1) {
        hidePopupMenu();
        licenseNameEditBox.value = licenses[0];
      } else {
        for (index in licenses) {
          var licenseName = licenses[index]; //.split(",")[0];
          licenseName = licenseName.replace(/"/g, "&quot;");
          html += "<div style=\"border:1px solid #777777;margin-bottom:5;\"" +
              " onclick=\"completeEditBox('" + licenseName + "');\">" +
              licenseName + "</div>";
        }
        show_box(html, 200, 270, '1px solid', 'completionMenuPopup');
      }
    }
  });
}

function hidePopupMenu() {
  hide_box('completionMenuPopup');
}

function completeEditBox(licenseName) {
  var licenseNameEditBox = dwr.util.byId('licenseNameEditBox');
  licenseNameEditBox.value = licenseName;
  hidePopupMenu();
  dwr.util.byId('showLicenseTextButton').focus();
}

function showLicenseText() {
  var licenseNameEditBox = dwr.util.byId('licenseNameEditBox');
  licenseName = licenseNameEditBox.value;
  LicenseDB.getLicenseContentUrl(licenseName, {
    callback: function(licenseUrl) {
      var html = '<iframe src="' + licenseUrl +
          '" width="100%" height="600"></iframe>';
      var content = dwr.util.byId('licenseContent');
      content.style.zIndex = "1";
      content.innerHTML = html;
    }
  });
}

The showPopupMenu() function is called each time a user enters a letter in the input box. The function gets the value of the input field and calls the LicenseDB.getLicensesStartingWith() method. The callback function is specified in the function parameters.
The callback function gets all the licenses that match the parameter, and based on the length of the parameter (which is an array), it either shows a pop-up box with all the matching license names, or, if the array length is one, hides the pop-up box and inserts the full license name in the text field. In the pop-up box, the license names are wrapped within div elements that have an onclick event listener that calls the completeEditBox() function. The hidePopupMenu() function just closes the pop-up menu, and the completeEditBox() function inserts the clicked license text in the input box and moves the focus to the button. The showLicenseText() function is called when we click the Show license text button. The function calls the LicenseDB.getLicenseContentUrl() method, and the callback function creates an iframe element to show the license content directly from http://www.opensource.org, as shown in the following screenshot:

Afterword

Field completion improves the user experience in web pages, and the sample code in this section showed one way of doing it using DWR. It should be noted that the sample for field completion presented here is only for demonstration purposes.

DWR Java AJAX User Interface: Basic Elements (Part 1)

Packt
20 Oct 2009
16 min read
Creating a Dynamic User Interface

The idea behind a dynamic user interface is to have a common "framework" for all samples. We will create a new web application and then add new features to the application as we go on. The user interface will look something like the following figure: The user interface has three main areas: the title/logo, which is static; the tabs, which are dynamic; and the content area that shows the actual content. The idea behind this implementation is to use DWR functionality to generate the tabs and to get content for the tab pages. The tabbed user interface is created using a CSS template from the Dynamic Drive CSS Library (http://dynamicdrive.com/style/csslibrary/item/css-tabs-menu). Tabs are read from a properties file, so it is possible to dynamically add new tabs to the web page. The following screenshot shows the user interface. The following sequence diagram shows the application flow from the logical perspective. Because of the built-in DWR features we don't need to worry very much about how the asynchronous AJAX "stuff" works. This is, of course, a Good Thing. Now we will develop the application using the Eclipse IDE and the Geronimo test environment.

Creating a New Web Project

First, we will create a new web project. Using the Eclipse IDE we do the following: select the menu File | New | Dynamic Web Project. This opens the New Dynamic Web Project dialog; enter the project name DWREasyAjax and click Next, and accept the defaults on all the pages till the last page, where the Geronimo Deployment Plan is created as shown in the following screenshot: Enter easyajax as Group Id and DWREasyAjax as Artifact Id. On clicking Finish, Eclipse creates a new web project. The following screenshot shows the generated project and the directory hierarchy. Before starting to do anything else, we need to copy DWR to our web application. All DWR functionality is present in the dwr.jar file, and we just copy that to the WEB-INF | lib directory.
A couple of files are noteworthy: web.xml and geronimo-web.xml. The latter is generated for the Geronimo application server, and we can leave it as it is. Eclipse has an editor to show the contents of geronimo-web.xml when we double-click the file.

Configuring the Web Application

The context root is worth noting (visible in the screenshot above). We will need it when we test the application. The other XML file, web.xml, is very important as we all know. This XML will hold the DWR servlet definition and other possible initialization parameters. The following code shows the full contents of the web.xml file that we will use:

<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://java.sun.com/xml/ns/javaee"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
    http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
    id="WebApp_ID" version="2.5">
  <display-name>DWREasyAjax</display-name>
  <servlet>
    <display-name>DWR Servlet</display-name>
    <servlet-name>dwr-invoker</servlet-name>
    <servlet-class>
      org.directwebremoting.servlet.DwrServlet
    </servlet-class>
    <init-param>
      <param-name>debug</param-name>
      <param-value>true</param-value>
    </init-param>
  </servlet>
  <servlet-mapping>
    <servlet-name>dwr-invoker</servlet-name>
    <url-pattern>/dwr/*</url-pattern>
  </servlet-mapping>
  <welcome-file-list>
    <welcome-file>index.html</welcome-file>
    <welcome-file>index.htm</welcome-file>
    <welcome-file>index.jsp</welcome-file>
    <welcome-file>default.html</welcome-file>
    <welcome-file>default.htm</welcome-file>
    <welcome-file>default.jsp</welcome-file>
  </welcome-file-list>
</web-app>

DWR cannot function without the dwr.xml configuration file, so we need to create it. We use Eclipse to create a new XML file in the WEB-INF directory. The following is required for the user interface skeleton. It already includes the allow element for our DWR-based menu.
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE dwr PUBLIC "-//GetAhead Limited//DTD Direct Web Remoting 2.0//EN"
    "http://getahead.org/dwr/dwr20.dtd">
<dwr>
  <allow>
    <create creator="new" javascript="HorizontalMenu">
      <param name="class" value="samples.HorizontalMenu" />
    </create>
  </allow>
</dwr>

In the allow element, there is a creator for the horizontal menu Java class that we are going to implement here. The creator that we use here is the new creator, which means that DWR will use an empty constructor to create Java objects for clients. The parameter named class holds the fully qualified class name.

Developing the Web Application

Since we have already defined the name of the Java class that will be used for creating the menu, the next thing we do is implement it. The idea behind the HorizontalMenu class is that it is used to read a properties file that holds the menus that are going to be on the web page. We add properties to a file named dwrapplication.properties, and we create it in the same samples package as the HorizontalMenu class. The properties file for the menu items is as follows:

menu.1=Tables and lists,TablesAndLists
menu.2=Field completion,FieldCompletion

The syntax for the menu property is that it contains two elements separated by a comma. The first element is the name of the menu item. This is visible to the user. The second is the name of the HTML template file that will hold the page content of the menu item. The class contains just one method, which is used from JavaScript and via DWR to retrieve the menu items.
The full class implementation is shown here:

package samples;

import java.io.IOException;
import java.io.InputStream;
import java.util.List;
import java.util.Properties;
import java.util.Vector;

public class HorizontalMenu {

    public HorizontalMenu() {
    }

    public List<String> getMenuItems() throws IOException {
        List<String> menuItems = new Vector<String>();
        InputStream is = this.getClass().getClassLoader()
                .getResourceAsStream("samples/dwrapplication.properties");
        Properties appProps = new Properties();
        appProps.load(is);
        is.close();
        for (int menuCount = 1; true; menuCount++) {
            String menuItem = appProps.getProperty("menu." + menuCount);
            if (menuItem == null) {
                break;
            }
            menuItems.add(menuItem);
        }
        return menuItems;
    }
}

The implementation is straightforward. The getMenuItems() method loads the properties using the ClassLoader.getResourceAsStream() method, which searches the class path for the specified resource. After the properties are loaded, a for loop collects the menu items, and a List of String objects is returned to the client. The client is the JavaScript callback function that we will see later. DWR automatically converts the List of String objects to a JavaScript array, so we don't have to worry about that.

Testing the Web Application

We haven't written any client-side code yet, but let's test what we have anyway. Testing uses the Geronimo test environment. The project's context menu has the Run As menu that we use to test the application, as shown in the following screenshot:

Run on Server opens a wizard to define a new server runtime. The following screenshot shows that the Geronimo test environment has already been set up, so we just click Finish to run the application. If the test environment is not set up, we can define a new one manually in this dialog:

After we click Finish, Eclipse starts the Geronimo test environment and our application with it. When the server starts, the Console tab in Eclipse informs us that it has been started.
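Before wiring this into DWR, the property format itself is easy to verify in isolation. The following standalone sketch (the class name and helper method are inventions for this illustration, not part of the application above) reads numbered menu.N keys until the first gap and splits each value into its label and template-name halves:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Properties;

class MenuPropertiesSketch {

    // Collects menu.1, menu.2, ... until the first missing key, splitting
    // each value at the first comma into a (label, template) pair.
    static Map<String, String> readMenu(Properties props) {
        Map<String, String> menu = new LinkedHashMap<String, String>();
        for (int i = 1; true; i++) {
            String item = props.getProperty("menu." + i);
            if (item == null) {
                break;
            }
            String[] parts = item.split(",", 2);
            menu.put(parts[0], parts[1]);
        }
        return menu;
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("menu.1", "Tables and lists,TablesAndLists");
        props.setProperty("menu.2", "Field completion,FieldCompletion");
        // Prints the pairs in declaration order:
        // {Tables and lists=TablesAndLists, Field completion=FieldCompletion}
        System.out.println(readMenu(props));
    }
}
```

The raw comma-separated strings are what the application actually ships to the browser; the split into label and template there is done by the JavaScript menuItemFormatter() function shown later.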
The Servers tab shows that the server is started and all the code has been synchronized, that is, the code is the most recent (synchronization happens whenever we save changes to a deployed file). The Servers tab also lists the deployed applications under the server; only the one application that we are testing here is visible.

Now comes the interesting part: what are we going to test if we haven't really implemented anything? If we take a look at the web.xml file, we will find that we have defined one initialization parameter. The debug parameter is set to true, which means that DWR generates test pages for our remoted Java classes. We just point the browser (Firefox in our case) to the URL http://127.0.0.1:8080/DWREasyAjax/dwr and the following page opens up:

This page shows a list of all the classes that we allow to be remoted. When we click the class name, a test page opens, as in the following screenshot:

This is an interesting page. We see all the allowed methods, in this case all public class methods, since we didn't specifically include or exclude anything. The most important parts are the script elements, which we need to include in our HTML pages. DWR does not automatically know what we want in our web pages, so we must add the script includes to each page where we use DWR and a remoted functionality.

There is also the possibility of testing remoted methods. When we test our own method, getMenuItems(), we see the response in an alert box:

The array in the alert box in the screenshot is the JavaScript array that DWR returns from our method.

Developing Web Pages

The next step is to add the web pages. Note that we can leave the test environment running. Whenever we change the application code, it is automatically published to the test environment, so we don't need to stop and start the server each time we make changes and want to test the application. The CSS style sheet is from the Dynamic Drive CSS Library.
The file is named styles.css, and it is in the WebContent directory in the Eclipse IDE. The CSS code is as follows:

/* URL: http://www.dynamicdrive.com/style/ */
.basictab {
    padding: 3px 0;
    margin-left: 0;
    font: bold 12px Verdana;
    border-bottom: 1px solid gray;
    list-style-type: none;
    text-align: left; /* set to left, center, or right to align the menu as desired */
}
.basictab li {
    display: inline;
    margin: 0;
}
.basictab li a {
    text-decoration: none;
    padding: 3px 7px;
    margin-right: 3px;
    border: 1px solid gray;
    border-bottom: none;
    background-color: #f6ffd5;
    color: #2d2b2b;
}
.basictab li a:visited {
    color: #2d2b2b;
}
.basictab li a:hover {
    background-color: #DBFF6C;
    color: black;
}
.basictab li a:active {
    color: black;
}
.basictab li.selected a { /* selected tab effect */
    position: relative;
    top: 1px;
    padding-top: 4px;
    background-color: #DBFF6C;
    color: black;
}

This CSS is shown for the sake of completeness, and we will not go into the details of CSS style sheets here. It is sufficient to say that CSS provides an excellent way to create websites with a good presentation.

The next step is the actual web page. We create an index.jsp page in the WebContent directory; it will hold the menu and also the JavaScript functions for our samples. It should be noted that although all of the JavaScript code is added to a single JSP page in this sample, in "real" projects it would probably be more useful to put the JavaScript functions in a separate file and include that file in the HTML/JSP page using a code snippet such as this:

<script type="text/javascript" src="myjavascriptcode/HorizontalMenu.js"></script>

We will add JavaScript functions later for each sample. The following is the JSP code that shows the menu using the remoted HorizontalMenu class.
<%@ page language="java" contentType="text/html; charset=ISO-8859-1"
    pageEncoding="ISO-8859-1"%>
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
    "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
<link href="styles.css" rel="stylesheet" type="text/css"/>
<script type='text/javascript' src='/DWREasyAjax/dwr/engine.js'></script>
<script type='text/javascript' src='/DWREasyAjax/dwr/util.js'></script>
<script type='text/javascript' src='/DWREasyAjax/dwr/interface/HorizontalMenu.js'></script>
<title>DWR samples</title>
<script type="text/javascript">
function loadMenuItems() {
  HorizontalMenu.getMenuItems(setMenuItems);
}

function getContent(contentId) {
  AppContent.getContent(contentId, setContent);
}

function menuItemFormatter(item) {
  elements = item.split(',');
  return '<li><a href="#" onclick="getContent(\'' + elements[1]
      + '\');return false;">' + elements[0] + '</a></li>';
}

function setMenuItems(menuItems) {
  menu = dwr.util.byId("dwrMenu");
  menuItemsHtml = '';
  for (var i = 0; i < menuItems.length; i++) {
    menuItemsHtml = menuItemsHtml + menuItemFormatter(menuItems[i]);
  }
  menu.innerHTML = menuItemsHtml;
}

function setContent(htmlArray) {
  var contentFunctions = '';
  var scriptToBeEvaled = '';
  var contentHtml = '';
  for (var i = 0; i < htmlArray.length; i++) {
    var html = htmlArray[i];
    if (html.toLowerCase().indexOf('<script') > -1) {
      if (html.indexOf('TO BE EVALED') > -1) {
        scriptToBeEvaled = html.substring(html.indexOf('>') + 1, html.indexOf('</'));
      } else {
        eval(html.substring(html.indexOf('>') + 1, html.indexOf('</')));
        contentFunctions += html;
      }
    } else {
      contentHtml += html;
    }
  }
  contentScriptArea = dwr.util.byId("contentAreaFunctions");
  contentScriptArea.innerHTML = contentFunctions;
  contentArea = dwr.util.byId("contentArea");
  contentArea.innerHTML = contentHtml;
  if (scriptToBeEvaled != '') {
    eval(scriptToBeEvaled);
  }
}
</script>
</head>
<body onload="loadMenuItems()">
<h1>DWR Easy Java Ajax Applications</h1>
<ul class="basictab" id="dwrMenu">
</ul>
<div id="contentAreaFunctions">
</div>
<div id="contentArea">
</div>
</body>
</html>

This JSP is our user interface. The HTML is just normal HTML with a head element and a body element. The head includes references to the style sheet and to the DWR JavaScript files: engine.js, util.js, and our own HorizontalMenu.js. The util.js file is optional, but as it contains very useful functions, it could be included in every web page that uses functions from util.js.

The body element has a contentArea placeholder for the content pages just below the menu. It also contains a content area for the JavaScript functions of a particular content page. The body element's onload event executes the loadMenuItems() function when the page is loaded.

The loadMenuItems() function calls the remoted method of the HorizontalMenu Java class. The parameter of the HorizontalMenu.getMenuItems() JavaScript function is the callback function that DWR calls when the Java method has executed and returned the menu items.

The setMenuItems() function is the callback for loadMenuItems(). When the menu items are loaded, the HorizontalMenu.getMenuItems() remoted method returns them as a List of Strings, which arrives as the parameter of setMenuItems(). The menu items are formatted using the menuItemFormatter() helper function, which creates li elements from the menu texts. Menus are formatted as links (a href), and each has an onclick event that calls the getContent() function, which in turn calls the AppContent.getContent() function. AppContent is a remoted Java class, which we haven't implemented yet; its purpose is to read the HTML from a file based on the menu item that the user clicked. The implementation of AppContent and the content pages are described in the next section.
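Before that implementation appears, here is a rough, hypothetical sketch of the kind of work such a content provider has to do: return the lines of an HTML template as a List of Strings, which DWR would hand to setContent() as a JavaScript array. The class and method names below are placeholders for this illustration, not the article's actual AppContent code:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

class AppContentSketch {

    // Reads every line of an HTML template into a list; DWR would convert
    // the returned List<String> into the htmlArray seen by setContent().
    static List<String> readLines(Reader source) throws IOException {
        BufferedReader in = new BufferedReader(source);
        List<String> lines = new ArrayList<String>();
        String line;
        while ((line = in.readLine()) != null) {
            lines.add(line);
        }
        in.close();
        return lines;
    }

    // A remoted getContent(contentId) method could locate the template on the
    // class path (for example "samples/TablesAndLists.html") and delegate to
    // readLines(); that lookup is omitted to keep this sketch self-contained.
    public static void main(String[] args) throws IOException {
        Reader template = new StringReader(
                "<h2>Tables and lists</h2>\n<p>Sample content</p>");
        // Prints: [<h2>Tables and lists</h2>, <p>Sample content</p>]
        System.out.println(readLines(template));
    }
}
```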
The setContent() function sets the HTML content into the content area, and it also evaluates any JavaScript embedded in the content being inserted (this is not used very much, but it is there for those who need it). Our dynamic user interface looks like this:

Note the Firebug window at the bottom of the browser screen. The Firebug console in the screenshot shows one POST request to our HorizontalMenu.getMenuItems() method. The other Firebug features are also extremely useful during development work, and we find it helpful to keep Firebug enabled throughout.

Callback Functions

We saw our first callback function as a parameter in the HorizontalMenu.getMenuItems(setMenuItems) call, and since callbacks are an important concept in DWR, it is worth discussing them a little more now that we have seen their first use.

Callbacks are used to operate on the data returned from a remoted method. Because DWR and AJAX are asynchronous, ordinary return values, as in plain Java calls, do not work. DWR hides the details of invoking the callback functions and handles everything internally, from the moment a value is returned from the remoted Java method to the moment the callback function receives it.

Two methods are recommended for using callback functions. We have already seen the first in the HorizontalMenu.getMenuItems(setMenuItems) call. Remember that the getMenuItems() Java method takes no parameters, but in the JavaScript call we appended the callback function name to the parameter list. If the Java method has parameters, the JavaScript call is similar to CountryDB.getCountries(selectedLetters, setCountryRows), where selectedLetters is the input parameter for the Java method and setCountryRows is the name of the callback function (we will see the implementation later).
The second method is to use a meta-data object in the remote JavaScript call. An example (the full implementation is shown later in this article) is shown here:

CountryDB.saveCountryNotes(ccode, newNotes, {
  callback: function(newNotes) {
    // function body here
  }
});

Here, the function is anonymous and its implementation is included directly in the JavaScript call to the remoted Java method. One advantage of this style is that the code is easy to read, and it is executed immediately after we get the return value from the Java method. The other advantage is that we can add extra options to the call, such as a timeout and an error handler, as shown in the following example:

CountryDB.saveCountryNotes(ccode, newNotes, {
  callback: function(newNotes) {
    // function body here
  },
  timeout: 10000,
  errorHandler: function(errorMsg) { alert(errorMsg); }
});

It is also possible to add a callback function to Java methods that do not return a value. This is useful for getting a notification when the remote call has completed.

Afterword

Our first sample is ready, and it is also the basis for the following samples. We have also seen how applications are tested in the Eclipse environment. Using DWR, we can look at the JavaScript code in the browser and the Java code on the server as one. It may take a while to get used to, but it will change the way we develop web applications. Logically, there is no longer a client and a server, but just a single runtime platform that happens to be physically separated. In practice, of course, applications using DWR (JavaScript on the client and Java on the server) still follow the typical client-server interaction, and this should be remembered when writing applications for this logically single runtime platform.
Packt
20 Oct 2009
17 min read

SOA—Service Oriented Architecture

What is SOA?

SOA is the acronym for Service Oriented Architecture. As it has come to be known, SOA is an architectural design pattern by which several guiding principles determine the nature of the design. Basically, SOA states that every component of a system should be a service, and the system should be composed of several loosely-coupled services. A service here means a unit of a program that serves a business process. "Loosely-coupled" here means that these services should be independent of each other, so that changing one of them should not affect any other services.

SOA is not a specific technology, nor a specific language. It is just a blueprint, or a system design approach. It is an architecture model that aims to enhance the efficiency, agility, and productivity of an enterprise system. The key concepts of SOA are services, high interoperability, and loose coupling.

Several other architectures and technologies, such as RPC, DCOM, and CORBA, have existed for a long time and have attempted to address the client/server communication problems. The difference between SOA and these other approaches is that SOA tries to address the problem from the client side, not from the server side. It tries to decouple the client side from the server side, instead of bundling them, to make the client-side application much easier to develop and maintain.

This is exactly what happened when object-oriented programming (OOP) came into play 20 years ago. Prior to object-oriented programming, most designs were procedure-oriented, meaning the developer had to control the process of an application. Without OOP, in order to finish a block of work, the developer had to be aware of the sequence that the code would follow. This sequence was then hard-coded into the program, and any change to it would result in a code change. With OOP, an object simply supplied certain operations; it was up to the caller of the object to decide the sequence of those operations.
The caller could mash up all of the operations and finish the job in whatever order was needed. There was a paradigm shift from the object side to the caller side. The same paradigm shift is happening today.

Without SOA, every application is a bundled, tightly coupled solution. The client-side application is often compiled and deployed along with the server-side applications, making it impossible to quickly change anything on the server side. DCOM and CORBA were on the right track to ease this problem by making the server-side components reside on remote machines. The client application could directly call a method on a remote object without knowing that this object was actually far away, just like calling a method on a local object. However, the client-side applications remained tightly coupled with these remote objects, and any change to a remote object would still result in recompiling or redeploying the client application.

Now, with SOA, the remote objects are truly treated as remote. To the client applications, they are no longer objects; they are services. The client application is unaware of how a service is implemented, or of the signature that should be used when interacting with it. The client application interacts with these services by exchanging messages. What a client application knows now is only the interface, or protocol, of the service, such as the format of the messages to be passed in to the service and the format of the expected response messages from the service.

Historically, there have been many other architectural design approaches, technologies, and methodologies for integrating existing applications. EAI (Enterprise Application Integration) is just one of them. Often, organizations have many different applications, such as order management systems, accounts receivable systems, and customer relationship management systems.
Each application has been designed and developed by different people using different tools and technologies at different times, and to serve different purposes. However, between these applications there are no standard, common ways to communicate. EAI is the process of linking these applications and others in order to realize financial and operational competitive advantages.

It may seem that SOA is just an extension of EAI. The similarity is that both are designed to connect different pieces of applications in order to build an enterprise-level system for business. But fundamentally, they are quite different. EAI attempts to connect legacy applications without modifying any of them, while SOA is a fresh approach to solving the same problem.

Why SOA?

So why do we need SOA now? The answer is in one word: agility. Business requirements change frequently, as they always have. The IT department has to respond more quickly and cost-effectively to those changes. With a traditional architecture, all components are bundled together with each other. Thus, even a small change to one component requires a large number of other components to be recompiled and redeployed. The quality assurance (QA) effort is also huge for any code change. The processes of gathering requirements, designing, development, QA, and deployment are too long for businesses to wait for, and become actual bottlenecks.

To complicate matters further, some business processes are no longer static. Requirements change on an ad-hoc basis, and a business needs to be able to dynamically define its own processes whenever it wants. A business needs a system that is agile enough for its day-to-day work. This is very hard, if not impossible, with existing traditional infrastructure and systems. This is where SOA comes into play. SOA's basic unit is a service. These services are building blocks that business users can use to define their own processes.
Services are designed and implemented so that they can serve different purposes or processes, not just specific ones. No matter what new processes a business needs to build, or what existing processes it needs to modify, the business users should always be able to use existing service blocks in order to compete with others under current market conditions. Also, if necessary, some new service blocks can be used.

These services are also designed and implemented so that they are loosely coupled and independent of one another. A change to one service does not affect any other service. Also, the deployment of a new service does not affect any existing service. This greatly eases release management and makes agility possible.

For example, a GetBalance service can be designed to retrieve the balance of a loan. When a borrower calls in to query the status of a specific loan, this GetBalance service may be called by the application used by the customer service representatives. When a borrower makes a payment online, this service can also be called to get the balance of the loan, so that the borrower will know the balance of his or her loan after the payment. In the payment posting process, this service can still be used to calculate the accrued interest for a loan, by multiplying the balance by the interest rate. Even further, business users can create a new process that utilizes this service whenever a loan balance needs to be retrieved.

The GetBalance service is developed and deployed independently of all of the above processes. Actually, the service exists without even knowing who its clients will be, or how many there will be. All of the client applications communicate with the service through its interface, and its interface will remain stable once it is in production.
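The reuse described above can be made concrete with a small sketch. Everything here is illustrative: the class name, the hard-coded balance, and the simple balance-times-rate interest calculation are assumptions for the example, not part of any real loan system:

```java
import java.math.BigDecimal;

class LoanServiceSketch {

    // One reusable service operation: look up the current balance of a loan.
    // A real implementation would query a data store; the balance is
    // hard-coded here so the sketch stays self-contained.
    BigDecimal getBalance(String loanId) {
        return new BigDecimal("10000.00");
    }

    // A different process (payment posting) reuses the same operation,
    // computing accrued interest as balance * rate.
    BigDecimal accruedInterest(String loanId, BigDecimal rate) {
        return getBalance(loanId).multiply(rate);
    }

    public static void main(String[] args) {
        LoanServiceSketch svc = new LoanServiceSketch();
        System.out.println(svc.getBalance("L-1"));                            // 10000.00
        System.out.println(svc.accruedInterest("L-1", new BigDecimal("0.05"))); // 500.0000
    }
}
```

The point of the sketch is that every caller, whether a customer service screen, an online payment page, or a batch process, goes through the same getBalance operation, so its implementation can change without touching any of them.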
If we have to change the implementation of this service, for example by fixing a bug or changing an algorithm inside a method of the service, all of the client applications will still work without any change.

When combined with the more mature Business Process Management (BPM) technology, SOA plays an even more important role in an organization's efforts to achieve agility. Business users can create and maintain processes within BPM, and through SOA they can plug a service into any of the processes. The front-end BPM application is loosely coupled to the back-end SOA system. This combination of BPM and SOA gives an organization much greater flexibility in achieving agility.

How do we implement SOA?

Now that we've established why SOA is needed by the business, the question becomes: how do we implement SOA? To implement SOA in an organization, three key elements have to be evaluated: people, process, and technology. Firstly, the people in the organization must be ready to adopt SOA. Secondly, the organization must know the processes that the SOA approach will include, including their definition, scope, and priority. Finally, the organization should choose the right technology to implement it. Note that people and processes take precedence over technology in an SOA implementation, but they are out of the scope of this article. In this article, we will assume that people and processes are all ready for an organization to adopt SOA.

Technically, there are many SOA approaches. To a certain degree, traditional technologies such as RPC, DCOM, and CORBA, or more modern technologies such as IBM WebSphere MQ, Java RMI, and .NET Remoting, could all be categorized as service-oriented, and could be used to implement SOA for an organization. However, all of these technologies have limitations, such as language or platform specificity, complexity of implementation, or the ability to support only binary transports.
The most important shortcoming of these approaches is that the server-side applications are tightly coupled with the client-side applications, which is against the SOA principle. Today, with the emergence of web service technologies, SOA has become a reality. Thanks to the dramatic increase in network bandwidth, and given the maturity of web service standards such as WS-Security and WS-AtomicTransaction, an SOA back end can now be implemented as a real system.

SOA from different users' perspectives

However, as we said earlier, SOA is not a technology, but only a style of architecture, or an approach to building software products. Different people view SOA in different ways. In fact, many companies now have their own definitions for SOA, and many claim they can offer an SOA solution while they are really just trying to sell their products. The key point here is: SOA is not a solution. SOA alone can't solve any problem. It has to be implemented with a specific approach to become a real solution. You can't buy an SOA solution. You may be able to buy certain products to help you realize your own SOA, but this SOA should be customized to your specific environment, for your specific needs.

Even within the same organization, different players think about SOA in quite different ways. What follows are some examples of how different players in an organization judge the success of an SOA initiative using different criteria [Gartner, Twelve Common SOA Mistakes and How to Avoid Them, Publication Date: 26 October 2007, ID Number: G00152446]:

To a programmer, SOA is a form of distributed computing in which the building blocks (services) may come from other applications or be offered to them. SOA increases the scope of a programmer's product and adds to his or her resources, while also closely resembling familiar modular software design principles.

To a software architect, SOA translates to the disappearance of fences between applications.
Architects turn to the design of business functions rather than to self-contained and isolated applications. The software architect becomes interested in collaborating with a business analyst to get a clear picture of the business functionality and scope of the application. SOA turns software architects into integration architects and business experts.

For Chief Information Officers (CIOs), SOA is an investment in the future. Expensive in the short term, its long-term promises are lower costs and greater flexibility in meeting new business requirements. Re-use is the primary anticipated benefit, as a means to reduce the cost and time of new application development.

For business analysts, SOA is the bridge between them and the IT organization. It carries the promise that IT designers will understand them better, because the services in SOA reflect the business functions in business process models.

For CEOs, SOA is expected to help IT become more responsive to business needs and facilitate competitive business change.

Complexities in SOA implementation

Although SOA makes it possible for business parties to achieve agility, SOA itself is technically not simple to implement. In some cases, it even makes software development more complex than ever, because with SOA you are building for unknown problems. On one hand, you have to make sure that the SOA blocks you are building are useful blocks. On the other, you need a framework within which you can assemble those blocks to perform business activities.

The technology issues associated with SOA are more challenging than vendors would like users to believe. Web services technology has turned SOA into an affordable proposition for most large organizations by providing a universally accepted, standard foundation. However, web services play a technology role only for the SOA backplane, which is the software infrastructure that enables SOA-related interoperability and integration.
The following figure shows the technical complexity of SOA. It has been taken from Gartner, Twelve Common SOA Mistakes and How to Avoid Them, Publication Date: 26 October 2007, ID Number: G00152446.

As Gartner says, users must understand the complex world of middleware, and use point-to-point web service connections only for small-scale, experimental SOA projects. If the number of services deployed grows to more than 20 or 30, then use a middleware-based intermediary: the SOA backplane. The SOA backplane could be an Enterprise Service Bus (ESB), a Message-Oriented Middleware (MOM), or an Object Request Broker (ORB). However, we will not cover it in this article. We will build only point-to-point services using WCF.

Web services

There are many approaches to realizing SOA, but the most popular and practical one is using web services.

What is a web service? A web service is a software system designed to support interoperable machine-to-machine interaction over a network. A web service is typically hosted on a remote machine (the provider) and called by a client application (the consumer) over a network. After the provider of a web service publishes the service, the client can discover it and invoke it. The communications between a web service and a client application use XML messages. A web service is hosted within a web server, and HTTP is used as the transport protocol between the server and the client applications. The following diagram shows the interaction of web services:

Web services were invented to solve the interoperability problem between applications. In the early 1990s, along with LAN/WAN/Internet development, it became a big problem to integrate different applications. An application might have been developed using C++ or Java, and run on a Unix box, a Windows PC, or even a mainframe computer. There was no easy way for it to communicate with other applications.
It was the development of XML that made it possible to share data between applications across hardware boundaries and networks, or even over the Internet.

For example, a Windows application might need to display the price of a particular stock. With a web service, this application can make a request to a URL, passing an XML string such as:

<QuoteRequest><GetPrice Symbol='XYZ'/></QuoteRequest>

The requested URL is actually the Internet address of a web service which, upon receiving the above quote request, gives a response such as:

<QuoteResponse><QuotePrice Symbol='XYZ'>51.22</QuotePrice></QuoteResponse>

The Windows application then uses an XML parser to interpret the response package and display the price on the screen.

The reason it is called a web service is that it is designed to be hosted in a web server, such as Microsoft Internet Information Server, and called over the Internet, typically via the HTTP or HTTPS protocols. This ensures that a web service can be called by any application, using any programming language, and under any operating system, as long as there is an active Internet connection and, of course, an open HTTP/HTTPS port, which is true for almost every computer on the Internet.

Each web service has a unique URL and contains various methods. When calling a web service, you have to specify which method you want to call and pass the required parameters to the web service method. Each web service method also gives a response package to tell the caller the execution results.

Besides new applications being developed specifically as web services, legacy applications can also be wrapped up and exposed as web services. So, an IBM mainframe accounting system might be able to provide external customers with a link to check the balance of an account.

Web service WSDL

In order to be called by other applications, each web service has to supply a description of itself, so that other applications will know how to call it.
This description is provided in a language called WSDL. WSDL stands for Web Services Description Language. It is an XML format that defines and describes the functionality of the web service, including the method names, parameter names and types, and the return data types of the web service methods. For a Microsoft ASMX web service, you can get the WSDL by adding ?WSDL to the end of the web service URL, say http://localhost/MyService/MyService.asmx?WSDL.

Web service proxy

A client application calls a web service through a proxy. A web service proxy is a stub class between the web service and the client. It is normally auto-generated by a tool, such as the Visual Studio IDE, according to the WSDL of the web service, and it can be reused by any client application. The proxy contains stub methods mimicking all of the methods of the web service, so that a client application can call each method of the web service through these stub methods. It also contains other necessary information required by the client to call the web service, such as custom exceptions, and custom data and class types. The address of the web service can be embedded within the proxy class, or it can be placed inside a configuration file.

A proxy class is always for a specific language. For each web service, there could be a proxy class for Java clients, a proxy class for C# clients, and yet another proxy class for COBOL clients. To call a web service from a client application, the proper proxy class first has to be added to the client project. Then, with an optional configuration file, the address of the web service can be defined. Within the client application, a web service object can be instantiated, and its methods can be called just like those of any other normal class.

SOAP

There are many standards for web services, and SOAP is one of them. SOAP was originally an acronym for Simple Object Access Protocol, and was designed by Microsoft.
As the protocol became popular with the spread of web services, and as its original meaning was misleading, the acronym was dropped with version 1.2 of the standard; it is now simply the name of the protocol, which is maintained by the W3C. SOAP is a protocol for exchanging XML-based messages over computer networks. It is widely used by web services and has become their de facto protocol. With SOAP, the client application sends a request in XML format to a server application, and the server application sends back a response in XML format. The transport for SOAP is normally HTTP or HTTPS, and the wide acceptance of HTTP is one of the reasons why SOAP is so widely accepted today.
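To make the shape of these exchanges concrete, here is a small Python sketch that builds a SOAP envelope around the hypothetical GetPrice request from the beginning of this article, and parses the corresponding quote response with a standard XML parser. The element names, attribute names, and the urn:quotes namespace are illustrative only, not a real service:

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def build_request_envelope(symbol):
    """Wrap a GetPrice request in a minimal SOAP envelope."""
    env = ET.Element(ET.QName(SOAP_NS, "Envelope"))
    body = ET.SubElement(env, ET.QName(SOAP_NS, "Body"))
    request = ET.SubElement(body, ET.QName("urn:quotes", "GetPrice"))
    request.set("Symbol", symbol)
    return ET.tostring(env, encoding="unicode")

def extract_price(response_xml, symbol):
    """Pull the quoted price for a symbol out of the response package."""
    root = ET.fromstring(response_xml)
    for quote in root.iter("QuotePrice"):
        if quote.get("Symbol") == symbol:
            return float(quote.text)
    raise ValueError("no quote for symbol %r" % symbol)

print(build_request_envelope("XYZ"))
response = "<QuoteResponse><QuotePrice Symbol='XYZ'>51.22</QuotePrice></QuoteResponse>"
print(extract_price(response, "XYZ"))  # 51.22
```

A real client would POST the envelope to the service URL over HTTP, typically with a SOAPAction header; the proxy class described above generates exactly this kind of plumbing so that application code never touches the raw XML.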
Packt
16 Oct 2009
11 min read

Windows Development Using Visual Studio 2008

Visual Studio

Visual Studio is an environment for developing applications in Windows. It has a number of tools, such as an editor, compilers, linkers, a debugger, and a project manager. It also has several Wizards, which are tools designed for rapid development. The Wizard you will first encounter is the Application Wizard, which generates code for an Application Framework. The idea is that we use the Application Wizard to create a skeleton application that is later completed with more application-specific code. There is no real magic about wizards; all they do is generate skeleton code. We could write the code ourselves, but it is a rather tedious job. Moreover, an application can be run in either debug or release mode. In debug mode, additional information is added in order to allow debugging; in release mode, all such information is omitted in order to make the execution as fast as possible. The code of this article is developed with Visual Studio 2008. The Windows 32-bit Application Programming Interface (Win32 API) is a huge C function library. It contains a couple of thousand functions for managing the Windows system; with the help of the Win32 API, it is possible to control the Windows operating system completely. However, as the library is written in C, it can be a rather tedious job to develop a large application with it, even though it is quite possible. That is the main reason for the existence of the Microsoft Foundation Classes (MFC), a large C++ class library containing many classes that encapsulate the functionality of the Win32 API. It also holds some generic classes to handle lists, maps, and arrays. MFC combines the power of the Win32 API with the advantages of C++. On some occasions, however, MFC is not enough. When that happens, we can simply call an appropriate Win32 API function, even though the application is written in C++ and uses MFC. Most of the classes of MFC belong to a class hierarchy with CObject at the top.
On some occasions, we have to let our classes inherit from CObject in order to achieve some special functionality. The base class Figure in the Draw and Tetris applications inherits from CObject in order to read or write objects of unknown classes. The methods UpdateAllViews and OnUpdate communicate by sending pointers to CObject objects. The Windows main class is CWnd. In this environment, there is no function main. Actually, there is a main, but it is embedded in the framework; we do not write our own main function, and there is not one generated by the Application Wizard. Instead, there is the object theApp, which is an instance of the application class, and the application is launched by its constructor. When the first version of MFC was released, there was no standard logical type in C++, so the type BOOL with the values TRUE and FALSE was introduced. Later, the type bool was added to C++. We must use BOOL when dealing with MFC method calls, and we could use bool otherwise; however, in order to keep things simple, we use BOOL everywhere. In the same way, there is an MFC class CString that we must use when calling MFC methods. We could use the C++ built-in class string otherwise, but we use CString everywhere; the two classes are more or less equivalent. There are two types for storing a character, char and wchar_t. In earlier versions of Windows, you were supposed to use char for handling text, and in more modern versions you use wchar_t. In order to make our application independent of the version it is run on, there are two macros, TCHAR and TEXT. TCHAR is the character type that replaces char and wchar_t, and TEXT is intended to encapsulate character and string constants.

TCHAR *pBuffer;
stScore.Format(TEXT("Score: %d."), iScore);

There is also the MFC type BYTE, which holds a one-byte value, and UINT, which is shorthand for unsigned integer. Finally, all generated framework classes have a capital C at the beginning of the name.
The classes we write ourselves do not.

The Document/View model

The applications in this article are based on the Document/View model. Its main idea is to have two classes with different responsibilities. Say we name the application Demo; the Application Wizard will then name the document class CDemoDoc, and the view class will be named CDemoView. The view class has two responsibilities: to accept input from the user by the keyboard or the mouse, and to repaint the client area (partly or completely) at the request of the document class or the system. The document's responsibility is mainly to manage and modify the application data. The model comes in two forms: Single Document Interface (SDI) and Multiple Document Interface (MDI). When the application starts, a document object and a view object are created and connected to each other. In the SDI form, it stays that way; in the MDI form, the users can then add or remove as many views as they want. There is always exactly one document object, but there may be one or more view objects, or none at all. The objects are connected to each other by pointers. The document object has a list of pointers to the associated view objects, and each view object has a field m_pDocument that points at the document object. When a change in the document's data has occurred, the document instructs all of its views to repaint their client areas by calling the method UpdateAllViews, in order to reflect the change.

The message system

Windows is built on messages. When the users press one of the mouse buttons or a key, when they resize a window, or when they select a menu item, a message is generated and sent to the appropriate class. The messages are routed by a message map. The map is generated by the Application Wizard. It can be modified manually or with the Properties window (the Messages or Events button).
The message map is declared in the class' header file as follows:

DECLARE_MESSAGE_MAP()

The message map is implemented in the class' implementation file as follows:

BEGIN_MESSAGE_MAP(this_class, base_class)
  // Message handlers.
END_MESSAGE_MAP()

Each message has its own handler and is connected to a method of a specific form that catches the message. There are different handlers for different types of messages. There are around 200 messages in Windows; the most common ones are listed below. Note that we do not have to catch every message. We just catch those we are interested in; the rest will be handled by the framework.

- WM_CREATE (ON_WM_CREATE/OnCreate): sent when the window has been created, but is not yet shown.
- WM_SIZE (ON_WM_SIZE/OnSize): sent when the window has been resized.
- WM_MOVE (ON_WM_MOVE/OnMove): sent when the window has been moved.
- WM_SETFOCUS (ON_WM_SETFOCUS/OnSetFocus): sent when the window receives input focus.
- WM_KILLFOCUS (ON_WM_KILLFOCUS/OnKillFocus): sent when the window loses input focus.
- WM_VSCROLL (ON_WM_VSCROLL/OnVScroll): sent when the user scrolls the vertical bar.
- WM_HSCROLL (ON_WM_HSCROLL/OnHScroll): sent when the user scrolls the horizontal bar.
- WM_LBUTTONDOWN, WM_MBUTTONDOWN, WM_RBUTTONDOWN (ON_WM_LBUTTONDOWN/OnLButtonDown, ON_WM_MBUTTONDOWN/OnMButtonDown, ON_WM_RBUTTONDOWN/OnRButtonDown): sent when the user presses the left, middle, or right mouse button.
- WM_MOUSEMOVE (ON_WM_MOUSEMOVE/OnMouseMove): sent when the user moves the mouse; there are flags available to decide whether the buttons are pressed.
- WM_LBUTTONUP, WM_MBUTTONUP, WM_RBUTTONUP (ON_WM_LBUTTONUP/OnLButtonUp, ON_WM_MBUTTONUP/OnMButtonUp, ON_WM_RBUTTONUP/OnRButtonUp): sent when the user releases the left, middle, or right button.
- WM_CHAR (ON_WM_CHAR/OnChar): sent when the user inputs a writable character on the keyboard.
- WM_KEYDOWN (ON_WM_KEYDOWN/OnKeyDown): sent when the user presses a key on the keyboard.
- WM_KEYUP (ON_WM_KEYUP/OnKeyUp): sent when the user releases a key on the keyboard.
- WM_PAINT (ON_WM_PAINT/OnPaint): sent when the client area of the window needs to be repainted, partly or completely.
- WM_CLOSE (ON_WM_CLOSE/OnClose): sent when the user clicks the close button in the upper right corner of the window.
- WM_DESTROY (ON_WM_DESTROY/OnDestroy): sent when the window is about to be closed.
- WM_COMMAND (ON_COMMAND(Identifier, Name)/OnName): sent when the user selects a menu item, a toolbar button, or an accelerator key connected to the identifier.
- Command update (ON_UPDATE_COMMAND_UI(Identifier, Name)/OnUpdateName): sent at idle time, when the system is not busy with any other task, in order to enable/disable or check menu items and toolbar buttons.

When a user selects a menu item, a command message is sent to the application. Thanks to MFC, the message can be routed to virtually any class in the application; however, in the applications of this article, all menu messages are routed to the document class. It is possible to connect an accelerator key or a toolbar button to the same message, simply by giving it the same identity number. Moreover, when the system is in idle mode (not busy with any other task), the command update message is sent to the application. This gives us an opportunity to check or disable some of the menu items. For instance, the Save item in the File menu should be grayed (disabled) when the document has not been modified and does not have to be saved. Say that we have a program where the users can paint in one of three colors; the current color should then be marked by a radio box. The message map and its methods can be written manually or be generated with the Resource View (the View menu in Visual Studio), which can help us generate the method prototype, its skeleton definition, and its entry in the message map. The resources form a system of graphical objects that are linked to the application. When the framework is created by the Application Wizard, the standard menu bar and toolbar are included.
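Conceptually, a message map is just a lookup table from message identifiers to handler methods. A language-neutral sketch of that dispatch mechanism follows (Python is used purely for illustration; the constants and class names are invented stand-ins, not MFC's):

```python
# Hypothetical message identifiers, standing in for the WM_* constants.
WM_PAINT, WM_CLOSE = 0x000F, 0x0010

class Window:
    def __init__(self):
        self.log = []
        # The "message map": message id -> bound handler method.
        self.message_map = {WM_PAINT: self.on_paint, WM_CLOSE: self.on_close}

    def on_paint(self):
        self.log.append("repaint client area")

    def on_close(self):
        self.log.append("closing window")

    def dispatch(self, message_id):
        handler = self.message_map.get(message_id)
        if handler is not None:
            handler()
        # Messages without an entry fall through to the framework's
        # default handling, just as uncaught messages do in MFC.

w = Window()
w.dispatch(WM_PAINT)
w.dispatch(WM_CLOSE)
print(w.log)  # ['repaint client area', 'closing window']
```

The BEGIN_MESSAGE_MAP/END_MESSAGE_MAP macros build essentially such a table at compile time, which is why we only list the messages we want to catch.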
We can add our own menus and buttons in the Resource Editor, a graphical tool of Visual Studio.

The coordinate system

In Windows, there are device (physical) and logical coordinates, and there are several logical coordinate mapping systems. The simplest one is the text system; it simply maps one logical unit to the size of a pixel, which means that graphical figures will have different sizes on monitors with different sizes or resolutions. This system is used in the Ring and Tetris applications. The metric system maps one logical unit to a tenth of a millimeter (low metric) or a hundredth of a millimeter (high metric). There is also the British system, which maps one logical unit to a hundredth of an inch (low English) or a thousandth of an inch (high English); the British system is not used in this article. The position of a mouse click is always given in device units. When a part of the client area is invalidated (marked for repainting), the coordinates are also given in device units, and when we create or locate the caret, we use device coordinates. Except for these events, we translate positions into logical units of our choice. We do not have to write the translation routines ourselves; the device context methods LPtoDP (Logical Point to Device Point) and DPtoLP (Device Point to Logical Point) in the next section do the job for us. The setting of the logical unit system is done in OnInitialUpdate and OnPrepareDC in the view classes. In the Ring and Tetris applications, we just ignore the coordinate system and use pixels. In the Draw application, the view class is a subclass of the MFC class CScrollView. It has a method SetScrollSizes that takes the logical coordinate system and the total size of the client area (in logical units). The mapping between the device and logical systems is then done automatically, and the scroll bars are set to appropriate values when the view is created and each time its size is changed.
void SetScrollSizes(int nMapMode, CSize sizeTotal,
                    const CSize& sizePage = sizeDefault,
                    const CSize& sizeLine = sizeDefault);

In the Calc and Word applications, however, we set the mapping between the device and logical systems manually by overriding the OnPrepareDC method. It calls the method SetMapMode, which sets the logical horizontal and vertical units to be equal; this ensures that circles will be kept round. The MFC device context method GetDeviceCaps returns the size of the screen in pixels and millimeters. Those values are used in the calls to SetWindowExt and SetViewportExt, so that the logical unit is one hundredth of a millimeter in those applications as well. The SetWindowOrg method sets the origin of the view's client area in relation to the current positions of the scroll bars, which implies that we can draw figures and text without regard to the current positions of the scroll bars.

int SetMapMode(int iMapMode);
int GetDeviceCaps(int iIndex) const;
CSize SetWindowExt(CSize szScreen);
CSize SetViewportExt(CSize szScreen);
CPoint SetWindowOrg(CPoint ptOrigin);
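The device-to-logical mapping described above is plain proportional arithmetic. Here is a sketch of the conversion for a made-up screen of 1024 pixels across 320 mm (in MFC, the real numbers come from GetDeviceCaps), with the logical unit set to one hundredth of a millimeter:

```python
# Hypothetical screen metrics; in MFC these come from GetDeviceCaps.
SCREEN_WIDTH_PIXELS = 1024
SCREEN_WIDTH_HUNDREDTH_MM = 32000  # 320 mm expressed in hundredths of a millimeter

def device_to_logical(pixels):
    """DPtoLP-style conversion: pixels -> hundredths of a millimeter."""
    return pixels * SCREEN_WIDTH_HUNDREDTH_MM // SCREEN_WIDTH_PIXELS

def logical_to_device(logical):
    """LPtoDP-style conversion: hundredths of a millimeter -> pixels."""
    return logical * SCREEN_WIDTH_PIXELS // SCREEN_WIDTH_HUNDREDTH_MM

print(device_to_logical(512))    # 16000: half the screen is 160 mm
print(logical_to_device(16000))  # 512
```

SetWindowExt and SetViewportExt hand exactly these two extents (logical size and pixel size) to the device context, which then performs the same ratio calculation for every drawing call.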
Packt
16 Oct 2009
4 min read

Arrays and Control Structures in Object-Oriented JavaScript

Arrays

Now that you know the basic primitive data types in JavaScript, it's time to move to a more interesting data structure: the array. To declare a variable that contains an empty array, you use square brackets with nothing between them:

>>> var a = [];
>>> typeof a;
"object"

typeof returns "object", but don't worry about this for the time being; we'll get to that when we take a closer look at objects. To define an array that has three elements, you do this:

>>> var a = [1,2,3];

When you simply type the name of the array in the Firebug console, it prints the contents of the array:

>>> a
[1, 2, 3]

So what is an array exactly? It's simply a list of values. Instead of using one variable to store one value, you can use one array variable to store any number of values as elements of the array. Now the question is how to access each of these stored values. The elements contained in an array are indexed with consecutive numbers starting from zero. The first element has index (or position) 0, the second has index 1, and so on. Here's the three-element array from the previous example:

Index  Value
0      1
1      2
2      3

In order to access an array element, you specify the index of that element inside square brackets. So a[0] gives you the first element of the array a, a[1] gives you the second, and so on.

>>> a[0]
1
>>> a[1]
2

Adding/Updating Array Elements

Using the index, you can also update elements of the array. The next example updates the third element (index 2) and prints the contents of the new array.

>>> a[2] = 'three';
"three"
>>> a
[1, 2, "three"]

You can add more elements by addressing an index that didn't exist before.

>>> a[3] = 'four';
"four"
>>> a
[1, 2, "three", "four"]

If you add a new element but leave a gap in the array, the elements in between are all assigned the undefined value.
Check out this example:

>>> var a = [1,2,3];
>>> a[6] = 'new';
"new"
>>> a
[1, 2, 3, undefined, undefined, undefined, "new"]

Deleting Elements

In order to delete an element, you can use the delete operator. It doesn't actually remove the element, but sets its value to undefined. After the deletion, the length of the array does not change.

>>> var a = [1, 2, 3];
>>> delete a[1];
true
>>> a
[1, undefined, 3]

Arrays of arrays

An array can contain any type of values, including other arrays.

>>> var a = [1, "two", false, null, undefined];
>>> a
[1, "two", false, null, undefined]
>>> a[5] = [1,2,3]
[1, 2, 3]
>>> a
[1, "two", false, null, undefined, [1, 2, 3]]

Let's see an example where you have an array of two elements, each of them being an array.

>>> var a = [[1,2,3],[4,5,6]];
>>> a
[[1, 2, 3], [4, 5, 6]]

The first element of the array is a[0], and it is an array itself.

>>> a[0]
[1, 2, 3]

To access an element in the nested array, you refer to the element index in another set of square brackets.

>>> a[0][0]
1
>>> a[1][2]
6

Note also that you can use the array notation to access individual characters inside a string.

>>> var s = 'one';
>>> s[0]
"o"
>>> s[1]
"n"
>>> s[2]
"e"

There are more ways to have fun with arrays, but let's stop here for now, remembering that:

- An array is a data store
- An array contains indexed elements
- Indexes start from zero and increment by one for each element in the array
- To access array elements we use the index in square brackets
- An array can contain any type of data, including other arrays
Packt
16 Oct 2009
9 min read

User Input Validation in Tapestry 5

Adding Validation to Components

The Start page of the web application Celebrity Collector has a login form that expects the user to enter some values into its two fields. But what if the user didn't enter anything and still clicked on the Log In button? Currently, the application will decide that the credentials are wrong, and the user will be redirected to the Registration page and receive an invitation to register. This logic does make some sense, but it isn't the best line of action, as the button might have been pressed by mistake. These two fields, User Name and Password, are actually mandatory, and if no value was entered into them, it should be considered an error. All we need to do is add a required validator to every field, as seen in the following code:

<tr>
  <td>
    <t:label t:for="userName">Label for the first text box</t:label>:
  </td>
  <td>
    <input type="text" t:id="userName" t:type="TextField"
           t:label="User Name" t:validate="required"/>
  </td>
</tr>
<tr>
  <td>
    <t:label t:for="password">The second label</t:label>:
  </td>
  <td>
    <input type="text" t:id="password" t:label="Password"
           t:type="PasswordField" t:validate="required"/>
  </td>
</tr>

Just one additional attribute for each component; let's see how this works now. Run the application, leave both fields empty, and click on the Log In button. Here is what you should see: Both fields, including their labels, are now clearly marked as an error. We even have some kind of graphical marker for the problematic fields. However, one thing is missing: a clear explanation of what exactly went wrong. To display such a message, one more component needs to be added to the page. Modify the page template as done here:

<t:form t:id="loginForm">
  <t:errors/>
  <table>

The Errors component is very simple, but one important thing to remember is that it should be placed inside the Form component, which in turn surrounds the validated components. Let's run the application again and try to submit an empty form.
Now the result should look like this: This kind of feedback doesn't leave any space for doubt, does it? If you see that the error messages are strongly misplaced to the left, it means that an error in the default.css file that comes with the Tapestry distribution still hasn't been fixed. To override the faulty style, define it in our application's styles.css file like this:

DIV.t-error LI {
  margin-left: 20px;
}

Do not forget to make the stylesheet available to the page. I hope you will agree that the effort we had to make to get user input validated is close to zero. But let's see what Tapestry has done in response:

- Every form component has a ValidationTracker object associated with it. It is provided automatically; we do not need to care about it. Basically, the ValidationTracker is the place where any validation problems, if they happen, are recorded.
- As soon as we use the t:validate attribute for a component in the form, Tapestry assigns one or more validators to that component; their number and type depend on the value of the t:validate attribute (more about this later).
- As soon as a validator decides that the value entered into the component is not valid, it records an error in the ValidationTracker. Again, this happens automatically.
- If there are any errors recorded in the ValidationTracker, Tapestry will redisplay the form, decorating the fields with erroneous input, and their labels, appropriately.
- If there is an Errors component in the form, it will automatically display error messages for all the errors in the ValidationTracker. The error messages for standard validators are provided by Tapestry, while the name of the component mentioned in the message is taken from its label.

A lot of very useful functionality comes with the framework and works for us "out of the box", without any configuration or setup! Tapestry comes with a set of validators that should be sufficient for most needs.
Let's have a more detailed look at how to use them.

Validators

The following validators come with the current distribution of Tapestry 5:

- Required: checks that the value of the validated component is not null or an empty string.
- MinLength: checks that the string (the value of the validated component) is not shorter than the specified length. You will see how to pass the length parameter to this validator shortly.
- MaxLength: same as above, but checks that the string is not too long.
- Min: ensures that the numeric value of the validated component is not less than the specified value, passed to the validator as a parameter.
- Max: as above, but ensures that the value does not exceed the specified limit.
- Regexp: checks that the string value fits the specified pattern.

We can use several validators for one component. Let's see how all this works together. First of all, let's add another component to the Registration page template:

<tr>
  <td><t:label t:for="age"/>:</td>
  <td><input type="text" t:type="textfield" t:id="age"/></td>
</tr>

Also, add the corresponding property, age, of type double, to the Registration page class. It could indeed be an int, but I want to show that the Min and Max validators can work with fractional numbers too. Besides, someone might decide to enter their age as 23.4567. This would be weird, but not against the law. Finally, add an Errors component to the form at the Registration page, so that we can see error messages:

<t:form t:id="registrationForm">
  <t:errors/>
  <table>

Now we can test all the available validators on one page. Let's specify the validation rules first:

- Both User Name and Password are required. Also, they should not be shorter than three characters or longer than eight characters.
- Age is required, and it should not be less than five (change this number if you've got a prodigy in your family) or more than 120 (as that would probably be a mistake).
- Email address is not required, but if entered, it should match a common pattern.
Here are the changes to the Registration page template that will implement the specified validation rules:

<td>
  <input type="text" t:type="textfield" t:id="userName"
         t:validate="required,minlength=3,maxlength=8"/>
</td>
...
<td>
  <input type="text" t:type="passwordfield" t:id="password"
         t:validate="required,minlength=3,maxlength=8"/>
</td>
...
<td>
  <input type="text" t:type="textfield" t:id="age"
         t:validate="required,min=5,max=120"/>
</td>
...
<input type="text" t:type="textfield" t:id="email" t:validate="regexp"/>

As you see, it is very easy to pass a parameter to a validator, like min=5 or maxlength=8. But where do we specify a pattern for the Regexp validator? The answer is: in the message catalog. Let's add the following line to the app.properties file:

email-regexp=^([a-zA-Z0-9_.-])+@(([a-zA-Z0-9-])+\.)+([a-zA-Z0-9]{2,4})+$

This will serve as the regular expression for all Regexp validators applied to components with the ID email throughout the application. Run the application, go to the Registration page, and try to submit the empty form. Here is what you should see: Looks all right, but the message for the age could be more sensible, something like You are too young! You should be at least 5 years old. We'll deal with this later. For now, enter a very long username, only two characters for the password, and an age that is more than the upper limit, and see how the messages change: Again, looks good, except for the message about age. Next, enter some valid values for User Name, Password, and Age. Then click on the check box to subscribe to the newsletter. In the text box for email, enter some invalid value and click on Submit. Here is the result: Yes! The validation worked properly, but the error message is absolutely unacceptable. Let's deal with this, but first make sure that any valid email address will pass the validation.

Providing Custom Error Messages

We can provide custom messages for validators in the application's (or page's) message catalog.
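The email pattern above can be exercised outside Tapestry as well. Here is a quick Python check of an equivalent expression (with the dot before the domain suffix escaped, as a regular expression requires); this is illustrative only, since Tapestry applies the pattern through its Regexp validator, not like this:

```python
import re

# The same pattern as the email-regexp entry in app.properties.
EMAIL_RE = re.compile(r"^([a-zA-Z0-9_.-])+@(([a-zA-Z0-9-])+\.)+([a-zA-Z0-9]{2,4})+$")

def is_valid_email(value):
    """True if the value matches the email pattern used by the Regexp validator."""
    return EMAIL_RE.match(value) is not None

print(is_valid_email("alice@example.com"))  # True
print(is_valid_email("not-an-email"))       # False
```

Testing the pattern in isolation like this is a convenient way to tune it before wiring it into the message catalog.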
For such messages we use keys that are made up of the validated component's ID, the name of the validator, and the "message" suffix. Here is an example of what we could add to the app.properties file to change the error messages for the Min and Max validators of the Age component, as well as the message used for the email validation:

email-regexp-message=Email address is not valid.
age-min-message=You are too young! You should be at least 5 years old.
age-max-message=People do not live that long!

Better still, instead of hard-coding the required minimal age into the message, we could insert into the message the parameter that was passed to the Min validator (following the rules of java.text.Format), like this:

age-min-message=You are too young! You should be at least %s years old.

If you run the application now and submit an invalid value for age, the error message will be much better. You might want to make sure that the other error messages have changed too. We can now successfully validate values entered into separate fields, but what if the validity of the input depends on how two or more different values relate to each other? For example, on the Registration page we want the two versions of the password to be the same, and if they are not, this should be considered invalid input and reported appropriately. Before dealing with this problem, however, we need to look more thoroughly at the different events generated by the Form component.
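The validator/tracker interaction described in this article can be mimicked in a few lines. The following toy Python model is not Tapestry's API (the class and function names are invented); it only shows how per-field validators record errors into a shared tracker and how a parameterized message is formatted:

```python
class ValidationTracker:
    """Collects error messages, like Tapestry's per-form tracker."""
    def __init__(self):
        self.errors = []

    def record(self, message):
        self.errors.append(message)

def validate_required(label, value, tracker):
    if value is None or value == "":
        tracker.record("You must provide a value for %s." % label)

def validate_min(label, value, minimum, tracker,
                 message="%s must be at least %s."):
    if value is not None and value < minimum:
        # The %s placeholders mirror the java.text.Format-style parameters.
        tracker.record(message % (label, minimum))

tracker = ValidationTracker()
validate_required("User Name", "", tracker)
validate_min("Age", 3, 5, tracker,
             message="You are too young! %s should be at least %s years old.")
print(tracker.errors)
# ['You must provide a value for User Name.', 'You are too young! Age should be at least 5 years old.']
```

In Tapestry itself, all of this bookkeeping is done for you; the only part you customize is the message catalog, as shown above.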
Packt
16 Oct 2009
4 min read

JBI Binding Components in NetBeans IDE 6

Binding Components

Service Engines are pluggable components that connect to the Normalized Message Router (NMR) to perform business logic for clients. Binding components are also standard JSR 208 components that plug into the NMR; they provide transport independence to the NMR and to the Service Engines. The role of binding components is to isolate communication protocols from the JBI container, so that Service Engines are completely decoupled from the communication infrastructure. For example, the BPEL Service Engine can receive requests to initiate a BPEL process while reading files on the local file system. It can receive these requests from SOAP messages, from JMS messages, or from any of the other binding components installed into the JBI container. A binding component is a JSR 208 component that provides protocol-independent transport services to other JBI components. The following figure shows how binding components fit into the JBI container architecture: In this figure, we can see that the role of a BC is to send and receive messages, both internally and externally to the Normalized Message Router, using protocols specific to the binding component. We can also see that any number of binding components can be installed into the JBI container. The figure also shows that, like Service Engines (SE), binding components do not communicate directly with other binding components or with Service Engines; all communication between individual binding components, and between binding components and Service Engines, is performed by sending standard messages through the Normalized Message Router.

NetBeans Support for Binding Components

The following table lists which binding components are installed into the JBI container with NetBeans 5.5 and NetBeans 6.0. As is the case with Service Engines, binding components can be managed within the NetBeans IDE.
The list of binding components installed into the JBI container can be displayed by expanding the Servers | Sun Java System Application Server 9 | JBI | Binding Components node within the Services explorer. The lifecycle of binding components can be managed by right-clicking on a binding component and selecting a lifecycle operation: Start, Stop, Shutdown, or Uninstall. The properties of an individual binding component can also be obtained by selecting the Properties option from the context menu, as shown in the following figure. Now that we've discussed what binding components are, and how they communicate both internally and externally to the Normalized Message Router, let's take a closer look at some of the more common binding components and how they are accessed and managed from within the NetBeans IDE.

File Binding Component

The file binding component provides a communications mechanism for JBI components to interact with the file system. It can act both as a provider, by checking for new files to process, and as a consumer, by outputting files for other processes or components. The figure above shows the file binding component acting as a provider of messages. In this scenario, a message has been sent to the JBI container and picked up by a protocol-specific binding component (for example, a SOAP message has been received). A JBI process then occurs within the JBI container, which may include routing the message between many different binding components and Service Engines, depending upon the process. Finally, after the JBI process has completed, the results of the process are sent to the file binding component, which writes out the result to a file. The figure above shows the file binding component acting as a consumer of messages. In this situation, the file binding component periodically polls the file system, looking for files with a specified filename pattern in a specified directory.
When the binding component finds a file that matches its criteria, it reads in the file and starts the JBI process, which may again cause the input message to be routed between many different binding components and Service Engines. Finally, in this example, the results of the JBI process are output via a binding component. Of course, it is possible for a binding component to act as both a provider and a consumer within the same JBI process. In this case, the file binding component would initially be responsible for reading an input message from the file system; after any JBI processing has occurred, it would then write out the results of the process to a file. Within the NetBeans Enterprise Pack, the entire set of properties for the file binding component can be edited in the Properties window. The properties for the binding component are displayed when either the input or output messages are selected from the WSDL in a composite application, as shown in the following figure.
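The consumer-side behaviour just described (poll a directory for files matching a pattern, hand each one to a process) is easy to picture in code. The following is a simplified Python stand-in for that polling loop, not the actual file binding component implementation:

```python
import fnmatch
import os
import tempfile

def poll_directory(directory, pattern, start_process):
    """One polling pass: feed every matching file to a JBI-style process."""
    handled = []
    for name in sorted(os.listdir(directory)):
        if fnmatch.fnmatch(name, pattern):
            path = os.path.join(directory, name)
            with open(path) as f:
                start_process(f.read())  # hand the message to the process
            os.remove(path)  # consume the file so it is not processed twice
            handled.append(name)
    return handled

# Example usage with a temporary directory standing in for the polled folder:
with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "order-1.xml"), "w") as f:
        f.write("<order id='1'/>")
    messages = []
    print(poll_directory(d, "order-*.xml", messages.append))  # ['order-1.xml']
    print(messages)  # ["<order id='1'/>"]
```

The real binding component adds configuration for the polling interval, archiving of consumed files, and normalization of the file contents into an NMR message, but the pattern-matching poll at its core is essentially this.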