
How-To Tutorials - Programming


CRM Deployment Options

Packt
15 Sep 2010
22 min read
(For more resources on SugarCRM, see here.)

Before you investigate that section of this article, it is important to review some of the issues that should be accounted for when deploying your own server. Some of the important items from that list include the following:

- Selecting an operating system: Weigh the value of using Windows Server versus Linux (or another operating system).
- Hardware configuration of the server: It must be capable of handling your expected immediate load, plus allow for at least 18 to 24 months of expected growth.
- Identifying existing infrastructure that may be repurposed for the task: Your deployment does not require new hardware; it requires hardware capable of handling the current load with some room for growth.
- Establishing a solid data backup and security plan: The data in your CRM system is arguably the most valuable information your business owns. Treat it as such, and plan for the worst. You must be able to recover quickly from anything, including theft of hardware or a natural disaster.
- Internet bandwidth: The current Internet connection at your office may be sufficient for today's e-mail and web browsing needs, but may not be adequate for responsive use of your CRM system, especially if you are using an On-Demand provider.

As mentioned previously, this article does not specifically cover the process of installing your CRM system; however, it does discuss the aforementioned issues in detail. This information should help you make well-informed decisions for your deployment.

Deployment alternatives

One of the advantages of using SugarCRM as your CRM solution is its flexibility in a number of areas. For now, the flexibility we are most concerned with is its ability to be installed in a variety of environments. SugarCRM is rather unique in its breadth of supported deployment options. The four basic options from which you must choose are as follows:

On-Demand: In this situation, you pay a fee for the right to use the SugarCRM application. Fees are usually based on a per-month, per-user model, but can vary greatly depending on the provider. The advantages of this deployment are the quick time to deployment and the elimination of responsibilities such as backing up the server. Some of the reasons why businesses choose not to use this model include the following:

- The data resides on someone else's server
- There is limited access to the data beyond the application
- There are limits on some of the customizations that can be applied to the system

The last point can become a real issue if you intend to integrate your CRM system extensively.

Collocation: The main difference between this option and the previous one is that you supply the server that will host your installation, as opposed to paying a fee for the right to use a preconfigured one. As you are responsible for providing the server, you are also responsible for making the appropriate hardware purchases that will provide redundancy and high availability. You must still pay a monthly fee while also assuming the responsibility of configuring and maintaining the hardware, although many collocation providers offer maintenance services at an additional cost that may fit your budget. In addition, you are also responsible for installing and maintaining your SugarCRM installation, although this level of control over your server and installation also eliminates the various limitations inherent in On-Demand environments.
Collocation is also helpful for addressing bandwidth and disaster recovery needs. Collocation service providers can supply high-bandwidth connections that would likely be cost-prohibitive if you tried to get the same at your office. Their facilities are also equipped with solid security and backup measures that protect your server and its data, including systems for handling emergencies such as fire and power outages.

On-Premise: This option should be fairly self-explanatory: you buy a server (or use the one you already have), then install the required software on it. In this model, you take on the responsibility for maintaining the hardware, installing the appropriate software, and addressing protective measures such as backups, security, and fire prevention. The result, however, is a system over which you have complete control. Since this type of endeavor is sometimes beyond the financial means of many small businesses, there is a temptation to cut costs by eliminating some of the components, especially lesser-valued items such as security. Remember, the data in your CRM system is the lifeblood of your business. It is arguably the single most important asset your business owns. Treat it as such.

Shared Server: Not surprisingly, this is the least expensive, and also the lowest capacity, option. Your SugarCRM instance is hosted on a server whose capacity you rent on a monthly basis and share with other users. Those services include not only server space, but also the backup and other security services discussed previously. Due to the proliferation of hosting providers, costs for shared hosting services are within the reach of nearly all businesses. However, the vast majority of these service providers are in the business of providing generic hosting platforms that customers tend to use for hosting websites, as opposed to applications such as SugarCRM. While many are able to provide services that are conducive to a positive experience with SugarCRM, it is not uncommon to run into providers whose services hinder its installation, performance, or overall usage. It is also not unheard of for a hosting provider to make system changes that in turn cause SugarCRM to stop functioning. Again, their priority is not that you have a working SugarCRM system, but that their services compete effectively in their market. This also means that the features they offer, such as direct access to the data, will vary by provider. For many businesses, the low cost is reason enough to overlook the potential risks and limitations, but you must nevertheless be careful in your selection of a service provider if you choose this option.

A deployment option that continues to grow rapidly in popularity is the use of cloud computing services, such as Amazon Elastic Compute Cloud (Amazon EC2) (http://aws.amazon.com/ec2/). It offers many of the same benefits as the collocation option; however, unlike collocation, you do not need to provide a physical server. You rent a virtual server at costs as low as approximately US$100 per month (at the time of this writing), allowing you to deploy a very comprehensive solution. In addition to reduced costs, cloud computing also gives you the flexibility to scale effortlessly, adding servers with a few mouse clicks.
This process allows you to double (triple, quadruple, and so on) your processing capacity in minutes, a process that in past years may have taken days to complete. The popularity of such services is such that you can even find a virtual server with SugarCRM already installed on it, eliminating the potentially complex setup work and bringing it closer to the turnkey model of the On-Demand option.

Let us look at a comparison chart of the various options:

                      On-Demand      Collocation   On-Premise        Shared Server
Initial cost          Low            Medium        Medium            Low
Ongoing cost          Medium/High    Medium        Low               Low
Initial setup         Easy           Complex       Complex           Somewhat complex
Your ongoing effort   Low            Medium        Medium            Medium/High
Custom fit            Limited        Excellent     Excellent         Varies
Data security         Excellent      Excellent     Self-supplied     Excellent
Performance           Excellent      Excellent     Likely excellent  Varies

A couple of important points can be extrapolated from the previous chart. First, note that your initial expenses will be higher with either the Collocation or On-Premise option. This makes perfect sense, as both require that you provide a physical server, which you may have to purchase, unless you already have a suitable server available. Even then, you may still need to buy a server operating system. On a related note, you may also incur additional costs to develop and implement a data backup and security solution. It would also be a wise investment to pay an experienced SugarCRM consultant a nominal fee to install your system.

The latter point touches on an unproductive scenario in which too many get trapped. Most entrepreneurs tend to have a "hands-on" mentality, meaning they like to do things for themselves as much as possible. Sometimes this is out of necessity or a shortage of financial resources, while on other occasions different motivators explain the behavior. The danger, however, is that installing a piece of software is sometimes a task best left to someone with more experience. It is not uncommon for some people to spend 3 to 4 hours (or days) attempting to complete the installation, whereas an experienced SugarCRM consultant should be able to do the same in well under an hour. You should weigh the value of your time against the costs that would be incurred by using a consultant. Unless your time is worth less than US$10/hour, you are likely to find that hiring a consultant to perform the task is a far more cost-effective approach.

A second point of importance that should be highlighted on the chart relates to post-deployment costs. An On-Demand deployment requires little upfront investment, but in the long run it may prove to be quite expensive. Your subscription is based on a per-user, per-month fee that usually starts at around US$30/month. Thus, a five-user implementation would represent US$150/month, or US$1,800/year. Furthermore, some On-Demand service providers require a commitment of 12 months or longer. Such arrangements represent not only a financial commitment, but also a commitment to the CRM solution, underscoring the importance of diligently evaluating whether or not the feature set it offers will meet your business needs. The financial part may be a trivial matter for businesses with a small user count, as the subscription removes the burden of maintaining a server, among other responsibilities. However, if your business has more than 10 users, you should consider an On-Premise deployment, not only for financial reasons, but also to meet the wide variety of customization needs that may not be feasible in an On-Demand environment.
If you use SugarCRM as an On-Demand service, you will in all likelihood not have a whole server dedicated to running SugarCRM for your business. Instead, your service provider will most likely be using a shared server facility, a controlled portion of the resources of a physical server, to support your business's CRM deployment. If you are planning to use your CRM system to house a lot of shared documents for your business, you should check with your On-Demand service provider about any disk space limitations applicable to your subscription; they are often surprisingly low.

From reviewing the table, you should also notice that the Shared Server approach is the least expensive option for deploying a CRM system. It is cost-effective, but your customization capabilities and your ability to handle larger loads of data and users will vary greatly depending upon the service provider. You may quickly run into disk space issues if your service provider offers a limited amount of disk space and you intend to use a lot of files, documents, and so on in conjunction with your CRM data. Some service providers also place restrictions on the amount of bandwidth you are allowed to use. Larger data and user loads are likely to cause that limit to be reached rather quickly. Due to the wide range of hosting providers and options available today, it is important to evaluate the various features they offer with great scrutiny. Some of the important factors include disk space limits, shell access, remote MySQL connections, and bandwidth restrictions. In general, even in the best of scenarios, this option should not be used for more than 10 users.

For most businesses with more than 10 employees, the choice between the deployment options is a trade-off between cost, complexity, and effort. There is also an issue of comfort. Although On-Demand providers take a number of precautions to ensure system availability and safeguard data and servers (such as the use of surveillance equipment, secure server areas, and so on), the thought of relying on another party for this safety is sometimes beyond the comfort level of some businesses.

Regardless of which option you select for your business, it is beneficial to have at least a basic understanding of the related technology features, including bandwidth and connection speeds, performance and scalability, and backup and security procedures. This information will make you a knowledgeable consumer and, in turn, assist you in your decision-making process. In the case of On-Premise and Collocated deployments, there are additional topics of importance that must be discussed and evaluated. Although we touched lightly on some of these topics earlier in this section, it is important that we examine them in more detail. The section that follows will lead us through this examination.

Choosing a server operating system

Server and network operating systems have been around for a long time. Many years ago, Novell Netware, Banyan Vines, and numerous variants of UNIX (most notably, Solaris from Sun Microsystems) were the major players in this marketplace, but today the largest players in network/server operating systems for small to medium-sized businesses are Microsoft Windows and Linux. Choosing which is best for your needs can be a confusing and sometimes daunting process. Microsoft Windows, for example, is offered in a variety of flavors, including Web Edition, Standard Edition, Datacenter Edition, and others.
In addition to pricing, the various editions also vary in their ability to scale, that is, to handle more demanding computing needs. For example, 64-bit versions of the Web and Standard Editions are limited to 32 GB of RAM and do not support clustering, a technology used to provide redundancy and increase scalability. Datacenter Edition, on the other hand, not only supports clustering, but is also capable of supporting up to 2 TB of RAM. As you can see, your decision will vary greatly depending on your anticipated needs. If you expect a large volume of data (over a million accounts or contacts), you will want to ensure that you use a version of Windows that supports at least 8 GB of RAM. Data needs in the area of 100,000 records (accounts or contacts) or fewer are likely to function well with 4 GB of RAM, but remember the earlier statement that the hardware you pick today should be capable of handling today's load, plus that expected within the next 18 to 24 months.

If you have never used Linux before, but are considering it for your CRM server, you should prepare yourself for a vastly different selection process. First and foremost, Linux is available in many different flavors (commonly referred to as distributions or "distros"). The various distributions differ mostly in the structure of the file system (that is, the location of configuration files and tools) and the included software. Strong opinions on which distribution is best are equally plentiful. We will, however, discuss the leading distributions, namely Red Hat, Fedora Core, CentOS, and Ubuntu.

Much like Windows, different iterations of these distributions will support different hardware configurations. However, unlike Windows, the costs incurred in using a more scalable version versus a less capable version do not change, because the operating system is free. Red Hat is the exception to this rule. Red Hat Enterprise Linux Server is a subscription-based product ranging in price from US$349 to US$2,499 per year, per server. The subscription includes software updates and support. In contrast, costs for Windows Server 2008 begin at approximately US$470 per server for the Web Edition and can be as high as US$3,000 per processor for the Datacenter Edition. You are also likely to incur additional costs for Client Access Licenses (CALs); a CAL allows a computer on your network to legally connect to a server and normally costs approximately US$30 each. These costs are for licenses only and do not include support services.

A business with very limited financial resources may conclude that both Red Hat Enterprise Linux and Windows Server are beyond its means. Even if you are not in that situation, it is comforting to know that Fedora Core and CentOS are both widely used zero-cost alternatives, and both happen to be derivatives of Red Hat. CentOS in particular is intended to be a clone of Red Hat Enterprise Linux and has a large following. If you have not heard of it before and are concerned about its viability, the fact that SugarCRM (the company) uses 64-bit versions of CentOS for its On-Demand datacenter (at the time of this writing) should help ease your fears. The final distribution mentioned was Ubuntu. Ubuntu is quite different, in that it has a long history of focusing on growing the adoption of Linux on the desktop. It has gained popularity primarily due to its ease of use.
Server installations of Ubuntu are definitely possible, and many users today are successfully using it for their SugarCRM implementations. In general, choosing one distribution over another usually comes down to personal preference and comfort.

The costs involved in purchasing a server operating system have just been explained, and the choice is yours. Clearly, were you to base your decision solely on licensing costs, Linux would be the more cost-effective choice. In practice, either operating system is more than capable of fulfilling your CRM needs, and each requires a certain amount of investment to maintain; these costs are not readily visible up front. As a result, your selection is usually influenced more directly by ongoing support costs than by licensing costs alone. The level and type of technical resources you have access to will no doubt have some influence on your choice. You will need someone to be on call in case of emergencies, perhaps to come in and set things up for you in the beginning, and possibly to perform backups each week. There are many independent network and server support people who make their living performing this type of work, and it should not be difficult to locate a resource. Prices will vary, but working with one of these individuals might cost you US$5,000 to US$10,000 per year and, depending on your circumstances, could be a perfect fit. Of course, the On-Demand option at US$30/per user/per month remains an option as well.

Specifying your server hardware

At home, most of us use our computers as isolated workstations for work or play. A few of us may also connect several computers at home into a network, much in the same manner they are connected at the office. When several computers are connected in a network, there is often value in attaching one or more special computers to the network that are designed to act as a shared resource for all users. These special, typically more powerful computers are called servers.

Most of us are familiar enough with a home or office Personal Computer (PC). The usual product is a so-called three-box configuration: the system unit, the display, and the keyboard. While a computer server can look quite similar to a PC, it has a number of fundamental hardware differences that affect cost. Note that there are plenty of low-end servers from the likes of HP and Dell that use non-parity memory, Parallel Advanced Technology Attachment (PATA) or Serial Advanced Technology Attachment (SATA) disk drives, and a single power supply. They can be used fairly effectively for small SugarCRM installations (those not exceeding 10 users). You should, however, always aim for the highest capacity and reliability that you can afford. Remember, the server must meet your needs today and in the future, for a minimum of 18 to 24 months. A lower-cost solution may seem attractive today, but rarely pays off in the long run. It is usually best to invest a little more in higher-quality components up front, rather than pay for more costly problems down the road.

Error-Correcting Code (ECC) memory, for example, has the ability to detect and correct the most common forms of errors produced by semiconductor memory when it starts to fail. Typical PC memory, by contrast, lacks the parity feature needed to detect such errors and certainly cannot correct them.
The only time a PC checks whether its memory actually works is at system boot time, when you turn it on. As you intend to leave your server on all the time, that approach clearly will not work. Similarly, power to a server is typically supplied through two physical power supply units connected in a redundant manner. Should one unit fail (not uncommon), the fault is reported, but the system continues to function using the working power supply unit. This eliminates the immediate impact and allows you to replace the failed power supply during a scheduled maintenance window, as opposed to at the time of failure.

A Redundant Array of Inexpensive Disks (RAID) hard drive configuration is designed to provide performance and redundancy. These configurations require multiple hard drives that work in tandem, splitting data into smaller chunks that in turn reduce access times. There are varying levels of RAID configurations, numbered 0 through 6 (skipping 2). The number of drives required to implement a RAID system varies depending on the level that you select, ranging from two drives (RAID 0) to at least four (RAID 6). Redundancy levels are also directly tied to the level that you choose. RAID 0 should never be used in a server environment, as it does not offer any redundancy. You should look at using RAID 5 or 6, as they offer the best balance of performance and redundancy. A hard drive failure in such environments does not represent a loss of data; RAID 5 and 6 systems are specifically designed to tolerate such failures without losing data.

The Central Processing Unit (CPU) or processor used in a server is typically a multi-core processor (at least two cores). Depending on your anticipated load, you might be better off with a quad-core chip. The general rule of thumb is that the more cores, the better the performance, as computing work is divided amongst the various cores.

System memory size is perhaps the biggest difference between a server and a PC. Most PC users (other than those doing graphics design and other demanding tasks) work happily with 2 to 4 GB of memory in their PC; 2 GB is the norm. In contrast, few servers that perform well use less than 2 GB of memory. Most use a minimum of 4 GB. With more memory, the work being done for many users can stay in memory simultaneously as it is performed, rather than being sent temporarily out to the hard disk if memory runs out of space. As system memory is at least 100 times faster than the hard disk, anything that involves the hard disk slows down the system substantially. It is also worth noting that database applications, such as SugarCRM, perform better when more memory is made available to the database server.

Some of these differences are summarized in the table that follows:

                       PC                               Server
Form factor            Desktop, Tower                   Tower, Rack mount
Memory type            Non-parity                       ECC (Error Checking and Correcting)
Memory size            2 GB to 4 GB                     4 GB to 32 GB
Hard disk technology   PATA / SATA                      SATA / RAID
Hard disk speed        5,400 RPM to 10,000 RPM          10,000 RPM to 15,000 RPM
LAN interface          Ethernet 10/100, Wireless        One or more Giga-Ethernet
Power supply           Single 250 W to 450 W            Redundant 500 W
Processor (CPU)        Single dual-core chip            One or more quad-core chips
Video                  Often high performance (gaming)  Low performance / generic
Users                  One, local                       Many, remote

Servers are used for many tasks. A network may have a specific server to act as a database server, for example.
That type of server would be optimized for fast and reliable disk storage and high memory capacity. Another server might be an application server, one on which applications are run, with the results being communicated to the users on the PCs using those applications. An application server is typically optimized with lots of memory and CPU power, to get through all that application processing quickly. An example of an application server is a SugarCRM server: the SugarCRM application actually runs on the server, and multiple user PCs are just running web browsers that display web pages. These web pages communicate to the users what is going on in their particular session.

For a business with 10 users or less, a SugarCRM server used as a combined database and application server should look something like the following:

- 500 GB of disk space on a RAID 5 configuration
- 4 GB of ECC memory
- Two dual-core CPUs or one quad-core CPU
- A single Gig-Ethernet connection to the network
- An Uninterruptible Power Supply (UPS)
- CentOS Linux or Windows Server Standard Edition (32-bit)

For a business with perhaps 25 users, the following would be better suited:

- 1 TB of disk space on a RAID 5 configuration
- 8 GB of ECC memory
- Two quad-core CPUs
- A single Gig-Ethernet connection to the network
- An Uninterruptible Power Supply (UPS)
- CentOS Linux or Windows Server Standard Edition (64-bit)

For a business with 100 users, the server specifications would resemble the following:

- 2 TB of disk space on a RAID 6 configuration
- 32 GB of ECC memory
- Four quad-core CPUs
- Dual Gig-Ethernet connections to the network
- An Uninterruptible Power Supply (UPS)
- CentOS Linux (64-bit) or Windows Server Enterprise Edition (32 or 64-bit)
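As a quick, illustrative aid (not part of the original text), the following minimal Java sketch reads the CPU core count, the memory ceiling visible to the JVM, and the disk capacity of a candidate server, so the numbers can be compared against the sizing guidelines above. It uses only standard java.lang and java.io APIs; run it directly on the machine being evaluated.

import java.io.File;

public class ServerCapacityCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // Logical processors visible to the JVM (hardware threads on most systems)
        System.out.println("CPU cores: " + rt.availableProcessors());
        // Ceiling on memory this JVM may use; raise -Xmx to probe physical RAM further
        System.out.println("Max JVM memory (MB): " + rt.maxMemory() / (1024 * 1024));
        // Total and usable space for each filesystem root
        long gb = 1024L * 1024 * 1024;
        for (File root : File.listRoots()) {
            System.out.println(root + " total (GB): " + root.getTotalSpace() / gb
                    + ", free (GB): " + root.getUsableSpace() / gb);
        }
    }
}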


Introduction to Oracle Service Bus & Oracle Service Registry

Packt
15 Sep 2010
6 min read
(For more resources on BPEL, SOA, and Oracle, see here.)

If we want our SOA architecture to be highly flexible and agile, we have to ensure loose coupling between different components. As service interfaces and endpoint addresses change over time, we have to remove all point-to-point connections between service providers and service consumers by introducing an intermediate layer: the Enterprise Service Bus (ESB). The ESB is a key component of every mature SOA architecture and provides several important functionalities, such as message routing, transformation between message types and protocols, the use of adapters, and so on. Another important requirement for providing flexibility is service reuse. This can be achieved through the use of a UDDI (Universal Description, Discovery and Integration) compliant service registry, which enables us to publish and discover services. Using a service registry, we can also implement dynamic endpoint lookup, so that service consumers retrieve the actual service endpoint address from the registry at runtime.

Oracle Service Bus architecture and features

An ESB provides the means to manage connections, control the communication between services, supervise the services and their SLAs (Service Level Agreements), and much more. The importance of the ESB often becomes visible after the first development iteration of a SOA composite application has taken place. For example, when a service requires a change in its interface or payload, the ESB can provide the transformation capabilities to mask the differences from existing service consumers. The ESB can also mask the location of services, making it easy to migrate services to different servers. There are plenty of other scenarios where the ESB is important.

In this article, we will look at the Oracle Service Bus (OSB). OSB presents a communication backbone for the transport and routing of messages across an enterprise. It is designed for high throughput and reliable message delivery to a variety of service providers and consumers. It supports XML as a native data type; however, other data types are also supported. As an intermediary, it processes incoming service request messages, executes the routing logic, and transforms these messages if needed. It can also translate between different transport protocols (HTTP, JMS, File, FTP, and so on). Service response messages follow the inverse path. The message processing is specified in the message flow definition of a proxy service.

OSB provides some functionalities that are similar to those of the Mediator component, such as routing, validation, filtering, and transformation. The major difference is that the Mediator is a mediation component meant to work within a SOA Composite and is deployed within a SOA composite application, whereas the OSB is a standalone service bus used for inter-application communication. In addition to providing the communication backbone for all SOA (and non-SOA) applications, the OSB's mission is to shield application developers from changes in the service endpoints and to prevent those systems from being overloaded with requests from upstream applications. The following figure shows the functional architecture of Oracle Service Bus (OSB).
We can see that OSB can be categorized into four functional layers:

- Messaging layer: Provides support to reliably connect any service by leveraging standards such as HTTP/SOAP, WS-I, WS-Security, WS-Policy, WS-Addressing, SOAP v1.1, SOAP v1.2, EJB, RMI, and so on. It even supports the creation of custom transports using the Custom Transport Software Development Kit (SDK).
- Security layer: Provides security at all levels: transport security (SSL), message security (WS-Policy, WS-Security, and so on), console security (SSO and role-based access), and policy (leverages WS-Security and WS-Policy).
- Composition layer: Provides a configuration-driven composition environment. We can use either the Eclipse plug-in environment or the web-based Oracle Service Bus Console. We can model message flows that contain content-based routing, message validation, and exception handling. We can also use message transformations (XSLT, XQuery), service callouts (POJO, Web Services), and a test browser. Automatic synchronization with UDDI registries is also supported.
- Management layer: Provides a unified dashboard for service monitoring and management. We can define and monitor Service Level Agreements (SLAs), set alerts on operational metrics and message pipelines, and view reports.

Proxy services and business services

OSB uses a specific terminology of proxy and business services. The objective of OSB is to route messages between business services and service consumers through proxy services. Proxy services are generic intermediary web services that implement the mediation logic and are hosted locally on OSB. Proxy services route messages to business services and are exposed to service consumers. A proxy service is configured by specifying its interface, its type of transport, and its associated message processing logic. Message flow definitions are used to define the proxy service's message handling capabilities. Business services describe the enterprise services that exchange messages with business processes and which we want to virtualize using the OSB. The definition of a business service is similar to that of a proxy service; however, a business service does not have a message flow definition.

Message flow modeling

Message flows are used to define the message processing logic of proxy services. Message flow modeling includes defining a sequence of activities, where activities are individual actions such as transformations, reporting, publishing, and exception management. Message flow modeling can be performed using a visual development environment (Eclipse or the Oracle Service Bus Console). Message flow definitions are built from components such as pipelines, branch nodes, and route nodes, as shown in the following figure:

A pipeline is a sequence of stages representing a one-way processing path. It is used to specify the message flow for service requests and responses. If a service defines more operations, a pipeline might optionally branch into operational pipelines. There are three types of pipelines:

- Request pipelines are used to specify the request path of the message flow
- Response pipelines are used to specify the response path of a message flow
- Error pipelines are used as error handlers

Request and response pipelines are paired together as pipeline pairs. Branch nodes are used as exclusive switches, where the processing can follow one of the branches. A variable in the message context is used as a lookup variable to determine which branch to follow. Route nodes are used to communicate with another service (in most cases, a business service).
They cannot have any descendants in the message flow. When the route node sends the request message, the request processing is finished. Conversely, when it receives a response message, the response processing begins. Each pipeline is a sequence of stages that contain user-defined message processing actions. We can choose between a variety of supported actions, such as Publish, Service Callout, For Each, If... Then..., Raise Error, Reply, Resume, Skip, Delete, Insert, Replace, Validate, Alert, Log, Report, and more. Later in this article, we will show you how to use a pipeline on the Travel Approval process. However, let us first look at the Oracle Service Registry, which we will use together with the OSB.
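To make the registry-driven endpoint lookup mentioned at the start of this article more concrete, here is a minimal client-side sketch in Java. It is illustrative rather than taken from the original text: lookupEndpoint is a hypothetical helper standing in for a UDDI inquiry call, and only the BindingProvider property is standard JAX-WS.

import java.util.Map;
import javax.xml.ws.BindingProvider;

public class DynamicEndpointClient {

    // Hypothetical helper: in a real client this would query the
    // UDDI-compliant service registry for the current endpoint address
    static String lookupEndpoint(String serviceKey) {
        return "http://osb-host:7001/TravelApprovalProxy"; // placeholder result
    }

    // Rebind a generated JAX-WS port to whatever address the registry returns,
    // instead of the address baked into the WSDL at build time
    public static void rebind(Object port, String serviceKey) {
        Map<String, Object> ctx = ((BindingProvider) port).getRequestContext();
        ctx.put(BindingProvider.ENDPOINT_ADDRESS_PROPERTY, lookupEndpoint(serviceKey));
    }
}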


Web Services in Apache OFBiz

Packt
14 Sep 2010
12 min read
(For more resources on Apache, see here.)

Introduction

Ask five people what "web services" are and you will likely get at least six different opinions. Because the term evokes such ambiguity, we need to set ground rules for this article and define what we mean by "web services". Therefore, for the purposes of this book, "web services" are the interactive exchange of messages from one system to another, using the Internet as the network transport and HTTP/HTTPS as the messaging protocol. Message exchange transpires without human intervention and may be one-way (that is, called without an immediate response expected) or two-way.

Web services operate as producer/consumer systems where the producer, called the "service provider", offers one or more "services" to the "consumer", sometimes referred to as the "client". In the web services world, Internet-based service providers advertise and deliver services from locations throughout the Web. The Internet, and the Web in particular, serve as the highway over which potential web service clients travel to find service providers, make contact, and take delivery of products.

Service-oriented by design, OFBiz is a natural fit for building and deploying both web service clients and web service providers. Any OFBiz web application (webapp) may both consume web services and act as a web service provider, within the same application or in an unlimited number of OFBiz webapps. Within the web service producer/consumer model, service providers are responsible for accepting and validating requests for service and delivering the product. Consumers must find service providers, request service, and accept delivery of the product. A number of ad hoc and formal standards have evolved over the last few years to help facilitate web services, including, but not limited to, URL parameter passing, XML-RPC, and Simple Object Access Protocol (SOAP) based messaging. OFBiz provides built-in support, with tools and integration points, to make implementing both service provider and consumer web services a snap.

Requesting web services using URL parameters

There are many web service providers that require nothing more than URL parameter passing to request a service. These service providers take HTTP/HTTPS request parameters appended to a prospective consumer's URL and process requests according to the passed parameter values. Services are delivered back to the requestor through the client's HTTP/HTTPS response message or as a separate HTTP/HTTPS request message exchange. A real-world example of such a web service is the PayPal Payments Standard payment processing service. This web service expects requests to come in on an advertised URL, with request particulars appended to the URL as request parameters.
Prospective consuming systems send HTTP/HTTPS request messages to the PayPal URL asking for service. Once a request for service has been accepted by PayPal, the Payments Standard web service responds and delivers service using the HTTP/HTTPS response message. A separate web service, also involving PayPal, called the Instant Payment Notification (IPN) web service, is an example of a web service in which parameters are passed on the URL from the service provider to a consumer such as OFBiz. In this case, OFBiz listens on a configured URL for an HTTP/HTTPS request message from PayPal. When one is received, OFBiz responds appropriately, taking delivery of the PayPal service. Note: to implement a PayPal IPN client within an OFBiz web application, you must provide PayPal a destination URL that maps to a valid OFBiz request-map, and then process any incoming URL parameters according to the PayPal IPN directions.

Getting ready

To act as a web service client and pass parameters on the URL from within an OFBiz Java Event or Service, make sure you include the following Java packages in your program:

import java.io.IOException;
import java.net.URL;
import java.net.URLConnection;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

How to do it...

To request a service using URL parameters, follow these steps:

1. Within an existing OFBiz Java program (an Event, a Service, or any other) method, create a Java Map containing the name/value pairs that you wish to pass on the URL. For example:

   Map parameters = UtilMisc.toMap("param1", "A", "param2", "B");

2. Use the OFBiz-provided UtilHttp.urlEncodeArgs method to properly encode the parameters directly from the Map into a Java String. For example:

   String encodedParameters = UtilHttp.urlEncodeArgs(parameters, false);

3. Build a URL and add the encoded parameter string to it:

   String redirectUrl = "http://www.somehost.com";
   String redirectString = redirectUrl + "?" + encodedParameters;

4. Send the request by way of the HttpServletResponse object's redirect method:

   try {
       response.sendRedirect(redirectString);
   } catch (IOException e) {
       // process errors here
       return "error";
   }

How it works...

In a very simple web service client scenario, OFBiz acts on behalf of a user or another OFBiz process and makes a web service consumer request of a remote service provider. The request is sent using an HTTP/HTTPS request message with one or more service parameters appended to the URL. The URL is the location on the web of the service provider. Because the scenario described above is a one-way message exchange (the request message is delivered to the destination URL, but OFBiz does not wait around for a response message), assumptions must be made about how the web service provider will deliver service. Many times, service is delivered through a totally separate message exchange, initiated by first making the one-way request as described earlier.

To illustrate how this may play out, consider the PayPal Payments Standard web service. This web service may be invoked from OFBiz in at least two different ways. One approach is to include an HTML form on an OFBiz web page with a Submit button. The Submit button, when clicked, redirects the browser (with the request message) to the PayPal web service site, passing the form contents as is. An alternative implementation is to have an HTML form on an OFBiz web page with a Submit button that, when clicked, forwards the browser's request to an OFBiz Service (or Event).
In this case, the OFBiz Event or Service takes the form attribute values (from the request message) and creates a URL for the PayPal web service location. The original form attribute values, and any other information provided by the context or through database reads, are appended to the URL. OFBiz then redirects the user's original request message, using the sendRedirect method on the HttpServletResponse object, to effectively send the user (by way of a browser and an appropriately crafted URL) to the PayPal web service.

Building URL request parameters out of plain Java strings can be tricky, given the nature of the characters used to construct request parameters and delimit multiple parameter strings. For example, how do you pass a space character as part of a parameter when spaces are used as delimiters? Enter URL encoding. URL encoding takes certain characters, deemed "special" characters, and "escapes" them so that they may be used as part of a request parameter string. OFBiz provides easy-to-use encoder and decoder methods: the urlEncodeArgs method on the UtilHttp utility takes a Java Map as an argument and returns an encoded string that may then be appended to a URL, as shown earlier. Note: an exhaustive treatment of URL encoding is beyond the scope of this article. For more information on HTML and ASCII character codes and encoding symbols, please see the HTML Codes page.

There's more...

The urlEncodeArgs method has two modes of operation: passing false encodes using only the ampersand (&) symbol as the parameter separator, while passing true tells OFBiz to use the &amp; entity as the separator string. Based on the web service provider's instructions, you will need to determine which mode is appropriate for your application. Many web services do not require encoded values; you will need to verify for each service whether or not it is necessary to encode and decode URL parameters. For example, the PayPal IPN web service sends request parameters without any encoding. When returning IPN messages, however, the client web service is instructed to encode parameters. PayPal IPN is strictly a server-to-server messaging system (there is no human intervention) where message traffic is transported across the web from one server URL to another. PayPal is the web service provider while, in this scenario, OFBiz acts as the web service client. It works something like the following diagram shows:

Requesting web services using an HttpClient

Many web services are implemented using HTTP/HTTPS request methods and parameters passed as part of the request's message body. These requests for service mimic a user sitting at a browser, submitting HTML forms and waiting for a server response. Service providers read HTTP/HTTPS request header information and name/value request message parameter pairs, and deliver service through the HTTP/HTTPS response message. Writing clients for these types of web services usually requires a synchronous call to the service provider: from within your OFBiz client code, you initiate a call to a service provider and then wait until a response (or timeout) is received. Unlike the PayPal Payments Standard service described earlier, the OFBiz client program does not redirect HTTP/HTTPS request messages to another URL. There are a number of examples within the out-of-the-box OFBiz project of service providers that use the HTTP/HTTPS request message body to exchange information with OFBiz clients.
They include, but are not limited to:

- Authorize.net Payment Services
- ClearCommerce Payment Services
- Go Software RiTA
- ValueLink Prepaid/Gift Card Payment Services
- DHL, FedEx, United Parcel Service (UPS), and United States Post Office Shipping Services
- CDYNE Web Based Services
- PayPal Payments Pro

Getting ready

The first step in writing any web service client is to gather the following information about how the target web service operates:

- The URL on the web for the service provider
- Any connection parameters and/or HTTP/HTTPS request message header settings required by the service provider
- The HTTP/HTTPS connection verb (get, post, or other)

Within a Java program, to send and receive web service messages as a client using the built-in HTTP client utility provided by OFBiz, make sure you have the following Java packages imported in your program:

import org.ofbiz.base.util.HttpClient;
import org.ofbiz.base.util.HttpClientException;

How to do it...

You can request a web service by following these steps:

1. Create a connection string that includes the URL of the target web service and the type of request:

   String connectString = "http://www.some_web_service_url.com/serviceName";

2. Within your Java program, create an instance of the HttpClient object with the URL/connection string passed to the constructor, as shown here:

   HttpClient http = new HttpClient(connectString);

3. Create the content of your request as dictated by the target web service provider. For example, some web services expect XML documents; others, simple string parameters. A web service that expects a string of name/value pairs could be coded as follows:

   http.setParameter("Param1", "X");
   http.setParameter("Param2", "Y");

4. Send your request using the appropriate method on the HttpClient object. For "get" requests, use the get method. For "post" requests, use the post method, as shown here:

   String response = null;
   try {
       response = http.post();
   } catch (HttpClientException e) {
       // Process error conditions here
   }

5. Handle any service response inline. Unlike the asynchronous nature of the PayPal IPN web service described earlier, HttpClient-based web services process return calls inline with the initial web service call.

Under the covers, the HttpClient utility handles all the network connection setup and lower-level message transmission. There is no need to release or close the connection, as OFBiz manages the handoff of connections.

How it works...

When using the HttpClient to access web services remote to OFBiz, you make the consumer-side call synchronously; that is, you wait for the return from the remote web service call within your program. The OFBiz integration of the HttpClient utility manages the details necessary to open the network connection, maintain direct request and response message exchanges, and close the connection upon completion of processing.

There's more...

The OFBiz implementation of the HttpClient object provides several convenience constructors, which may be useful depending on your processing needs.
These include:

   // To create a new client object and connect using a URL object instead of a String
   URL url = new URL("https://www.some_host.com/");
   HttpClient http = new HttpClient(url);

   // To create a new client object using a Java Map containing request parameters
   HttpClient http = new HttpClient(url, UtilMisc.toMap("param1", "X", "param2", "Y"));

   // To create a new client object with a parameter map and header settings
   HttpClient http = new HttpClient(connectString, UtilMisc.toMap("param1", "X"),
           UtilMisc.toMap("User-Agent", "Mozilla/4.0"));

See also

OFBiz provides an integration of the Apache HttpClient software package, the Jakarta Commons HTTP Client, which is accessed by creating a new HttpClient object. Any method you can call on the original Apache HttpClient object is available in the OFBiz implementation. This includes full support for HTTPS (SSL) clients. For more information, please see the Jakarta Commons HTTP Client web page.
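Pulling the steps above together, here is a self-contained sketch of a simple POST-based client call. The endpoint URL and parameter names are placeholders rather than a real provider, and the wrapper class is illustrative; only the HttpClient calls themselves come from the recipe.

import org.ofbiz.base.util.HttpClient;
import org.ofbiz.base.util.HttpClientException;

public class ExampleServiceCall {

    public static String callRemoteService() {
        // Placeholder endpoint; substitute the provider's advertised URL
        HttpClient http = new HttpClient("http://www.some_web_service_url.com/serviceName");
        http.setParameter("Param1", "X");
        http.setParameter("Param2", "Y");
        try {
            // Synchronous call: blocks until the provider responds or times out
            return http.post();
        } catch (HttpClientException e) {
            // Surface the failure to the caller; real code would log the exception
            return null;
        }
    }
}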


Process Driven SOA Development

Packt
13 Sep 2010
9 min read
(For more resources on Oracle, see here.)

Business Process Management and SOA

One of the major benefits of a Service-Oriented Architecture is its ability to align IT with business processes. Business processes are important because they define the way business activities are performed. Business processes change as the company evolves and improves its operations. They also change in order to make the company more competitive. Today, IT is an essential part of business operations. Companies are simply unable to do business without IT support. However, this places a high level of responsibility on IT. An important part of this responsibility is the ability of IT to react to changes in a quick and efficient manner. Ideally, IT must instantly respond to business process changes.

In most cases, however, IT is not flexible enough to adapt application architecture to the changes in business processes quickly. Software developers require time to modify application behavior. In the meantime, the company is stuck with old processes. In a highly competitive marketplace such delays are dangerous, and the threat is exacerbated by a reliance on traditional software development to make quick changes within an increasingly complex IT architecture. The major problem with traditional approaches to software development is the huge semantic gap between IT and the process models. The traditional approach to software development has been focused on functionalities rather than on end-to-end support for business processes. It usually requires the definition of use cases, sequence diagrams, class diagrams, and other artifacts, which bring us to the actual code in a programming language such as Java, C#, C++, and so on.

SOA reduces the semantic gap by introducing a development model that aligns the IT development cycle with the business process lifecycle. In SOA, business processes can be executed directly and integrated with existing applications through services. To understand this better, let's look at the four phases of the SOA lifecycle:

1. Process modeling: This is the phase in which process analysts work with process owners to analyze the business process and define the process model. They define the activity flow, information flow, roles, and business documents. They also define business policies and constraints, business rules, and performance measures. Performance measures are often called Key Performance Indicators (KPIs). Examples of KPIs include activity turnaround time, activity cost, and so on. Usually Business Process Modeling Notation (BPMN) is used in this phase.

2. Process implementation: This is the phase in which developers work with process analysts to implement the business process, with the objective of providing end-to-end support for the process. In an SOA approach, the process implementation phase includes process implementation with the Business Process Execution Language (BPEL), process decomposition to the services, implementation or reuse of services, and integration.

3. Process execution and control: This is the actual execution phase, in which the process participants execute the various activities of the process. In end-to-end support for business processes, it is very important that IT drives the process and directs process participants to execute activities, and not vice versa, where the actual process drivers are employees. In SOA, processes execute on a process server.
Process control is an important part of this phase, during which process supervisors or process managers verify that the process is executing optimally. If delays occur, exceptions arise, resources are unavailable, or other problems develop, process supervisors or managers can take corrective actions.

4. Process monitoring and optimization: This is the phase in which process owners monitor the KPIs of the process using Business Activity Monitoring (BAM). Process analysts, process owners, process supervisors, and key users examine the process and analyze the KPIs while taking into account changing business conditions. They examine business issues and make optimizations to the business process.

The following figure shows how a process enters this cycle and goes through the various stages. Once optimizations have been identified and selected, the process returns to the modeling phase, where the optimizations are applied. Then the process is re-implemented and the whole lifecycle is repeated. This is referred to as an iterative-incremental lifecycle, because the process is improved at each stage.

Organizational aspects of SOA development

SOA development, as described in the previous section, differs considerably from traditional development. SOA development is process-centric and keeps the modeler and the developer focused on the business process and on end-to-end support for the process, thereby efficiently reducing the gap between business and IT. The success of the SOA development cycle relies on correct process modeling. Only when processes are modeled in detail can we develop end-to-end support that will work. Exceptional process flows also have to be considered. This can be a difficult task, one that is beyond the scope of the IT department (particularly when viewed from the traditional perspective).

To make process-centric SOA projects successful, some organizational changes are required. Business users with a good understanding of the process must be motivated to participate actively in the process modeling. Their active participation must not be taken for granted, lest they find other work "more useful," particularly if they do not see the added value of process modeling. Therefore, a concise explanation as to why process modeling makes sense can be a very valuable time investment. A good strategy is to gain top management support. It makes enormous sense to explain two key factors to top management: first, why a process-centric approach and end-to-end support for processes make sense, and second, why the IT department cannot successfully complete the task without the participation of business users. Usually top management will understand the situation rather quickly and will instruct business users to participate.

Obviously, the proposed process-centric development approach must become an ongoing activity. This will require the formalization of certain organizational structures; otherwise, it will be necessary to seek approval for each and every project. We have already seen that the proposed approach outgrows the organizational limits of the IT department. Many organizations establish a BPM/SOA Competency Center, which includes business users and all the other profiles required for SOA development. This also includes the process analyst, process implementation, service development, and presentation layer groups, as well as SOA governance. Perhaps the greatest responsibility of SOA development is to orchestrate the aforementioned groups so that they work towards a common goal.
This is the responsibility of the project manager, who must work in close connection with the governance group. Only in this way can SOA development be successful, both in the short term (developing end-to-end applications for business processes) and in the long term (developing a flexible, agile IT architecture that is aligned with business needs).

Technology aspects of SOA development

SOA introduces technologies and languages that enable the SOA development approach. Particularly important are BPMN, which is used for business process modeling, and BPEL, which is used for business process execution. BPMN is the key technology for process modeling. The process analyst group must have in-depth knowledge of BPMN and process modeling concepts. When modeling processes for SOA, they must be modeled in detail. Using SOA, we model business processes with the objective of implementing them in BPEL and executing them on the process server. Process models can be made executable only if all the relevant information needed for the actual execution is captured. We must identify the individual activities that are atomic from the perspective of the execution. We must model exceptional scenarios too. Exceptional scenarios define how the process behaves when something goes wrong, and in the real world, business processes can and do go wrong. We must model how to react to exceptional situations and how to recover appropriately.

Next, we automate the process. This requires mapping the BPMN process model to an executable representation in BPEL, which is the responsibility of the process implementation group. BPMN can be converted to BPEL almost automatically, and vice versa, which guarantees that the process map is always in sync with the executable code. However, the executable BPEL process also has to be connected with the business services. Each process activity is connected with the corresponding business service. Business services are responsible for fulfilling the individual process activities.

SOA development is most efficient if you have a portfolio of business services that can be reused, including lower-level and intermediate technical services. Business services can be developed from scratch, exposed from existing systems, or outsourced. This task is the responsibility of the service development group. In theory, it makes sense for the service development group to first develop all business services; only then would the process implementation group start to compose those services into processes. However, in the real world this is often not the case, because you will probably not have the luxury of time to develop the services first and only then start on the processes. And even if you did have enough time, it would be difficult to know which business services will be required by processes. Therefore, both groups usually work in parallel, which is a great challenge. It requires interaction between them and strict supervision by the SOA governance group and the project manager; otherwise, the results of the two groups (the process implementation group and the service development group) will be incompatible.

Once you have successfully implemented the process, it can be deployed on the process server. In addition to executing processes, a process server provides other valuable information, including a process audit trail, lists of successfully completed processes, and a list of terminated or failed processes.
This information is helpful in controlling the process execution and in taking any necessary corrective measures.

The services and processes communicate using the Enterprise Service Bus (ESB). The services and processes are registered in the UDDI-compliant service registry. Another part of the architecture is the rule engine, which serves as a central place for business rules. For processes with human tasks, user interaction is obviously important, and is connected to identity management.

The SOA platform also provides BAM. BAM helps to measure the key performance indicators of the process, and provides valuable data that can be used to optimize processes. The ultimate goal of each BAM user is to optimize process execution, to improve process efficiency, and to sense and react to important events. BAM ensures that we start optimizing processes where it makes the most sense. Traditionally, process optimization has been based on simulation results or, even worse, on guessing where bottlenecks might be. BAM, on the other hand, gives more reliable and accurate data, which leads to better decisions about where to start with optimizations.

The following figure illustrates the SOA layers:


Apache OFBiz Services

Packt
09 Sep 2010
10 min read
(For more resources on Apache see here.)

Introduction

OFBiz has been characterized as having an "event-driven, service-oriented architecture". Long before it was fashionable to build complex enterprise computer systems using service-oriented techniques, OFBiz implemented a number of architectural features enabling a service-oriented design. These features include:

A context-aware Service Engine available for use across the entire framework or, if configured, externally through supported network interfaces. OFBiz Service consumers need not concern themselves with the location of the called Service nor with a Service's implementation details. OFBiz handles all that transparently for both the service provider and the consumer.

Multiple invocation methods, including: inline or synchronous with the calling program; out-of-band or asynchronous from the caller's processing logic; and/or as a scheduled job for recurring execution. OFBiz handles all input, output, and context parameter transfers seamlessly for both the Service provider and the Service caller.

Chaining of Services for a true, event-driven Service platform and implementation of complex workflows. Services may be configured to be invoked based on external events or triggers. Once triggered, an OFBiz Service may call other Service(s) based on additional triggering Events and/or conditions. Any combination of Services may be chained together to form Service Event Condition Action(s), or SECAs.

A fully integrated job scheduler for recurring and single-use asynchronous job scheduling. The OFBiz job scheduler handles all the mundane coordination tasks associated with job scheduling, including calendar lookups, frequency, and interval timing.

Service creation and implementation tools, including selectable input and output validations based on configured parameter types; authentication and authorization checks integrated with OFBiz user login processing; and even localization preservation across Service calls.

The heart of the OFBiz service-oriented implementation is the Service Engine factory. Using a factory pattern, OFBiz provides an easily extendable Service management and invocation tool supporting any number of concurrent Services and any number of third-party execution engines including, but not limited to: Java, Groovy, JavaScript, JPython, and the OFBiz "simple" Service (based on the OFBiz Mini-Language). By offloading Service implementation to programming language-specific engines, OFBiz Services may be written and implemented in any language that suits the developer's fancy. The Service Engine factory may be called from anywhere in the framework to handle the details of Service invocation, as shown in the following diagram:

Managing existing OFBiz Services

Out-of-the-box, OFBiz comes with many hundreds of Services.
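Before turning to the WebTools dashboard, it may help to see what the invocation methods described above look like from application code. The following Java sketch is illustrative only (it is not from the book): it assumes a DispatchContext obtained inside another Java service, and uses the testScv test service and its message parameter that appear in the recipes below.

import java.util.HashMap;
import java.util.Map;

import org.ofbiz.service.DispatchContext;
import org.ofbiz.service.GenericServiceException;
import org.ofbiz.service.LocalDispatcher;

public class ServiceInvocationExamples {

    // Typically called from inside another Java service, which receives a
    // DispatchContext from the Service Engine.
    public static void invokeExamples(DispatchContext dctx)
            throws GenericServiceException {
        LocalDispatcher dispatcher = dctx.getDispatcher();

        Map<String, Object> context = new HashMap<String, Object>();
        context.put("message", "hello from the dispatcher");

        // Synchronous: blocks until the Service completes and returns its
        // OUT parameters (for testScv, the "resp" attribute).
        Map<String, Object> result = dispatcher.runSync("testScv", context);

        // Asynchronous: returns immediately; the job scheduler runs the
        // Service out-of-band.
        dispatcher.runAsync("testScv", context);

        // Scheduled: run once at a given time (here, one minute from now).
        dispatcher.schedule("testScv", context,
                System.currentTimeMillis() + 60 * 1000);
    }
}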
To find and otherwise manage OFBiz Services, a fully featured Service management dashboard is provided. Privileged users may conveniently handle all Service-related tasks using the OFBiz WebTools toolkit as described.

How to do it...

To access the OFBiz Service management main web page, navigate to the following WebTools URL:

https://localhost:8443/webtools/control/ServiceList

When prompted for a username and password, log in as the administrative user (username "admin", password "ofbiz").

How it works...

WebTools provides a dashboard-like UI to view, manage, run, and support OFBiz Services. The main web page for OFBiz Service support is found at https://localhost:8443/webtools/control/ServiceList (note the use of case-sensitive addressing). This web page is shown in the following figure:

There's more...

Asynchronous and scheduled Service requests are managed by the OFBiz job scheduler. The job scheduler consists of multiple execution threads as configured in the ~framework/service/config/serviceengine.xml file. Threads run from one or more thread "pools". From the OFBiz Thread List web page, you may see at a glance the configuration of the OFBiz job scheduler as well as thread and thread pool usage, as shown here:

Calling a Service from an HTML form

Services may be called directly from HTML forms by setting the HTML form's action element to point to the URL of a controller.xml request-map that resolves either to an OFBiz Event that calls a Service or directly to a Service. To demonstrate this, the following section uses the WebTools Run Service HTML form to invoke a Service.

Getting ready

For the purposes of this recipe, we shall use an existing Service called testScv and invoke it from the WebTools UI. In this example, we will not be creating an HTML form nor creating a Service, but rather using an existing form and Service:

1. To view the results of the testScv Service's execution, open an OFBiz console window from the command line.
2. Alternatively, to view the results of the execution, run the Unix tail command (tail -f ofbiz.log) on the ~runtime/ofbiz.log file while performing this recipe.

How to do it...

Services may be called directly from HTML forms by following these steps:

1. Navigate directly to the Run Service WebTools web page by selecting the Run Service URL: ~webtools/control/runService.
2. From the Run Service form, enter testScv in the field labeled Service.
3. Leave the field labeled Pool as is.
4. Hit the Submit button to bring up the Schedule Job web page.
5. On the Schedule Job web page, enter any values for the requested form fields. These fields correspond to any Service INPUT parameters as configured in the Service definition for testScv.
6. Hit the Submit button. The testScv has been called by using an HTML form.

To verify that the input parameters entered in step 5 are processed by testScv, inspect the ofbiz.log file as shown:

How it works...

Any OFBiz Service may be called from an OFBiz webapp HTML form simply by:

1. Creating an HTML form with an action attribute URL for the target Service.
2. Creating a controller.xml entry with a request-map for the target Service that matches the HTML form's action URL.
For example, the HTML form for the Run Service tool has an action value as shown in the following code snippet:

<form name="scheduleForm" method="POST"
      action="/webtools/control/scheduleService">
  <input type="text" name="testScv"/>
  <!-- Stuff intentionally removed -->
</form>

When the form is submitted, the URL set within the form's action (/webtools/control/scheduleService) is intercepted by the WebTools controller servlet, which consults its controller.xml file to determine how to handle this request. The controller.xml entry for this URL is shown here:

<request-map uri="scheduleService">
  <security https="true" auth="true"/>
  <event type="java" path="org.ofbiz.webapp.event.CoreEvents"
         invoke="scheduleService"/>
  <response name="success" type="view" value="FindJob"/>
  <response name="sync_success" type="view" value="serviceResult"/>
  <response name="error" type="view" value="scheduleJob"/>
</request-map>

The URL request is mapped to an OFBiz Event called scheduleService. Inside the scheduleService Event, a call is made to the Service Engine to invoke testScv using the Java Service implementation engine, as shown in the testScv Service definition file:

<service name="testScv" engine="java" export="true" validate="false"
         require-new-transaction="true"
         location="org.ofbiz.common.CommonServices" invoke="testScv">
  <description>Test service</description>
  <attribute name="defaultValue" type="Double" mode="IN" default-value="999.9999"/>
  <attribute name="message" type="String" mode="IN" optional="true"/>
  <attribute name="resp" type="String" mode="OUT"/>
</service>

After testScv has been executed, processing control returns to the OFBiz request handler (part of the controller servlet) and then back to the calling webapp as configured in the controller.xml file.

Calling asynchronous Services from HTML forms

You can use WebTools and HTML forms to run a Service asynchronously, either one time or on a recurring basis. The following demonstrates the steps necessary to schedule testScv to be executed one time, asynchronously, as a scheduled job.

Getting ready

Navigate directly to the Schedule Job web page:

https://localhost:8443/webtools/control/scheduleJob

How to do it...

To execute testScv asynchronously, follow these steps:

1. In the Service form field, enter testScv as shown in the screenshot.
2. Leave the Job field empty. OFBiz will pick a unique name automatically.
3. Use the calendar pop-up icon directly next to the Start Date/Time field to select any date and time after the current wall-clock time.
4. Submit this form by clicking the Submit button to bring up the Service Parameters form, which allows the caller to provide alternative input parameters to the Service.
5. Add INPUT parameters as shown:
6. Submit the Service Parameters form. testScv will be scheduled to run at the specified time.

How it works...

Scheduled Services are run asynchronously from the calling program. As such, requests for scheduling are handled by the OFBiz job scheduler. Each scheduled Service is assigned a unique job identifier (jobId) and execution pool by the job scheduler. After the Service is scheduled for execution, control returns to the calling program.

There's more...

Using the OFBiz job scheduler Job List web page, you may find all scheduled jobs.
In the following screenshot, testScv is shown as scheduled for execution on the Job List Search Results web page as specified in the recipe:

Because testScv only writes output to the logfile, we may verify successful execution and scheduling by observing the ofbiz.log runtime logfile as shown:

Calling a Service many times from an HTML form

It is possible to call a Service multiple times from a single HTML form (for example, one time for each row in the form) by placing a line similar to the following, with a service-multi event type, in the controller.xml request-map entry of the target Service.

How to do it...

Follow these steps to call a Service multiple times:

1. Use the event type service-multi within the controller.xml request-map entry, as shown here:

<request-map uri="someService">
  <event type="service-multi" invoke="someService"/>
  <!-- Other request-map statements intentionally left out -->
</request-map>

2. If using an OFBiz Form Widget, add a line similar to the following to the Form Widget definition. Note that list-name is the name of the list that is generating multiple rows for the HTML form:

<form name="someFormName" type="multi" use-row-submit="true"
      list-name="someList" target="someServiceName">
  <!-- Form fields removed for reading clarity -->
</form>

3. If using a FreeMarker template, add lines similar to the following:

<form name="mostrecent" method="POST"
      action="<@ofbizUrl>someService</@ofbizUrl>">
  <#assign row=0/>
  <#list someList as listItem>
    <#-- HTML removed for reading clarity. Each row has a unique input
         name associated with it, allowing this single form to be
         submitted to the "someServiceName" Service from each row -->
    <input type="radio" name="someFormField_o_${row}" value="someValue" checked/>
    <input type="radio" name="someFormField_o_${row}" value="someValue"/>
  </#list>
</form>

How it works...

The Event type service-multi provides a convenient shortcut for coding HTML forms that are embedded within lists. Each list item is automatically converted to a unique form field so that a single Service may be called from any row within the list.
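Throughout these recipes we have invoked testScv without looking at its implementation. For reference, a Java Service behind a definition like testScv's follows the pattern sketched below. This is illustrative only (the class name and log messages are ours, not the actual org.ofbiz.common.CommonServices source):

import java.util.Map;

import org.ofbiz.base.util.Debug;
import org.ofbiz.service.DispatchContext;
import org.ofbiz.service.ServiceUtil;

public class MyServices {

    public static final String module = MyServices.class.getName();

    // The signature the Java service engine expects: a public static method
    // taking a DispatchContext and the IN parameters, returning a Map of
    // OUT parameters.
    public static Map<String, Object> testScv(DispatchContext dctx,
            Map<String, ? extends Object> context) {
        String message = (String) context.get("message");           // IN attribute
        Double defaultValue = (Double) context.get("defaultValue"); // IN attribute
        Debug.logInfo("testScv called with message: " + message
                + ", defaultValue: " + defaultValue, module);

        Map<String, Object> result = ServiceUtil.returnSuccess();
        result.put("resp", "service done");                         // OUT attribute
        return result;
    }
}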


Drools JBoss Rules 5.0: Complex Event Processing

Packt
08 Sep 2010
15 min read
(For more resources on JBoss see here.)

CEP and ESP

CEP and ESP are styles of processing in an Event Driven Architecture (a general introduction to Event Driven Architecture can be found at http://elementallinks.typepad.com/bmichelson/2006/02/eventdriven_arc.html). One of the core benefits of such an architecture is that it provides loose coupling of its components. A component simply publishes events about actions that it is executing, and other components can subscribe/listen to these events. The producer and the subscriber are completely unaware of each other. A subscriber listens for events and doesn't care where they come from. Similarly, a publisher generates events and doesn't know anything about who is listening to those events. Some orchestration layer then deals with the actual wiring of subscribers to publishers.

An event represents a significant change of state. It usually consists of a header and a body. The header contains meta information such as its name, time of occurrence, duration, and so on. The body describes what happened. For example, if a transaction has been processed, the event body would contain the transaction ID, the amount transferred, the source account number, the destination account number, and so on.

CEP deals with complex events. A complex event is a set of simple events. For example, a sequence of large withdrawals may raise a suspicious transaction event. The simple events are considered to infer that a complex event has occurred. ESP is more about real-time processing of a huge volume of events, for example, calculating the real-time average transaction volume over time. More information about CEP and ESP can be found on the web site http://complexevents.com/ or in a book written by Prof. David Luckham, The Power of Events. This book is considered the milestone for the modern research and development of CEP. There are many existing pure CEP/ESP engines, both commercial and open source. Drools Fusion enhances rule-based programming with event support. It makes use of its Rete algorithm and provides an alternative to existing engines.

Drools Fusion

Drools Fusion is a Drools module that is a part of the Business Logic Integration Platform. It is the Drools event processing engine, covering both CEP and ESP. Each event has a type, a time of occurrence, and possibly a duration. Both point-in-time (zero duration) and interval-based events are supported. Events can also contain other data like any other facts—properties with a name and type. All events are facts, but not all facts are events. An event's state should not be changed; however, it is valid to populate the unpopulated values. Events have clear life cycle windows and may be transparently garbage collected after the life cycle window expires (for example, we may be interested only in transactions that happened in the last 24 hours). Rules can deal with time relationships between events.

Fraud detection

It will be easier to explain these concepts by using an example—a fraud detection system. Fraud in banking systems is becoming a major concern. The number of online transactions is increasing every day, and an automatic system for fraud detection is needed. The system should analyze various events happening in a bank and, based on a set of rules, raise an appropriate alarm. This problem cannot be solved by the standard Drools rule engine. The volume of events is huge and they arrive asynchronously. If we simply inserted them into the knowledge session, we would soon run out of memory.
While the Rete algorithm behind Drools doesn't have any theoretical limitation on the number of objects in the session, we could use the processing power more wisely. Drools Fusion is the right candidate for this kind of task.

Problem description

Let's consider the following set of business requirements for the fraud detection system:

1. If a notification is received from a customer about a stolen card, block this account and any withdrawals from this account.
2. Check each transaction against a blacklist of account numbers. If the transaction is transferring money from/to such an account, then flag this transaction as suspicious with the maximum severity.
3. If there are two large debit transactions from the same account within a ninety-second period and each transaction is withdrawing more than 300% of the average monthly (30 days) withdrawal amount, flag these transactions as suspicious with minor severity.
4. If there is a sequence of three consecutive, increasing, debit transactions originating from the same account within a three-minute period, and these transactions are together withdrawing more than 90% of the account's average balance over 30 days, then flag those transactions as suspicious with major severity and suspend the account.
5. If the number of withdrawals over a day is 500% higher than the average number of withdrawals over a 30-day period, and the account is left with less than 10% of the average balance over a month (30 days), then flag the account as suspicious with minor severity.
6. Duplicate transactions check—if two transactions occur in a time window of 15 seconds that have the same source/destination account number, are of the same amount, and just differ in the transaction ID, then flag those transactions as duplicates.
7. Monitoring: Monitor the average withdrawal amount over all of the accounts for 30 days, and monitor the average balance across all of the accounts.

Design and modeling

Looking at the requirements, we'll need a way of flagging a transaction as suspicious. This state can be added to an existing Transaction type, or we can externalize this state to a new event type. We'll do the latter. The following new events will be defined:

TransactionCreatedEvent—An event that is triggered when a new transaction is created. It contains a transaction identifier, source account number, destination account number, and the actual amount transferred.

TransactionCompletedEvent—An event that is triggered when an existing transaction has been processed. It contains the same fields as the TransactionCreatedEvent class.

AccountUpdatedEvent—An event that is triggered when an account has been updated. It contains the account number, current balance, and the transaction identifier of the transaction that initiated this update.

SuspiciousAccount—An event triggered when there is some sort of suspicion around the account. It contains the account number and severity of the suspicion. Severity is an enumeration that can have two values, MINOR and MAJOR. This event's implementation is shown in the following code.

SuspiciousTransaction—Similar to SuspiciousAccount, this is an event that flags a transaction as suspicious. It contains a transaction identifier and severity level.

LostCardEvent—An event indicating that a card was lost. It contains an account number.

One of the events described—SuspiciousAccount—is shown in the following code. It also defines the SuspiciousAccountSeverity enumeration that encapsulates the various severity levels that the event can represent. The event will define two properties.
One of them is the already mentioned severity; the other, accountNumber, identifies the account.

/**
 * Marks an account as suspicious.
 */
public class SuspiciousAccount implements Serializable {

  public enum SuspiciousAccountSeverity {
    MINOR, MAJOR
  }

  private final Long accountNumber;
  private final SuspiciousAccountSeverity severity;

  public SuspiciousAccount(Long accountNumber,
      SuspiciousAccountSeverity severity) {
    this.accountNumber = accountNumber;
    this.severity = severity;
  }

  private transient String toString;

  @Override
  public String toString() {
    if (toString == null) {
      toString = new ToStringBuilder(this).appendSuper(super.toString())
          .append("accountNumber", accountNumber)
          .append("severity", severity).toString();
    }
    return toString;
  }
}

Code listing 1: Implementation of the SuspiciousAccount event.

Please note that the equals and hashCode methods in SuspiciousAccount in the preceding code listing are not overridden. This is because an event represents an active entity, which means that each instance is unique. The toString method is implemented using org.apache.commons.lang.builder.ToStringBuilder.

All of these event classes are lightweight, and they have no references to other domain classes (no object references; only a number—accountNumber—in this case). They also implement the Serializable interface. This makes them easier to transfer between JVMs. As best practice, this event is immutable. The two properties above (accountNumber and severity) are marked as final. They can be set only through the constructor (there are no set methods—only two get methods, which were excluded from this code listing).

The events themselves don't carry a time of occurrence—a time stamp (they easily could, if we needed it; we'll see how in the next set of code listings). When the event is inserted into the knowledge session, the rule engine assigns such a time stamp to the FactHandle that is returned. (Remember? session.insert(..) returns a FactHandle.) In fact, there is a special implementation of FactHandle called EventFactHandle. It extends DefaultFactHandle (which is used for normal facts) and adds a few additional fields, for example, startTimestamp and duration. Both contain millisecond values and are of type long.

OK, we now have the event classes and we know that there is a special FactHandle for events. However, we still don't know how to tell Drools that our class represents an event. Drools type declarations provide this missing link. They can define new types and enhance existing types. For example, to specify that the class TransactionCreatedEvent is an event, we have to write:

declare TransactionCreatedEvent
  @role( event )
end

Code listing 2: Event role declaration (cep.drl file).

This code can reside inside a normal .drl file. If our event had a time stamp property or a duration property, we could map it into the startTimestamp or duration properties of EventFactHandle by using the following mapping:

@duration( durationProperty )

Code listing 3: Duration property mapping.

The name in brackets is the actual name of the property of our event that will be mapped to the duration property of EventFactHandle. This can be done similarly for the startTimestamp property. As an event's state should not be changed (only unpopulated values can be populated), think twice before declaring existing beans as events. Modification to a property may result in unpredictable behavior.
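The remaining event classes follow the same conventions (immutable, Serializable, no references to other domain classes). As an illustration, TransactionCreatedEvent might be sketched as follows. Note that this sketch is ours, not the book's actual code; the field names are assumptions based on the design list above.

import java.io.Serializable;
import java.math.BigDecimal;

/**
 * Signals that a new transaction has been created (illustrative sketch).
 */
public class TransactionCreatedEvent implements Serializable {

    private final Long transactionId;
    private final Long fromAccountNumber;
    private final Long toAccountNumber;
    private final BigDecimal amount;

    public TransactionCreatedEvent(Long transactionId, Long fromAccountNumber,
            Long toAccountNumber, BigDecimal amount) {
        this.transactionId = transactionId;
        this.fromAccountNumber = fromAccountNumber;
        this.toAccountNumber = toAccountNumber;
        this.amount = amount;
    }

    // Get methods only: the event is immutable, matching the best practice
    // described for SuspiciousAccount.
    public Long getTransactionId() { return transactionId; }
    public Long getFromAccountNumber() { return fromAccountNumber; }
    public Long getToAccountNumber() { return toAccountNumber; }
    public BigDecimal getAmount() { return amount; }
}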
In the following examples, we'll also see how to define a new type declaration.

Fraud detection rules

Let's imagine that the system processes thousands of transactions at any given time. It is clear that this is challenging in terms of time and memory consumption. It is simply not possible to keep all of the data (transactions, accounts, and so on) in memory. A possible solution is to keep all of the accounts in memory, as there won't be that many of them (in comparison to transactions), and to keep only transactions for a certain period. With Drools Fusion, we can do this very easily by declaring that a Transaction is an event. The transaction will then be inserted into the knowledge session through an entry point. Each entry point defines a partition in the input data storage, reducing the match space and allowing patterns to target specific partitions. Matching data from a partition requires an explicit reference at the pattern declaration. This makes sense, especially if there are large quantities of data and only some rules are interested in them. We'll look at entry points in the following example.

If you are still concerned about the volume of objects in memory, this solution can be easily partitioned, for example, by account number. There might be more servers, each processing only a subset of accounts (a simple routing strategy might be: accountNumber modulo totalNumberOfServersInCluster). Each server would then receive only the appropriate events.

Notification

The requirement we're going to implement here is essentially to block an account whenever a LostCardEvent is received. This rule will match two facts: one of type Account and one of type LostCardEvent. The rule will then set the status of this account to blocked. The implementation of the rule is as follows:

rule notification
  when
    $account : Account( status != Account.Status.BLOCKED )
    LostCardEvent( accountNumber == $account.number )
      from entry-point LostCardStream
  then
    modify($account) {
      setStatus(Account.Status.BLOCKED)
    };
end

Code listing 4: Notification rule that blocks an account (cep.drl file).

As we already know, Account is an ordinary fact from the knowledge session. The second fact—LostCardEvent—is an event from an entry point called LostCardStream. Whenever a new event is created and goes through the entry point LostCardStream, this rule tries to match (checks whether its conditions can be satisfied). If there is an Account in the knowledge session that hasn't matched with this event yet, and all conditions are met, the rule is activated. The consequence sets the status of the account to blocked in a modify block. As we're updating the account in the consequence and also matching on it in the condition, we have to add a constraint that matches only non-blocked accounts to prevent looping (see above: status != Account.Status.BLOCKED).

Test configuration setup

Following the best practice that every code/rule needs to be tested, we'll now set up a class for writing unit tests. All of the rules will be written in a file called cep.drl. When creating this file, just make sure it is on the classpath. The creation of the knowledgeBase won't be shown; it is similar to the previous tests that we've written. We just need to change the default knowledge base configuration slightly:

KnowledgeBaseConfiguration config = KnowledgeBaseFactory
    .newKnowledgeBaseConfiguration();
config.setOption( EventProcessingOption.STREAM );

Code listing 5: Enabling STREAM event processing mode on knowledge base configuration.
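For completeness, the omitted knowledge base creation typically looks like the following under the Drools 5 knowledge-builder API. This is a sketch: the cep.drl file name comes from the text, while the helper class name is ours.

import org.drools.KnowledgeBase;
import org.drools.KnowledgeBaseConfiguration;
import org.drools.KnowledgeBaseFactory;
import org.drools.builder.KnowledgeBuilder;
import org.drools.builder.KnowledgeBuilderFactory;
import org.drools.builder.ResourceType;
import org.drools.conf.EventProcessingOption;
import org.drools.io.ResourceFactory;

public class KnowledgeBaseSetup {

    public static KnowledgeBase createKnowledgeBase() {
        // Parse and compile the rules in cep.drl (must be on the classpath).
        KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
        kbuilder.add(ResourceFactory.newClassPathResource("cep.drl"),
                ResourceType.DRL);
        if (kbuilder.hasErrors()) {
            throw new IllegalStateException(kbuilder.getErrors().toString());
        }

        // Create the knowledge base in STREAM event processing mode.
        KnowledgeBaseConfiguration config = KnowledgeBaseFactory
                .newKnowledgeBaseConfiguration();
        config.setOption(EventProcessingOption.STREAM);
        KnowledgeBase knowledgeBase = KnowledgeBaseFactory.newKnowledgeBase(config);
        knowledgeBase.addKnowledgePackages(kbuilder.getKnowledgePackages());
        return knowledgeBase;
    }
}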
This option enables the STREAM event processing mode. The KnowledgeBaseConfiguration from the preceding code is then used when creating the knowledge base—KnowledgeBaseFactory.newKnowledgeBase(config).

Part of the setup is also clock initialization. We already know that every event has a time stamp. This time stamp comes from a clock inside the knowledge session. Drools supports several clock types, for example, a real-time clock or a pseudo clock. The real-time clock is the default and should be used in normal circumstances. The pseudo clock is especially useful for testing, as we have complete control over the time.

The following initialize method sets up a pseudo clock. This is done by setting the clock type on KnowledgeSessionConfiguration and passing this object to the newStatefulKnowledgeSession method of knowledgeBase. The initialize method then makes this clock available as a test instance variable called clock by calling session.getSessionClock(), as we can see in the following code:

public class CepTest {
  static KnowledgeBase knowledgeBase;
  StatefulKnowledgeSession session;
  Account account;
  FactHandle accountHandle;
  SessionPseudoClock clock;
  TrackingAgendaEventListener trackingAgendaEventListener;
  WorkingMemoryEntryPoint entry;

  @Before
  public void initialize() throws Exception {
    KnowledgeSessionConfiguration conf =
        KnowledgeBaseFactory.newKnowledgeSessionConfiguration();
    conf.setOption( ClockTypeOption.get( "pseudo" ) );
    session = knowledgeBase.newStatefulKnowledgeSession(conf, null);
    clock = (SessionPseudoClock) session.getSessionClock();
    trackingAgendaEventListener = new TrackingAgendaEventListener();
    session.addEventListener(trackingAgendaEventListener);
    account = new Account();
    account.setNumber(123456l);
    account.setBalance(BigDecimal.valueOf(1000.00));
    accountHandle = session.insert(account);
  }

Code listing 6: Unit test setup (CepTest.java file).

The preceding initialize method also creates an event listener and passes it into the session. The event listener is called TrackingAgendaEventListener. It simply tracks all of the rule executions. It is useful for unit testing to verify whether a rule fired or not. Its implementation is as follows:

public class TrackingAgendaEventListener extends
    DefaultAgendaEventListener {
  List<String> rulesFiredList = new ArrayList<String>();

  @Override
  public void afterActivationFired(AfterActivationFiredEvent event) {
    rulesFiredList.add(event.getActivation().getRule().getName());
  }

  public boolean isRuleFired(String ruleName) {
    for (String firedRuleName : rulesFiredList) {
      if (firedRuleName.equals(ruleName)) {
        return true;
      }
    }
    return false;
  }

  public void reset() {
    rulesFiredList.clear();
  }
}

Code listing 7: Agenda event listener that tracks all rules that have been fired.

DefaultAgendaEventListener comes from the org.drools.event.rule package that is part of the drools-api.jar file, as opposed to the org.drools.event package that is part of the old API in drools-core.jar. All Drools agenda event listeners must implement the AgendaEventListener interface. TrackingAgendaEventListener in the preceding code extends DefaultAgendaEventListener so that we don't have to implement all of the methods defined in the AgendaEventListener interface. Our listener just overrides the afterActivationFired method, which is called by Drools every time a rule's consequence has been executed. Our implementation of this method adds the fired rule's name to a list of fired rules—rulesFiredList.
Then the convenience method isRuleFired takes a ruleName as a parameter and checks whether this rule has been executed/fired. The reset method is useful for clearing out the state of this listener, for example, after session.fireAllRules is called.

Now, back to the test configuration setup. The last part of the initialize method from code listing 6 is the account object creation (account = new Account(); ...). This is for convenience, so that every test does not have to create one. The account balance is set to 1000. The account is inserted into the knowledge session and its FactHandle is stored so that the account object can be easily updated.

Testing the notification rule

The test infrastructure is now fully set up and we can write a test for the notification rule from code listing 4 as follows:

@Test
public void notification() throws Exception {
  session.fireAllRules();
  assertNotSame(Account.Status.BLOCKED, account.getStatus());
  entry = session.getWorkingMemoryEntryPoint("LostCardStream");
  entry.insert(new LostCardEvent(account.getNumber()));
  session.fireAllRules();
  assertSame(Account.Status.BLOCKED, account.getStatus());
}

Code listing 8: Notification rule's unit test (CepTest.java file).

The test first verifies that the account is not blocked by accident. Then it gets the LostCardStream entry point from the session by calling session.getWorkingMemoryEntryPoint("LostCardStream"). The code listing then demonstrates how an event can be inserted into the knowledge session through an entry point—entry.insert(new LostCardEvent(...)).

In a real application, you'll probably want to use the Drools Pipeline for inserting events into the knowledge session. It can be easily connected to an existing ESB (Enterprise Service Bus) or a JMS topic or queue. Drools entry points are thread-safe. Each entry point can be used by a different thread; however, no two threads can concurrently use the same entry point. In this case, it makes sense to start the engine in fireUntilHalt mode in a separate thread, like this:

new Thread(new Runnable() {
  public void run() {
    session.fireUntilHalt();
  }
}).start();

Code listing 9: Continuous execution of rules.

The engine will then continuously execute activations until the session.halt() method is called. The test then verifies that the status of the account is blocked. If we simply executed session.insert(new LostCardEvent(..)); the test would fail, because the rule wouldn't see the event.
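Because the session was configured with a pseudo clock, temporal windows (such as the ninety-second and 15-second windows in the requirements) can also be tested deterministically, without real waiting. A brief sketch of the idea (ours, not from the book):

import java.util.concurrent.TimeUnit;

import org.drools.time.SessionPseudoClock;

public class PseudoClockUsage {

    // Advances the session's pseudo clock between event insertions, so that
    // rules with temporal constraints can be exercised predictably in tests.
    public static void advanceBetweenEvents(SessionPseudoClock clock) {
        // ... insert the first event through an entry point ...
        clock.advanceTime(20, TimeUnit.SECONDS); // move 20 seconds forward
        // ... insert the second event; a 15-second duplicate-detection
        //     window would now no longer match the pair ...
    }
}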

Business Processes with BPEL

Packt
08 Sep 2010
10 min read
(For more resources on BPEL, SOA and Oracle see here.)

BPEL business process example

We describe an oversimplified scenario, in which the client invokes the business process, specifying the name of the employee, the destination, the departure date, and the return date. The BPEL business process first checks the employee travel status. We will assume that a service exists through which such a check can be made. Then the BPEL process will check the price of the flight ticket with two airlines—American Airlines and Delta Airlines. Again, we will suppose that both airline companies provide a service through which such checks can be made. Finally, the BPEL process will select the lower price and return the travel plan to the client.

For the purpose of this example, we first build a synchronous BPEL process, to maintain simplicity. This means that the client will wait for the response. Later in this article, we modify the example and make the BPEL process asynchronous.

We will assume that the service for checking the employee travel status is synchronous. This is reasonable because such data can be obtained immediately and returned to the caller. To acquire the plane ticket prices, we use asynchronous invocations. Again, this is reasonable because it might take a little longer to confirm the plane travel schedule. We assume that both airlines offer a service and that both Web Services are identical (provide equal port types and operations). This assumption simplifies our example. In real-world scenarios, you will usually not have a choice about the services, but will have to use whatever services are provided by your partners. If you have the luxury of designing the Web Services along with the BPEL process, consider which interface is best. Usually we use asynchronous services for long-lasting operations and synchronous services for operations that return a result in a relatively short time.

If we use asynchronous services, the BPEL process is usually asynchronous as well. In our example, we first develop a synchronous BPEL process that invokes two asynchronous airline Web Services. This is legal, but not recommended in real-world scenarios, since the client may have to wait for an arbitrarily long time. In the real world, the solution would be to develop an asynchronous BPEL process, which we will cover later in this article.

We invoke the Web Services of both airlines concurrently and asynchronously. This means that our BPEL process will have to implement the callback operation (and a port type) through which the airlines will return the flight ticket confirmation. Finally, the BPEL process returns the best airline ticket to the client. In this example, to maintain simplicity, we will not implement any fault handling, which is crucial in real-world scenarios.

Let's start by presenting the BPEL process activities using a UML activity diagram. In each activity, we have used a stereotype to indicate the BPEL operation used. Although the presented process might seem very simple, it offers a good start for learning BPEL.

To develop the BPEL process, we will go through the following steps:

1. Get familiar with the involved services
2. Define the WSDL for the BPEL process
3. Define partner link types
4. Define partner links
5. Declare variables
6. Write the process logic definition

Involved services

Before we can start writing the BPEL process definition, we have to get familiar with all of the services invoked from our business process. These services are sometimes called partner services. In our example, three services are involved:

1. The Employee Travel Status service
2. The American Airlines service
3. The Delta Airlines service

The two airline services share equal WSDL descriptions. The services used in this example are not real, so we will have to write the WSDLs and even implement the services to run the example. In real-world scenarios, we would obviously use real Web Services exposed by the partners involved in the business process. The services and the BPEL process example can be downloaded from http://www.packtpub.com. The example runs on the Oracle SOA Suite.

Web Service descriptions are available through WSDL. WSDL specifies the operations and port types Web Services offer, the messages they accept, and the types they define. We will now look at both Web Services.

Employee Travel Status service

Understanding the services that a business process interacts with is crucial to writing the BPEL process definition. Let's look into the details of our Employee Travel Status service. It provides the EmployeeTravelStatusPT port type, through which the employee travel status can be checked using the EmployeeTravelStatus operation. The operation returns the travel class an employee can use—economy, business, or first. This is shown in the following figure:
In our example, three services are involved: The Employee Travel Status service The American Airlines service The Delta Airlines service The two airline services share equal WSDL descriptions. The services used in this example are not real, so we will have to write WSDLs and even implement them to run the example. In real-world scenarios we would obviously use real Web Services exposed by partners involved in the business process. The services and the BPEL process example can be downloaded from http://www.packtpub.com The example runs on the Oracle SOA Suite. Web Service descriptions are available through WSDL. WSDL specifies the operations and port types Web Services offer, the messages they accept, and the types they define. We will now look at both Web Services. Employee Travel Status service Understanding the services that a business process interacts with is crucial to writing the BPEL process definition. Let's look into the details of our Employee Travel Status service. It provides the EmployeeTravelStatusPT port type through which the employee travel status can be checked using the EmployeeTravelStatus operation. The operation will return the travel class an employee can use—economy, business, or first. This is shown in the following figure: The operation is a synchronous request/response operation as we can see from the WSDL: <?xml version="1.0" encoding="utf-8" ?> <definitions targetNamespace="http://packtpub.com/service/employee/" > ... <portType name="EmployeeTravelStatusPT"> <operation name="EmployeeTravelStatus"> <input message="tns:EmployeeTravelStatusRequestMessage" /> <output message="tns:EmployeeTravelStatusResponseMessage" /> </operation> </portType> ... The EmployeeTravelStatus operation consists of an input and an output message. To maintain simplicity, the fault is not declared. The definitions of input and output messages are also a part of the WSDL: ... <message name="EmployeeTravelStatusRequestMessage"> <part name="employee" element="tns:Employee" /> </message> <message name="EmployeeTravelStatusResponseMessage"> <part name="travelClass" element="tns:TravelClass" /> </message> ... The EmployeeTravelStatusRequestMessage message has a single part—employee of element Employee with type EmployeeType, while the EmployeeTravelStatusResponseMessage has a part called travelClass, of element TravelClass and type TravelClassType. The EmployeeType and the TravelClassType types are defined within the WSDL under the &lttypes> element: ... <types> <xs:schema elementFormDefault="qualified" targetNamespace="http://packtpub.com/service/employee/"> <xs:complexType name="EmployeeType"> <xs:sequence> <xs:element name="FirstName" type="xs:string" /> <xs:element name="LastName" type="xs:string" /> <xs:element name="Department" type="xs:string" /> </xs:sequence> </xs:complexType> <xs:element name="Employee" type="EmployeeType"/> ... EmployeeType is a complex type and has three elements: first name, last name, and department name. TravelClassType is a simple type that uses the enumeration to list the possible classes: ... <xs:simpleType name="TravelClassType"> <xs:restriction base="xs:string"> <xs:enumeration value="Economy"/> <xs:enumeration value="Business"/> <xs:enumeration value="First"/> </xs:restriction> </xs:simpleType> <xs:element name="TravelClass" type="TravelClassType"/> </xs:schema> </types> ... Now let us look at the airline service. Airline service The Airline Service is an asynchronous web service. Therefore, it specifies two port types. 
The first, FlightAvailabilityPT, is used to check flight availability using the FlightAvailability operation. To return the result, the service specifies the second port type, FlightCallbackPT. This port type specifies the FlightTicketCallback operation. Although the Airline Service defines two port types, it implements only FlightAvailabilityPT. FlightCallbackPT is implemented by the BPEL process, which is the client of the web service. The architecture of the service is schematically shown as follows:

Flight Availability port type

FlightAvailability is an asynchronous operation, containing only the input message:

<?xml version="1.0" encoding="utf-8" ?>
<definitions targetNamespace="http://packtpub.com/service/airline/" >
  ...
  <portType name="FlightAvailabilityPT">
    <operation name="FlightAvailability">
      <input message="tns:FlightTicketRequestMessage" />
    </operation>
  </portType>
  ...

The definition of the input message is shown as follows. It consists of two parts—the flightData part and the travelClass part:

  <message name="FlightTicketRequestMessage">
    <part name="flightData" element="tns:FlightRequest" />
    <part name="travelClass" element="emp:TravelClass" />
  </message>

The travelClass part is the same as that used in the Employee Travel Status service. The flightData part is of element FlightRequest, which is defined as follows:

  ...
  <types>
    <xs:schema elementFormDefault="qualified"
               targetNamespace="http://packtpub.com/service/airline/">
      <xs:complexType name="FlightRequestType">
        <xs:sequence>
          <xs:element name="OriginFrom" type="xs:string" />
          <xs:element name="DestinationTo" type="xs:string" />
          <xs:element name="DesiredDepartureDate" type="xs:date" />
          <xs:element name="DesiredReturnDate" type="xs:date" />
        </xs:sequence>
      </xs:complexType>
      <xs:element name="FlightRequest" type="FlightRequestType"/>
  ...

FlightRequestType is a complex type with four elements, through which we specify the flight origin and destination, the desired departure date, and the desired return date.

Flight Callback port type

The Airline Service needs to specify another port type for the callback operation, through which the BPEL process receives the flight ticket response messages. The service only specifies this port type; it is implemented by the BPEL process. We define the FlightCallbackPT port type with the FlightTicketCallback operation, which has the TravelResponseMessage input message:

  ...
  <portType name="FlightCallbackPT">
    <operation name="FlightTicketCallback">
      <input message="tns:TravelResponseMessage" />
    </operation>
  </portType>
  ...

TravelResponseMessage consists of a single part called confirmationData:

  ...
  <message name="TravelResponseMessage">
    <part name="confirmationData" element="tns:FlightConfirmation" />
  </message>
  ...

The FlightConfirmation element and its type are defined in the schema section of the WSDL:

      <xs:complexType name="FlightConfirmationType">
        <xs:sequence>
          <xs:element name="FlightNo" type="xs:string" />
          <xs:element name="TravelClass" type="tns:TravelClassType" />
          <xs:element name="Price" type="xs:float" />
          <xs:element name="DepartureDateTime" type="xs:dateTime" />
          <xs:element name="ReturnDateTime" type="xs:dateTime" />
          <xs:element name="Approved" type="xs:boolean" />
        </xs:sequence>
      </xs:complexType>
      <xs:element name="FlightConfirmation" type="FlightConfirmationType"/>
    </xs:schema>
  </types>

Now that we are familiar with both services, we can define the BPEL process. Remember that our BPEL process is an actual web service. Therefore, we first have to write the WSDL for the BPEL process.

WSDL for the BPEL process

The business travel BPEL process is exposed as a service.
We need to define the WSDL for it. The process will have to receive messages from its clients and return results, so it has to expose a port type that will be used by the client to start the process and get the reply. We define the TravelApprovalPT port type with the TravelApproval operation, as shown in the following figure:

We have already said that the BPEL process is synchronous. The TravelApproval operation will be of the synchronous request/response type:

<?xml version="1.0" encoding="utf-8" ?>
<definitions targetNamespace="http://packtpub.com/bpel/travel/" >
  ...
  <portType name="TravelApprovalPT">
    <operation name="TravelApproval">
      <input message="tns:TravelRequestMessage" />
      <output message="aln:TravelResponseMessage" />
    </operation>
  </portType>
  ...

We also have to define the messages. The TravelRequestMessage consists of two parts:

employee: The employee data, which we reuse from the Employee Travel Status service definition
flightData: The flight data, which we reuse from the airline service definition

  ...
  <import namespace="http://packtpub.com/service/employee/"
          location="./Employee.wsdl"/>
  <import namespace="http://packtpub.com/service/airline/"
          location="./Airline.wsdl"/>
  ...
  <message name="TravelRequestMessage">
    <part name="employee" element="emp:Employee" />
    <part name="flightData" element="aln:FlightRequest" />
  </message>
  ...

For the output message, we use the same message used to return the flight information from the airline service: the TravelResponseMessage defined in the aln namespace. This is reasonable because the BPEL process will get the TravelResponseMessage from both airlines, select the most appropriate (the cheapest), and return the same message to the client. As we have already imported the Airline WSDL, we are done.

When writing the WSDL for the BPEL process, we usually do not define the binding (<binding>) and the service (<service>) sections. These are usually generated by the BPEL execution environment (BPEL server). Before we can start writing the BPEL process, we still need to define partner link types.
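As a forward-looking aside (not part of the original article), once the finished process is deployed, a client could invoke the synchronous TravelApproval operation like any other web service. The following JAX-WS sketch is illustrative only: the service name, port name, and endpoint URL are assumptions, since the binding and service sections are generated by the BPEL server, and the request payload is abbreviated.

import java.io.StringReader;
import java.net.URL;

import javax.xml.namespace.QName;
import javax.xml.transform.Source;
import javax.xml.transform.stream.StreamSource;
import javax.xml.ws.Dispatch;
import javax.xml.ws.Service;

public class TravelApprovalClient {
    public static void main(String[] args) throws Exception {
        String ns = "http://packtpub.com/bpel/travel/";
        // Hypothetical names: the actual ones come from the generated
        // <service> and <port> sections of the deployed WSDL.
        QName serviceName = new QName(ns, "TravelApprovalService");
        QName portName = new QName(ns, "TravelApprovalPort");
        URL wsdlUrl = new URL("http://localhost:8001/TravelApproval?WSDL"); // assumed endpoint

        Service service = Service.create(wsdlUrl, serviceName);
        Dispatch<Source> dispatch =
                service.createDispatch(portName, Source.class, Service.Mode.PAYLOAD);

        // Hand-built payload carrying the employee and flightData parts (abbreviated).
        String payload =
                "<TravelRequest xmlns=\"" + ns + "\">"
              + "  <!-- employee and flightData content goes here -->"
              + "</TravelRequest>";

        // Blocks until the synchronous process returns the TravelResponseMessage.
        Source response = dispatch.invoke(new StreamSource(new StringReader(payload)));
        System.out.println("Received response: " + response);
    }
}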


Human Interactions in BPEL

Packt
08 Sep 2010
13 min read
(For more resources on BPEL, SOA and Oracle see here.)

Human interactions in business processes

The main objective of BPEL has been to standardize process automation. BPEL business processes make use of services and externalize their functionality as services. BPEL processes are defined as a collection of activities through which services are invoked. BPEL does not make a distinction between services provided by applications and other interactions, such as human interactions, which are particularly important: real-world business processes often integrate not only systems and services, but also humans.

Human interactions in business processes can be very simple, such as the approval of certain tasks or decisions, or complex, such as delegation, renewal, escalation, nomination, chained execution, and so on. Human interactions are not limited to approvals and can include data entry, process monitoring and management, process initiation, exception handling, and so on.

Task approval is the simplest and probably the most common human interaction. In a business process for opening a new account, a human interaction might be required to decide whether the user is allowed to open the account. In a travel approval process, a human might approve the decision of which airline to buy the ticket from (as shown in the following figure).

If the situation is more complex, a business process might require several users to make approvals, either in sequence or in parallel. In sequential scenarios, the next user often wants to see the decision made by the previous user. Sometimes, particularly in parallel human interactions, users are not allowed to see the decisions taken by other users. This improves the decision potential. Sometimes one user does not even know which other users are involved, or whether any other users are involved at all.

A common scenario for involving more than one user is workflow with escalation. Escalation is typically used in situations where an activity does not fulfill a time constraint. In such a case, a notification is sent to one or more users. Escalations can be chained, going first to the first-line employees and advancing to senior staff if the activity is not fulfilled.

Sometimes it is difficult or impossible to define in advance which user should perform an interaction. In this case, a supervisor might manually nominate the task to other employees; the nomination can also be made by a group of users or by a decision-support system. In other scenarios, a business process may require a single user to perform several steps that can be defined in advance or during the execution of the process instance. Even more complex processes might require that one workflow is continued with another workflow.

Human interactions are not limited only to approvals; they may also include data entry or process management issues, such as process initiation, suspension, and exception management. This is particularly true for long-running business processes, where, for example, user exception handling can prevent costly process termination and the related compensation for activities that have already been completed successfully.

As a best practice for human workflows, it is usually not wise to associate human interactions directly with specific users; it is better to connect tasks to roles and then associate those roles with individual users.
This gives business processes greater flexibility, allowing any user with a certain role to interact with the process and enabling changes to users and roles to be made dynamically. To achieve this, the process has to gain access to users and roles stored in an enterprise directory, such as LDAP (Lightweight Directory Access Protocol).

Workflow theory has defined several workflow patterns, which specify the above-described scenarios in detail. Examples of workflow patterns include sequential workflow, parallel workflow, workflow with escalation, workflow with nomination, ad-hoc workflow, workflow continuation, and so on.

Human Tasks in BPEL

So far we have seen that human interaction in business processes can get quite complex. Although the BPEL specification does not specifically cover human interactions, BPEL is appropriate for human workflows. BPEL business processes are defined as collections of activities that invoke services. BPEL does not make a distinction between services provided by applications and other interactions, such as human interactions.

There are mainly two approaches to supporting human interactions in BPEL. The first approach is to use a human workflow service. Several vendors today have created workflow services that leverage the rich BPEL support for asynchronous services. In this fashion, people and manual tasks become just another asynchronous service from the perspective of the orchestrating process, and the BPEL processes stay 100% standard.

The other approach has been to standardize human interactions and go beyond plain service invocations. This approach resulted in workflow specifications emerging around BPEL, with the objective of standardizing the explicit inclusion of human tasks in BPEL processes. The BPEL4People specification emerged, originally put forth by IBM and SAP in July 2005. Other companies, such as Oracle, Active Endpoints, and Adobe, joined later. This specification is now being advanced within the OASIS BPEL4People Technical Committee. The BPEL4People specification contains two parts:

BPEL4People version 1.0 introduces BPEL extensions to address human interactions in BPEL as a first-class citizen. It defines a new type of basic activity, which uses human tasks as an implementation, and allows tasks to be specified local to a process or defined outside of the process definition. BPEL4People is based on the WS-HumanTask specification, which it uses for the actual specification of human tasks.

Web Services Human Task (WS-HumanTask) version 1.0 introduces the definition of human tasks, including their properties, behavior, and a set of operations used to manipulate them. It also introduces a coordination protocol to control the autonomy and lifecycle of service-enabled human tasks in an interoperable manner.

The most important extensions introduced in BPEL4People are people activities and people links. A people activity is a new BPEL activity used to define user interactions; in other words, tasks that a user has to perform. For each people activity, the BPEL server must create work items and distribute them to users eligible to execute them. People activities can have input and output variables and can specify deadlines.

To specify the implementation of people activities, BPEL4People introduced tasks. Tasks specify actions that users must perform. Tasks can have descriptions, priorities, deadlines, and other properties.
To represent tasks to users, we need a client application that provides a user interface and interacts with tasks. It can query available tasks, claim and revoke them, and complete or fail them.

To associate people activities and the related tasks with users or groups of users, BPEL4People introduced people links. People links are somewhat similar to partner links; they associate users with one or more people activities. People links are usually associated with generic human roles, such as process initiator, process stakeholder, owner, and administrator. The actual users associated with people activities can be determined at design time, deployment time, or runtime. BPEL4People anticipates the use of directories such as LDAP to select users. However, it doesn't define the query language used to select users; rather, it foresees the use of LDAP filters, SQL, XQuery, or other methods.

BPEL4People proposes complex extensions to the BPEL specification. However, so far it is still quite high-level and doesn't yet specify the exact syntax of the new activities mentioned above. Until the specification becomes more concrete, we don't expect vendors to implement the proposed extensions. But while BPEL4People is early in the standardization process, it shows a great deal of promise.

The BPEL4People proposal raises an important question: Is it necessary to introduce such complex extensions to BPEL to cover user interactions? Some vendor solutions model user interactions as just another web service, with well-defined interfaces for both BPEL processes and client applications. This approach does not require any changes to BPEL. To become portable, it would only need an industry-wide agreement on the two interfaces. And, of course, both interfaces can be specified with WSDL, which gives developers great flexibility and lets them use practically any environment, language, or platform that supports Web Services.

Clearly, a single standard approach has not yet been adopted for extending BPEL to include Human Tasks and workflow services. However, this does not mean that developers cannot use BPEL to develop business processes with user interactions.

Human Task integration with BPEL

To interleave user interactions with service invocations in BPEL processes, we can use a workflow service that interacts with BPEL using standard WSDL interfaces. This way, the BPEL process can assign user tasks and wait for responses by invoking the workflow service using the same syntax as for any other service. The BPEL process can also perform more complex operations such as updating, completing, renewing, routing, and escalating tasks.

After the BPEL process has assigned tasks to users, users can act on the tasks by using the appropriate applications. The applications communicate with the workflow service by using WSDL interfaces or another API (such as Java) to acquire the list of tasks for selected users, render appropriate user interfaces, and return results to the workflow service, which forwards them to the BPEL process. User applications can also perform other operations, such as reassigning, escalating, routing, suspending, resuming, and withdrawing tasks. Finally, the workflow service may allow other communication channels, such as e-mail and SMS, as shown in the following figure:

Oracle Human Workflow concepts

Oracle SOA Suite 11g provides the Human Workflow component, which enables including human interaction in BPEL processes in a relatively easy way.
Oracle Human Workflow concepts

Oracle SOA Suite 11g provides the Human Workflow component, which enables including human interaction in BPEL processes in a relatively easy way. The Human Workflow component consists of different services that handle various aspects of human interaction with business processes and expose their interfaces through WSDL; therefore, BPEL processes invoke them just like any other service. The following figure shows the overall architecture of the Oracle Workflow services:

As we can see in the previous figure, the Workflow consists of the following services:

Task Service: Exposes operations for task state management, such as operations to update a task, complete a task, escalate a task, reassign a task, and so on. When we add a human task to the BPEL process, the corresponding partner link for the Task Service is automatically created.
Task Assignment Service: Provides functionality to route, escalate, and reassign tasks, and more.
Task Query Service: Enables retrieving the task list for a user based on a search criterion.
Task Metadata Service: Enables retrieving the task metadata.
Identity Service: Provides authentication and authorization of users and lookup of user properties and privileges.
Notification Service: Enables sending notifications to users over various channels (e-mail, voice message, IM, SMS, and so on).
User Metadata Service: Manages metadata related to workflow users, such as user work queues, preferences, and so on.
Runtime Configuration Service: Provides functionality for managing metadata used in the task service runtime environment.
Evidence Store Service: Supports management of digitally-signed workflow tasks.

BPEL processes use the Task Service to assign tasks to users. More specifically, tasks can be assigned to:

Users: Users are defined in an identity store configured with the SOA infrastructure.
Groups: Groups contain individual users, who can claim a task and act upon it.
Application roles: Used to logically group users and other roles. These roles are application specific and are not stored in the identity store.

Assigning tasks to groups or roles is more flexible, as every user in a certain group (role) can review the task and complete it. Oracle SOA Suite 11g provides three methods for assigning users, groups, and application roles to tasks:

Static assignment: Static users, groups, or application roles can be assigned at design time.
Dynamic assignment: We can define an XPath expression to determine the task participant at runtime.
Rule-based assignment: We can create a list of participants with complex expressions.

Once the user has completed the task, the BPEL process receives a callback from the Task Service with the result of the user action, and the BPEL process continues to execute.

The Oracle Workflow component provides several possibilities for how users can review the tasks that have been assigned to them and take the corresponding actions. The most straightforward approach is to use the Oracle BPM Worklist application. This application comes with Oracle SOA Suite 11g and allows users to review the tasks, see the task details, and select the decision to complete the task.

If the Oracle BPM Worklist application is not appropriate, we can develop our own user interface in Java (using JSP, JSF, Swing, and so on) or almost any other environment that supports Web Services (such as .NET, for example). In this respect, the Workflow service is very flexible, and we can use a portal, such as Oracle Portal, a web application, or almost any other application to review the tasks. The third possibility is to use e-mail for task reviews, which goes over the Notification Service.
Workflow patterns

To simplify the development of workflows, Oracle SOA Suite 11g provides a library of workflow patterns (participant types). Workflow patterns define typical scenarios of human interactions with BPEL processes. The following participant types are supported:

Single approver: Used when a participant maps to a user, group, or role.
Parallel: Used if multiple users have to act in parallel (for example, if multiple users have to provide their opinion or vote). The percentage of required user responses can be specified.
Serial: Used if multiple users have to act in a sequence. A management chain or a list of users can be specified.
FYI (For Your Information): Used if a user only needs to be notified about a task, but a user response is not required.

With these, we can realize various workflow patterns, such as:

Simple workflow: Used if a single user action is required, such as a confirmation, a decision, and so on. A timeout can also be specified. Simple workflow has two extension patterns:
  Escalation: Provides the ability to escalate the task to another user or role if the original user does not complete the task in the specified amount of time.
  Renewal: Provides the ability to extend the timeout if the user does not complete the task in the specified time.
Sequential workflow: Used if multiple users have to act in a sequence. A management chain or a list of users can be specified. Sequential workflow has one extension pattern:
  Escalation: Same functionality as above.
Parallel workflow: Used if multiple users have to act in parallel (for example, if multiple users have to provide their opinion or vote). The percentage of required user responses can be specified. This pattern has an extension pattern:
  Final reviewer: Used when a final reviewer has to act after the parallel users have provided feedback.
Ad-hoc (dynamic) workflow: Used to assign the task to one user, who can then route the task to other users. The task is completed when the user does not route it forward.
FYI workflow: Used if a user only needs to be notified about a task, but a user response is not required.
Task continuation: Used to build complex workflow patterns as a chain of the simple patterns described above.
BPEL4People

Packt
08 Sep 2010
10 min read
Introduction

BPEL4People has been developed to provide extensive support for human interactions in BPEL processes and to standardize those human interactions. Originally, the BPEL4People specification was defined by IBM and SAP. Other companies, such as Oracle, Active Endpoints, and Adobe, have also joined. Today, this specification is being advanced within the OASIS BPEL4People Technical Committee.

The BPEL4People specification contains two parts:

BPEL4People version 1.0, which introduces BPEL extensions to address human interactions in BPEL
Web Services Human Task (WS-HumanTask) version 1.0, which introduces the definition of human tasks

BPEL4People is defined in such a way that it is layered on top of the BPEL language. We will now have a brief look at WS-HumanTask and then at BPEL4People.

Brief look at WS-HumanTask

The WS-HumanTask specification introduces Human Tasks to BPEL. Human Tasks are services implemented by humans. A Human Task has two interfaces. One interface exposes the service offered by the task, like a translation service or an approval service. The second interface allows people to deal with tasks, for example to query for human tasks waiting for them, and to work on these tasks.

This is very similar to human tasks in WebSphere, with two main differences: WS-HumanTask standardizes tasks among different vendors, and WS-HumanTask introduces new activities for specifying the properties of human tasks. In WebSphere, we had to use the Integration Developer GUI instead.

WS-HumanTask makes a distinction between Human Tasks and notifications. Notifications are a special type of Human Task that allows the sending of information about noteworthy business events to users. Notifications are always delivered one-way; no response to a notification is expected.

Overall structure

The overall structure of the human interactions definition is as follows:

<?xml version="1.0" encoding="UTF-8"?>
<htd:humanInteractions
    targetNamespace="anyURI"
    expressionLanguage="anyURI"?
    queryLanguage="anyURI"?>
  <htd:extensions>?
    <htd:extension namespace="anyURI" mustUnderstand="yes|no"/>+
  </htd:extensions>
  <htd:import namespace="anyURI"? location="anyURI"? importType="anyURI"/>*
  <htd:logicalPeopleGroups>?
    <htd:logicalPeopleGroup name="NCName" reference="QName"?>+
      <htd:parameter name="NCName" type="QName"/>*
    </htd:logicalPeopleGroup>
  </htd:logicalPeopleGroups>
  <htd:tasks>?
    <htd:task name="NCName">+
      ...
    </htd:task>
  </htd:tasks>
  <htd:notifications>?
    <htd:notification name="NCName">+
      ...
    </htd:notification>
  </htd:notifications>
</htd:humanInteractions>

Human Tasks

The most important part is the definition of the Human Task. The definition includes the following:

Interface
Priority
People assignments
Delegation
Presentation elements
Outcome
Search priorities
Renderings
Deadlines (start and completion deadlines)

The WS-HumanTask specification foresees the following syntax to define a human task:

<htd:task name="NCName">
  <htd:interface portType="QName" operation="NCName"
                 responsePortType="QName"?
                 responseOperation="NCName"?/>
  <htd:priority expressionLanguage="anyURI"?>?
    integer-expression
  </htd:priority>
  <htd:peopleAssignments>
    ...
  </htd:peopleAssignments>
  <htd:delegation
      potentialDelegatees="anybody|nobody|potentialOwners|other">?
    <htd:from>?
      ...
    </htd:from>
  </htd:delegation>
  <htd:presentationElements>
    ...
  </htd:presentationElements>
  <htd:outcome part="NCName" queryLanguage="anyURI">?
    queryContent
  </htd:outcome>
  <htd:searchBy expressionLanguage="anyURI"?>?
    expression
  </htd:searchBy>
  <htd:renderings>?
    <htd:rendering type="QName">+
      ...
    </htd:rendering>
  </htd:renderings>
  <htd:deadlines>?
    <htd:startDeadline>*
      ...
    </htd:startDeadline>
    <htd:completionDeadline>*
      ...
    </htd:completionDeadline>
  </htd:deadlines>
</htd:task>

Escalations

Within the deadlines, escalations can be defined. An example of defining an escalation is shown as follows:

<htd:escalation name="highPrio">
  <htd:condition>
    <![CDATA[
      (htd:getInput("OrderRequest")/amount < 1000
        && htd:getInput("OrderRequest")/prio <= 10)
    ]]>
  </htd:condition>
  <htd:notification name="ClaimApprovalOverdue">
    <htd:interface portType="tns:ClaimsHandlingPT" operation="escalate"/>
    <htd:peopleAssignments>
      <htd:recipients>
        <htd:from logicalPeopleGroup="Manager">
          <htd:argument name="region">
            htd:getInput("OrderRequest")/region
          </htd:argument>
        </htd:from>
      </htd:recipients>
    </htd:peopleAssignments>
    <htd:presentationElements>
      <htd:name>
        Order approval overdue.
      </htd:name>
    </htd:presentationElements>
  </htd:notification>
</htd:escalation>

In a similar way, a reassignment could be done.

Notifications

Notifications are defined with the following:

Interface
Priority
People assignments
Presentation elements
Renderings

An example is shown as follows:

<htd:notification name="NCName">
  <htd:interface portType="QName" operation="NCName"/>
  <htd:priority expressionLanguage="anyURI"?>?
    integer-expression
  </htd:priority>
  <htd:peopleAssignments>
    <htd:recipients>
      ...
    </htd:recipients>
    <htd:businessAdministrators>?
      ...
    </htd:businessAdministrators>
  </htd:peopleAssignments>
  <htd:presentationElements>
    ...
  </htd:presentationElements>
  <htd:renderings>?
    ...
  </htd:renderings>
</htd:notification>

Programming interface

The WS-HumanTask specification also defines the API for applications that are involved with the life cycle of a human task or a notification. It provides several types of operations, including:

Participant operations, such as operations for claiming tasks, starting, stopping, and suspending tasks, completing tasks, setting priority, delegating, and so on
Simple query operations, such as getMyTasks and getMyTaskAbstracts
Advanced query operations, which provide several possibilities for retrieving the tasks
Administrative operations for nominating and activating tasks, and setting generic human roles

The specification also defines XPath extension functions to retrieve Human Task properties. Now that we are familiar with WS-HumanTask, let us have a brief look at the BPEL4People specification.

Brief look at BPEL4People

BPEL4People is a BPEL extension, which adds several activities to the BPEL language. The most important extensions introduced in BPEL4People are people activities and people links.

People activities are used to define human interactions. For each people activity, the BPEL server must create work items and distribute them to users eligible to execute them. People activities can have input and output variables and can specify deadlines. This is very similar to what we have seen in the WebSphere support for human tasks.

To specify the implementation of people activities, BPEL4People introduces tasks. Tasks specify actions that users must perform. Tasks can have descriptions, priorities, deadlines, and other properties. Tasks can be represented inline or using WS-HumanTask.

To associate people activities and the related tasks with users or groups of users, BPEL4People introduces people links. They associate users with one or more people activities.
People links are usually associated with generic human roles, such as process initiator, process stakeholders, owners, and administrators.

Overall structure

The overall structure of the BPEL4People extensions is as follows:

<bpel:process ...>
  ...
  <bpel:extensions>
    <bpel:extension namespace="http://www.example.org/BPEL4People"
                    mustUnderstand="yes"/>
    <bpel:extension namespace="http://www.example.org/WS-HT"
                    mustUnderstand="yes"/>
  </bpel:extensions>
  <bpel:import importType="http://www.example.org/WS-HT" .../>
  <b4p:humanInteractions>?
    <htd:logicalPeopleGroups>?
      <htd:logicalPeopleGroup name="NCName">+
        ...
      </htd:logicalPeopleGroup>
    </htd:logicalPeopleGroups>
    <htd:tasks>?
      <htd:task name="NCName">+
        ...
      </htd:task>
    </htd:tasks>
    <htd:notifications>?
      <htd:notification name="NCName">+
        ...
      </htd:notification>
    </htd:notifications>
  </b4p:humanInteractions>
  <b4p:peopleAssignments>
    ...
  </b4p:peopleAssignments>
  ...
  <bpel:extensionActivity>
    <b4p:peopleActivity name="NCName" ...>
      ...
    </b4p:peopleActivity>
  </bpel:extensionActivity>
  ...
</bpel:process>

Let us have a look at the new elements:

The element <b4p:humanInteractions> contains declarations of elements from the WS-HumanTask namespace, such as <htd:logicalPeopleGroups>, <htd:tasks>, and <htd:notifications>.
The element <htd:logicalPeopleGroup> specifies a logical people group used in an inline human task or a people activity.
The <htd:task> element is used to provide the definition of an inline human task.
The <htd:notification> element is used to provide the definition of an inline notification.
The element <b4p:peopleAssignments> is used to assign people to process-related generic human roles.
The new activity <b4p:peopleActivity> is used to model human interactions within BPEL processes.

People assignments

BPEL4People defines generic human roles. These roles define what a person or a group of people can do with the process instance. The specification defines three process-related generic human roles:

Process initiator: The person who triggered the process instance
Process stakeholders: People who can influence the progress of a process instance, for example, by adding ad-hoc attachments, forwarding a task, or simply observing the progress of the process instance
Business administrators: People allowed to perform administrative actions on the business process instances

The syntax of people assignments looks like this:

<b4p:peopleAssignments>
  <b4p:processInitiator>?
    <htd:from ...>
      ...
    </htd:from>
  </b4p:processInitiator>
  <b4p:processStakeholders>?
    <htd:from ...>
      ...
    </htd:from>
  </b4p:processStakeholders>
  <b4p:businessAdministrators>?
    <htd:from ...>
      ...
    </htd:from>
  </b4p:businessAdministrators>
</b4p:peopleAssignments>

People activities

People activities are used to integrate human interactions within BPEL processes. A people activity can be integrated with an inline Human Task or with a standalone Human Task. An inline task can be defined as part of a people activity. A standalone Human Task is defined separately and can be used several times. To specify Human Tasks, we use the WS-HumanTask specification. The overall syntax is as follows:

<b4p:peopleActivity name="NCName"
                    inputVariable="NCName"?
                    outputVariable="NCName"?
                    isSkipable="xsd:boolean"?
                    standard-attributes>
  standard-elements
  ( <htd:task>...</htd:task>
  | <b4p:localTask>...</b4p:localTask>
  | <b4p:remoteTask>...</b4p:remoteTask>
  | <htd:notification>...</htd:notification>
  | <b4p:localNotification>...</b4p:localNotification>
  | <b4p:remoteNotification>...</b4p:remoteNotification>
  )
  <b4p:scheduledActions>?
    ...
  </b4p:scheduledActions>
  <bpel:toParts>?
    <bpel:toPart part="NCName" fromVariable="BPELVariableName"/>+
  </bpel:toParts>
  <bpel:fromParts>?
    <bpel:fromPart part="NCName" toVariable="BPELVariableName"/>+
  </bpel:fromParts>
  <b4p:attachmentPropagation fromProcess="all|none"
                             toProcess="all|newOnly|none"/>?
</b4p:peopleActivity>

Summary

In this article, we have covered the BPEL4People and WS-HumanTask specifications. These specifications have standardized the approach to human interactions in BPEL and made it portable across different SOA process servers. We have reviewed both specifications and have seen that there are no major conceptual differences between the current Oracle support and the approach taken in the specifications. At the time of writing, it was not yet clear how much support these specifications will get from SOA platform vendors.

Further resources on this subject:

Human Interactions in BPEL [Article]
Business Processes with BPEL [Article]
Web Services, SOA, and WS-BPEL Technologies [Article]
SOA—Service Oriented Architecture [Article]
Oracle Web Services Manager: Authentication and Authorization [Article]
Python Multimedia: Animation Examples using Pyglet

Packt
31 Aug 2010
7 min read
(For more resources on Python, see here.)

Single image animation

Imagine that you are creating a cartoon movie where you want to animate the motion of an arrow or a bullet hitting a target. In such cases, typically it is just a single image. The desired animation effect is accomplished by performing appropriate translation or rotation of the image.

Time for action – bouncing ball animation

Let's create a simple animation of a 'bouncing ball'. We will use a single image file, ball.png, which can be downloaded from the Packt website. The dimensions of this image in pixels are 200x200, created on a transparent background. The following screenshot shows this image opened in GIMP image editor. The three dots on the ball identify its side. We will see why this is needed. Imagine this as a ball used in a bowling game.

The image of a ball opened in GIMP appears as shown in the preceding image. The ball size in pixels is 200x200.

Download the files SingleImageAnimation.py and ball.png from the Packt website. Place the ball.png file in a sub-directory 'images' within the directory in which SingleImageAnimation.py is saved.

The following code snippet shows the overall structure of the code.

1  import pyglet
2  import time
3
4  class SingleImageAnimation(pyglet.window.Window):
5      def __init__(self, width=600, height=600):
6          pass
7      def createDrawableObjects(self):
8          pass
9      def adjustWindowSize(self):
10         pass
11     def moveObjects(self, t):
12         pass
13     def on_draw(self):
14         pass
15 win = SingleImageAnimation()
16 # Set window background color to gray.
17 pyglet.gl.glClearColor(0.5, 0.5, 0.5, 1)
18
19 pyglet.clock.schedule_interval(win.moveObjects, 1.0/20)
20
21 pyglet.app.run()

Although it is not required, we will encapsulate event handling and other functionality within a class SingleImageAnimation. The program to be developed is short, but in general, it is a good coding practice. It will also be good for any future extension to the code. An instance of SingleImageAnimation is created on line 15. This class is inherited from pyglet.window.Window. It encapsulates the functionality we need here. The API method on_draw is overridden by the class. on_draw is called when the window needs to be redrawn. Note that we no longer need a decorator statement such as @win.event above the on_draw method because the window API method is simply overridden by this inherited class.

The constructor of the class SingleImageAnimation is as follows:

1  def __init__(self, width=None, height=None):
2      pyglet.window.Window.__init__(self,
3                                    width=width,
4                                    height=height,
5                                    resizable=True)
6      self.drawableObjects = []
7      self.rising = False
8      self.ballSprite = None
9      self.createDrawableObjects()
10     self.adjustWindowSize()

As mentioned earlier, the class SingleImageAnimation inherits pyglet.window.Window. However, its constructor doesn't take all the arguments supported by its superclass. This is because we don't need to change most of the default argument values. If you want to extend this application further and need these arguments, you can do so by adding them as __init__ arguments.

The constructor initializes some instance variables and then calls methods to create the animation sprite and resize the window, respectively. The method createDrawableObjects creates a sprite instance using the ball.png image.
1  def createDrawableObjects(self):
2      """
3      Create sprite objects that will be drawn within the
4      window.
5      """
6      ball_img = pyglet.image.load('images/ball.png')
7      ball_img.anchor_x = ball_img.width / 2
8      ball_img.anchor_y = ball_img.height / 2
9
10     self.ballSprite = pyglet.sprite.Sprite(ball_img)
11     self.ballSprite.position = (
12         self.ballSprite.width + 100,
13         self.ballSprite.height*2 - 50)
14     self.drawableObjects.append(self.ballSprite)

The anchor_x and anchor_y properties of the image instance are set such that the image has an anchor exactly at its center. This will be useful while rotating the image later. On line 10, the sprite instance self.ballSprite is created. Later, we will be setting the width and height of the Pyglet window as three times the sprite width and three times the sprite height. The position of the image within the window is set on line 11. The initial position is chosen as shown in the next screenshot. In this case, there is only one Sprite instance. However, to make the program more general, a list of drawable objects called self.drawableObjects is maintained.

To continue the discussion from the previous step, we will now review the on_draw method.

def on_draw(self):
    self.clear()
    for d in self.drawableObjects:
        d.draw()

As mentioned previously, the on_draw function is an API method of class pyglet.window.Window that is called when a window needs to be redrawn. This method is overridden here. The self.clear() call clears the previously drawn contents within the window. Then, all the Sprite objects in the list self.drawableObjects are drawn in the for loop.

The preceding image illustrates the initial ball position in the animation.

The method adjustWindowSize sets the width and height parameters of the Pyglet window. The code is self-explanatory:

def adjustWindowSize(self):
    w = self.ballSprite.width * 3
    h = self.ballSprite.height * 3
    self.width = w
    self.height = h

So far, we have set up everything for the animation to play. Now comes the fun part. We will change the position of the sprite representing the image to achieve the animation effect. During the animation, the image will also be rotated, to give it the natural feel of a bouncing ball.

1  def moveObjects(self, t):
2      if self.ballSprite.y - 100 < 0:
3          self.rising = True
4      elif self.ballSprite.y > self.ballSprite.height*2 - 50:
5          self.rising = False
6
7      if not self.rising:
8          self.ballSprite.y -= 5
9          self.ballSprite.rotation -= 6
10     else:
11         self.ballSprite.y += 5
12         self.ballSprite.rotation += 5

This method is scheduled to be called 20 times per second using the following code in the program.

pyglet.clock.schedule_interval(win.moveObjects, 1.0/20)

To start with, the ball is placed near the top. The animation should be such that it gradually falls down, hits the bottom, and bounces back. After this, it continues its upward journey to hit a boundary somewhere near the top and again it begins its downward journey. The code block from lines 2 to 5 checks the current y position of self.ballSprite. If it has hit the upward limit, the flag self.rising is set to False. Likewise, when the lower limit is hit, the flag is set to True. The flag is then used by the next code snippet to increment or decrement the y position of self.ballSprite.

The lines that modify self.ballSprite.rotation rotate the Sprite instance. The current rotation angle is incremented or decremented by the given value. This is the reason why we set the image anchors, anchor_x and anchor_y, at the center of the image. The Sprite object honors these image anchors.
If the anchors are not set this way, the ball will be seen wobbling in the resultant animation.

Once all the pieces are in place, run the program from the command line as:

$python SingleImageAnimation.py

This will pop up a window that will play the bouncing ball animation. The next illustration shows some intermediate frames from the animation while the ball is falling down.

What just happened?

We learned how to create an animation using just a single image. The image of a ball was represented by a sprite instance. This sprite was then translated and rotated on the screen to accomplish a bouncing ball animation. The whole functionality, including the event handling, was encapsulated in the class SingleImageAnimation.
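As a small, optional extension to this recipe, the following sketch shows one way to pause and resume the animation with the spacebar. It reuses the SingleImageAnimation class defined above; the PausableAnimation subclass and its paused flag are illustrative additions of ours, while on_key_press, pyglet.window.key.SPACE, pyglet.clock.unschedule, and pyglet.clock.schedule_interval are standard Pyglet APIs.

import pyglet

class PausableAnimation(SingleImageAnimation):
    def __init__(self, width=600, height=600):
        SingleImageAnimation.__init__(self, width, height)
        self.paused = False

    def on_key_press(self, symbol, modifiers):
        # Toggle the scheduled moveObjects callback on spacebar presses.
        if symbol == pyglet.window.key.SPACE:
            if self.paused:
                pyglet.clock.schedule_interval(self.moveObjects, 1.0/20)
            else:
                pyglet.clock.unschedule(self.moveObjects)
            self.paused = not self.paused

Because the class overrides the window's on_key_press API method, no @win.event decorator is needed, just as with on_draw above.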
Python Multimedia: Fun with Animations using Pyglet

Packt
31 Aug 2010
8 min read
(For more resources on Python, see here.)

So let's get on with it.

Installation prerequisites

We will cover the prerequisites for the installation of Pyglet in this section.

Pyglet

Pyglet provides an API for multimedia application development using Python. It is an OpenGL-based library, which works on multiple platforms. It is primarily used for developing gaming applications and other graphically-rich applications. Pyglet can be downloaded from http://www.pyglet.org/download.html. Install Pyglet version 1.1.4 or later. The Pyglet installation is pretty straightforward.

Windows platform

For Windows users, the Pyglet installation is straightforward: use the binary distribution Pyglet 1.1.4.msi or later. You should have Python 2.6 installed. For Python 2.4, there are some more dependencies. We won't discuss them in this article, because we are using Python 2.6 to build multimedia applications. If you install Pyglet from the source, see the instructions under the next sub-section, Other platforms.

Other platforms

The Pyglet website provides a binary distribution file for Mac OS X. Download and install pyglet-1.1.4.dmg or later.

On Linux, install Pyglet 1.1.4 or later if it is available in the package repository of your operating system. Otherwise, it can be installed from the source tarball as follows:

Download and extract the tarball pyglet-1.1.4.tar.gz or a later version.
Make sure that python is a recognizable command in the shell. Otherwise, set the PYTHONPATH environment variable to the correct Python executable path.
In a shell window, change to the mentioned extracted directory and then run the following command:

python setup.py install

Review the succeeding installation instructions using the readme/install instruction files in the Pyglet source tarball.

If you have the package setuptools (http://pypi.python.org/pypi/setuptools), the Pyglet installation should be very easy. However, for this, you will need a runtime egg of Pyglet. But the egg file for Pyglet is not available at http://pypi.python.org. If you get hold of a Pyglet egg file, it can be installed by running the following command on Linux or Mac OS X. You will need administrator access to install the package:

$sudo easy_install -U pyglet

Summary of installation prerequisites

Python
  Download location: http://python.org/download/releases/
  Version: 2.6.4 (or any 2.6.x)
  Windows platform: Install using the binary distribution.
  Linux/Unix/OS X platforms: Install from binary; also install additional developer packages (for example, with python-devel in the package name in an rpm-based Linux distribution), or build and install from the source tarball.

Pyglet
  Download location: http://www.pyglet.org/download.html
  Version: 1.1.4 or later
  Windows platform: Install using the binary distribution (the .msi file).
  Linux/Unix/OS X platforms: Mac: install using the disk image file (.dmg file). Linux: build and install using the source tarball.

Testing the installation

Before proceeding further, ensure that Pyglet is installed properly. To test this, just start Python from the command line and type the following:

>>>import pyglet

If this import is successful, we are all set to go!

A primer on Pyglet

Pyglet provides an API for multimedia application development using Python. It is an OpenGL-based library that works on multiple platforms. It is primarily used for developing gaming and other graphically-rich applications. We will cover some important aspects of the Pyglet framework.

Important components

We will briefly discuss some of the important modules and packages of Pyglet that we will use.
Note that this is just a tiny chunk of the Pyglet framework. Please review the Pyglet documentation to know more about its capabilities, as this is beyond the scope of this article.

Window

The pyglet.window.Window module provides the user interface. It is used to create a window with an OpenGL context. The Window class has API methods to handle various events such as mouse and keyboard events. The window can be viewed in normal or full screen mode. Here is a simple example of creating a Window instance. You can define a size by specifying width and height arguments in the constructor.

win = pyglet.window.Window()

The background color for the image can be set using the OpenGL call glClearColor, as follows:

pyglet.gl.glClearColor(1, 1, 1, 1)

This sets a white background color. The first three arguments are the red, green, and blue color values, whereas the last value represents the alpha. The following code will set up a gray background color.

pyglet.gl.glClearColor(0.5, 0.5, 0.5, 1)

The following illustration shows a screenshot of an empty window with a gray background color.

Image

The pyglet.image module enables the drawing of images on the screen. The following code snippet shows a way to create an image and display it at a specified position within the Pyglet window.

img = pyglet.image.load('my_image.bmp')
x, y, z = 0, 0, 0
img.blit(x, y, z)

A later section will cover some important operations supported by the pyglet.image module.

Sprite

This is another important module. It is used to display an image or an animation frame within a Pyglet window discussed earlier. It is an image instance that allows us to position an image anywhere within the Pyglet window. A sprite can also be rotated and scaled. It is possible to create multiple sprites of the same image and place them at different locations and with different orientations inside the window.

Animation

The Animation module is a part of the pyglet.image package. As the name indicates, pyglet.image.Animation is used to create an animation from one or more image frames. There are different ways to create an animation. For example, it can be created from a sequence of images or using AnimationFrame objects. An animation sprite can be created and displayed within the Pyglet window.

AnimationFrame

This creates a single frame of an animation from a given image. An animation can be created from such AnimationFrame objects. The following line of code shows an example.

animation = pyglet.image.Animation(anim_frames)

anim_frames is a list containing instances of AnimationFrame.

Clock

Among many other things, this module is used for scheduling functions to be called at a specified time. For example, the following code calls a method moveObjects ten times every second.

pyglet.clock.schedule_interval(moveObjects, 1.0/10)

Displaying an image

In the Image sub-section, we learned how to load an image using image.blit. However, image blitting is a less efficient way of drawing images. There is a better and preferred way to display the image: by creating an instance of Sprite. Multiple Sprite objects can be created for drawing the same image. For example, the same image might need to be displayed at various locations within the window. Each of these images should be represented by separate Sprite instances. The following simple program just loads an image and displays the Sprite instance representing this image on the screen.
1  import pyglet
2
3  car_img = pyglet.image.load('images/car.png')
4  carSprite = pyglet.sprite.Sprite(car_img)
5  window = pyglet.window.Window()
6  pyglet.gl.glClearColor(1, 1, 1, 1)
7
8  @window.event
9  def on_draw():
10     window.clear()
11     carSprite.draw()
12
13 pyglet.app.run()

On line 3, the image is opened using the pyglet.image.load call. A Sprite instance corresponding to this image is created on line 4. The code on line 6 sets a white background for the window. on_draw is an API method that is called when the window needs to be redrawn. Here, the image sprite is drawn on the screen. The next illustration shows a loaded image within a Pyglet window.

In various examples in this article, the file path strings are hardcoded. We have used forward slashes for the file path. Although this works on the Windows platform, the convention there is to use backward slashes. For example, images/car.png is represented as images\car.png. Additionally, you can also specify a complete path to the file by using the os.path.join method in Python. Regardless of what slashes you use, os.path.normpath will make sure it modifies the slashes to fit the ones used for the platform. The use of os.path.normpath is illustrated in the following snippet:

import os
original_path = 'C:/images/car.png'
new_path = os.path.normpath(original_path)

The preceding image illustrates a Pyglet window showing a still image.

Mouse and keyboard controls

The Window module of Pyglet implements some API methods that enable user input to a playing animation. API methods such as on_mouse_press and on_key_press are used to capture mouse and keyboard events during the animation. These methods can be overridden to perform a specific operation; a short sketch follows the next sub-section.

Adding sound effects

The media module of Pyglet supports audio and video playback. The following code loads a media file and plays it during the animation.

1 background_sound = pyglet.media.load(
2     'C:/AudioFiles/background.mp3',
3     streaming=False)
4 background_sound.play()

The second optional argument, provided on line 3, decodes the media file completely in memory at the time the media is loaded. This is important if the media needs to be played several times during the animation. The API method play() starts streaming the specified media file.
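As promised above, here is a minimal sketch of handling keyboard and mouse events with the @window.event decorator. The window and the printed messages are illustrative only; the handler signatures and the key and mouse constants are standard Pyglet APIs.

import pyglet

window = pyglet.window.Window()

@window.event
def on_key_press(symbol, modifiers):
    # React to the spacebar; other keys are ignored in this sketch.
    if symbol == pyglet.window.key.SPACE:
        print "Spacebar pressed"

@window.event
def on_mouse_press(x, y, button, modifiers):
    # Report the position of left mouse button clicks.
    if button == pyglet.window.mouse.LEFT:
        print "Left click at", x, y

pyglet.app.run()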
Microsoft LightSwitch Application using SQL Azure Database

Packt
30 Aug 2010
3 min read
(For more resources on Microsoft, see here.)

Your computer has to satisfy the system requirements that you can look up at the product site (while downloading), and you should have an account on Microsoft Windows Azure Services. Although this article retrieves data from SQL Azure, you can retrieve data from a local server or other data sources as well. However, it is presently limited to SQL Server databases. The article content was developed using Microsoft LightSwitch Beta 1 and a SQL Azure database, on an Acer 4810TZ-4011 notebook with the Windows 7 Ultimate OS.

Installing Microsoft LightSwitch

The LightSwitch beta is now available at this site; the file name is vs_vslsweb.exe:

http://www.microsoft.com/visualstudio/en-us/lightswitch

When you download and install the program, you may run into the problem of some requirement not being present. While installing the program for this article there was an initial problem. Microsoft LightSwitch requires Microsoft SQL Server Compact 3.5 SP2. While this was already present on the computer, it was not recognized. In addition to SP2, Microsoft SQL Server Compact 3.5 SP1 as well as SQL Server Compact 4.0 were also present. After removing Microsoft SQL Server Compact SP1 and SP2, the program installed without further problems once SQL Server Compact 3.5 SP2 was installed again. Please review this link (http://hodentek.blogspot.com/2010/08/are-you-ready-to-see-light-with.html) for more detailed information. The next image shows the Compact products presently installed on this machine.

Creating a LightSwitch Program

After installation you may not find a shortcut that displays an icon for Microsoft LightSwitch, but you may find a Visual Studio 2010 shortcut as shown. Visual Studio 2010 Express is a different product which is free to install; you cannot create a LightSwitch application with Visual Studio 2010 Express. Click on Microsoft Visual Studio 2010 shown highlighted. This opens the program with a splash screen. After a while the user interface displays the Start Page as shown. You can have more than one instance open at a time. The Recent Projects list is a catalog of all projects in the Visual Studio 2010 default project directory.

Just as you cannot develop a LightSwitch application with VS 2010 Express, you cannot open a project developed in VS 2010 Express with the LightSwitch interface, as you will encounter the message shown. This means that LightSwitch projects are isolated in the development environment, although the same shell program is used.

When you click File | New Project you will see the New Project window displayed as shown here. Make sure you set the target to .NET Framework 4.0; otherwise you may not see any projects. It is strictly .NET Framework 4.0 for now. Also, trying to create File | New web site will not show any templates no matter what .NET Framework you have chosen. In order to see a Team Project you must have a Team Foundation Server present.

In what follows, we will be creating a LightSwitch application (the default name is Application1 for both C# and VB). From what is displayed you will see more Silverlight project templates than LightSwitch project templates. In fact you have just one template, either in C# or in VB. Highlight LightSwitch Application (VB) and change the default name from Application1 to something different. Herein it is named SwitchOn as shown. If you were to look at the project properties in the property window you will see that the filename of the project is SwitchOn.lsproj.
This file type is exclusively used by LightSwitch. The folder structure of the project is deceptively simple, consisting of Data Sources and Screens.
Python Multimedia: Working with Audios

Packt
30 Aug 2010
14 min read
(For more resources on Python, see here.)

So let's get on with it!

Installation prerequisites

Since we are going to use an external multimedia framework, it is necessary to install the packages mentioned in this section.

GStreamer

GStreamer is a popular open source multimedia framework that supports audio/video manipulation of a wide range of multimedia formats. It is written in the C programming language and provides bindings for other programming languages, including Python. Several open source projects use the GStreamer framework to develop their own multimedia applications. Throughout this article, we will make use of the GStreamer framework for audio handling. In order to get this working with Python, we need to install both GStreamer and the Python bindings for GStreamer.

Windows platform

The binary distribution of GStreamer is not provided on the project website http://www.gstreamer.net/. Installing it from the source may require considerable effort on the part of Windows users. Fortunately, the GStreamer WinBuilds project provides pre-compiled binary distributions. Here is the URL to the project website: http://www.gstreamer-winbuild.ylatuya.es

The binary distributions for GStreamer as well as its Python bindings (Python 2.6) are available in the Download area of the website: http://www.gstreamer-winbuild.ylatuya.es/doku.php?id=download

You need to install two packages: first GStreamer, and then the Python bindings to GStreamer.

Download and install the GPL distribution of GStreamer available on the GStreamer WinBuilds project website. The name of the GStreamer executable is GStreamerWinBuild-0.10.5.1.exe. The version should be 0.10.5 or higher. By default, this installation will create a folder C:\gstreamer on your machine. The bin directory within this folder contains runtime libraries needed while using GStreamer.

Next, install the Python bindings for GStreamer. The binary distribution is available on the same website. Use the executable Pygst-0.10.15.1-Python2.6.exe pertaining to Python 2.6. The version should be 0.10.15 or higher.

GStreamer WinBuilds appears to be an independent project. It is based on the OSSBuild developing suite. Visit http://code.google.com/p/ossbuild/ for more information. It could happen that the GStreamer binary built with Python 2.6 is no longer available on the mentioned website at the time you are reading this book. Therefore, it is advised that you contact the developer community of OSSBuild. Perhaps they might help you out!

Alternatively, you can build GStreamer from source on the Windows platform, using a Linux-like environment for Windows, such as Cygwin (http://www.cygwin.com/). Under this environment, you can first install dependent software packages such as Python 2.6, the gcc compiler, and others. Download the gst-python-0.10.17.2.tar.gz package from the GStreamer website http://www.gstreamer.net/. Then extract this package and install it from sources using the Cygwin environment. The INSTALL file within this package will have installation instructions.

Other platforms

Many of the Linux distributions provide a GStreamer package. You can search for the appropriate gst-python distribution (for Python 2.6) in the package repository. If such a package is not available, install gst-python from the source as discussed in the earlier Windows platform section. If you are a Mac OS X user, visit http://py26-gst-python.darwinports.com/.
It has detailed instructions on how to download and install the package Py26-gst-python version 0.10.17 (or higher).

Mac OS X 10.5.x (Leopard) comes with the Python 2.5 distribution. If you are using packages built against this default version of Python, GStreamer Python bindings for Python 2.5 are available on the darwinports website: http://gst-python.darwinports.com/

PyGObject

There is a free multiplatform software utility library called 'GLib'. It provides data structures such as hash maps, linked lists, and so on. It also supports the creation of threads. The 'object system' of GLib is called GObject. Here, we need to install the Python bindings for GObject. The Python bindings are available on the PyGTK website at: http://www.pygtk.org/downloads.html.

Windows platform

The binary installer is available on the PyGTK website. The complete URL is: http://ftp.acc.umu.se/pub/GNOME/binaries/win32/pygobject/2.20/. Download and install version 2.20 for Python 2.6.

Other platforms

For Linux, the source tarball is available on the PyGTK website. There could even be a binary distribution in the package repository of your Linux operating system. The direct link to version 2.21 of PyGObject (source tarball) is: http://ftp.gnome.org/pub/GNOME/sources/pygobject/2.21/

If you are a Mac user and you have Python 2.6 installed, a distribution of PyGObject is available at http://py26-gobject.darwinports.com/. Install version 2.14 or later.

Summary of installation prerequisites

The following list summarizes the packages needed for this article.

GStreamer
  Download location: http://www.gstreamer.net/
  Version: 0.10.5 or later
  Windows platform: Install using the binary distribution available on the GStreamer WinBuild website: http://www.gstreamer-winbuild.ylatuya.es/doku.php?id=download. Use GStreamerWinBuild-0.10.5.1.exe (or a later version if available).
  Linux/Unix/OS X platforms: Linux: use the GStreamer distribution in the package repository. Mac OS X: download and install by following the instructions on the website http://gstreamer.darwinports.com/.

Python bindings for GStreamer
  Download location: http://www.gstreamer.net/
  Version: 0.10.15 or later for Python 2.6
  Windows platform: Use the binary provided by the GStreamer WinBuild project. See http://www.gstreamer-winbuild.ylatuya.es for details pertaining to Python 2.6.
  Linux/Unix/OS X platforms: Linux: use the gst-python distribution in the package repository. Mac OS X: use the package at http://py26-gst-python.darwinports.com/ (if you are using Python 2.6). Linux/Mac: build and install from the source tarball.

Python bindings for GObject "PyGObject"
  Download location: Source distribution: http://www.pygtk.org/downloads.html
  Version: 2.14 or later for Python 2.6
  Windows platform: Use the binary package pygobject-2.20.0.win32-py2.6.exe.
  Linux/Unix/OS X platforms: Linux: install from source if pygobject is not available in the package repository. Mac: use the package on darwinports (if you are using Python 2.6); see http://py26-gobject.darwinports.com/ for details.

Testing the installation

Ensure that GStreamer and its Python bindings are properly installed. It is simple to test this. Just start Python from the command line and type the following:

>>>import pygst

If there is no error, it means the Python bindings are installed properly. Next, type the following:

>>>pygst.require("0.10")
>>>import gst

If this import is successful, we are all set to use GStreamer for processing audios and videos! If import gst fails, it will probably complain that it is unable to load some required DLL/shared object.
In this case, check your environment variables and make sure that the PATH variable has the correct path to the gstreamer/bin directory. The following lines of code in a Python interpreter show the typical location of the pygst and gst modules on the Windows platform.

>>> import pygst
>>> pygst
<module 'pygst' from 'C:\Python26\lib\site-packages\pygst.pyc'>
>>> pygst.require('0.10')
>>> import gst
>>> gst
<module 'gst' from 'C:\Python26\lib\site-packages\gst-0.10\gst\__init__.pyc'>

Next, test if PyGObject is successfully installed. Start the Python interpreter and try importing the gobject module.

>>> import gobject

If this works, we are all set to proceed!

A primer on GStreamer

In this article, we will be using the GStreamer multimedia framework extensively. Before we move on to the topics that teach us various audio processing techniques, a primer on GStreamer is necessary. So what is GStreamer? It is a framework on top of which one can develop multimedia applications. The rich set of libraries it provides makes it easier to develop applications with complex audio/video processing capabilities. Fundamental components of GStreamer are briefly explained in the coming sub-sections.

Comprehensive documentation is available on the GStreamer project website. The GStreamer Application Development Manual is a very good starting point. In this section, we will briefly cover some of the important aspects of GStreamer. For further reading, you are recommended to visit the GStreamer project website: http://www.gstreamer.net/documentation/

gst-inspect and gst-launch

We will start by learning the two important GStreamer commands. GStreamer can be run from the command line, by calling gst-launch-0.10.exe (on Windows) or gst-launch-0.10 (on other platforms). The following command shows a typical execution of GStreamer on Linux. We will see what a pipeline means in the next sub-section.

$gst-launch-0.10 pipeline_description

GStreamer has a plugin architecture. It supports a huge number of plugins. To see more details about any plugin in your GStreamer installation, use the command gst-inspect-0.10 (gst-inspect-0.10.exe on Windows). We will use this command quite often. Use of this command is illustrated here.

$gst-inspect-0.10 decodebin

Here, decodebin is a plugin. Upon execution of the preceding command, it prints detailed information about the plugin decodebin.

Elements and pipeline

In GStreamer, the data flows in a pipeline. Various elements are connected together forming a pipeline, such that the output of the previous element is the input to the next one. A pipeline can be logically represented as follows:

Element1 ! Element2 ! Element3 ! Element4 ! Element5

Here, Element1 through Element5 are the element objects chained together by the symbol !. Each of the elements performs a specific task. One of the element objects performs the task of reading input data, such as an audio or a video. Another element decodes the file read by the first element, whereas another element performs the job of converting this data into some other format and saving the output. As stated earlier, linking these element objects in a proper manner creates a pipeline.

The concept of a pipeline is similar to the one used in Unix. Following is a Unix example of a pipeline. Here, the vertical separator | defines the pipe.

$ls -la | more

Here, ls -la lists all the files in a directory. However, sometimes this list is too long to be displayed in the shell window. So, adding | more allows a user to navigate the data.
Now let's see a realistic example of running GStreamer from the command prompt.

$ gst-launch-0.10 -v filesrc location=path/to/file.ogg ! decodebin ! audioconvert ! fakesink

For a Windows user, the gst command name would be gst-launch-0.10.exe. The pipeline is constructed by specifying different elements. The ! symbol links the adjacent elements, thereby forming the whole pipeline for the data to flow. For the Python bindings of GStreamer, the abstract base class for pipeline elements is gst.Element, whereas the gst.Pipeline class can be used to create a pipeline instance. In a pipeline, the data is sent to a separate thread where it is processed until it reaches the end or a termination signal is sent.

Plugins

GStreamer is a plugin-based framework. There are several plugins available. A plugin is used to encapsulate the functionality of one or more GStreamer elements. Thus we can have a plugin where multiple elements work together to create the desired output. The plugin itself can then be used as an abstract element in the GStreamer pipeline. An example is decodebin. We will learn about it in the upcoming sections. A comprehensive list of available plugins is available at the GStreamer website http://gstreamer.freedesktop.org. In almost all applications to be developed, the decodebin plugin will be used. For audio processing, the functionality provided by plugins such as gnonlin, audioecho, monoscope, interleave, and so on will be used.

Bins

In GStreamer, a bin is a container that manages the element objects added to it. A bin instance can be created using the gst.Bin class. It is inherited from gst.Element and can act as an abstract element representing a bunch of elements within it. The GStreamer plugin decodebin is a good example representing a bin. The decodebin contains decoder elements. It auto-plugs the decoders to create the decoding pipeline.

Pads

Each element has some sort of connection points to handle data input and output. GStreamer refers to them as pads. Thus an element object can have one or more "receiver pads", termed sink pads, that accept data from the previous element in the pipeline. Similarly, there are "source pads" that take the data out of the element as an input to the next element (if any) in the pipeline. The following is a very simple example that shows how source and sink pads are specified.

>gst-launch-0.10.exe fakesrc num-buffers=1 ! fakesink

The fakesrc is the first element in the pipeline. Therefore, it only has a source pad. It transmits the data to the next linked element, that is, fakesink, which only has a sink pad to accept the data. Note that, in this case, since these are fakesrc and fakesink, just empty buffers are exchanged. A pad is defined by the class gst.Pad. A pad can be attached to an element object using the gst.Element.add_pad() method. The following is a diagrammatic representation of a GStreamer element with a pad. It illustrates two GStreamer elements within a pipeline, having a single source and sink pad.

Now that we know how the pads operate, let's discuss some special types of pads. In the example, we assumed that the pads for the element are always 'out there'. However, there are some situations where the element doesn't have the pads available all the time. Such elements request the pads they need at runtime. Such a pad is called a dynamic pad. Another type of pad is called a ghost pad. These types are discussed in this section.

Dynamic pads

Some objects, such as decodebin, do not have pads defined when they are created.
Such elements determine the type of pad to be used at runtime. For example, depending on the media file input being processed, the decodebin will create a pad. This is often referred to as a dynamic pad, or sometimes the available pad, as it is not always available in elements such as decodebin.

Ghost pads

As stated in the Bins section, a bin object can act as an abstract element. How is it achieved? For that, the bin uses 'ghost pads' or 'pseudo link pads'. The ghost pads of a bin are used to connect an appropriate element inside it. A ghost pad can be created using the gst.GhostPad class.

Caps

The element objects send and receive the data by using the pads. The type of media data that the element objects will handle is determined by the caps (a short form for capabilities). It is a structure that describes the media formats supported by the element. The caps are defined by the class gst.Caps.

Bus

A bus refers to the object that delivers the messages generated by GStreamer. A message is a gst.Message object that informs the application about an event within the pipeline. A message is put on the bus using the gst.Bus.gst_bus_post() method. The following code shows an example usage of the bus.

1 bus = pipeline.get_bus()
2 bus.add_signal_watch()
3 bus.connect("message", message_handler)

The first line in the code creates a gst.Bus instance. Here the pipeline is an instance of gst.Pipeline. On the next line, we add a signal watch so that the bus gives out all the messages posted on that bus. Line 3 connects the signal with a Python method. In this example, message is the signal string and the method it calls is message_handler.

Playbin/Playbin2

Playbin is a GStreamer plugin that provides a high-level audio/video player. It can handle a number of things such as automatic detection of the input media file format, auto-determination of decoders, audio visualization and volume control, and so on. The following line of code creates a playbin element.

playbin = gst.element_factory_make("playbin")

It defines a property called uri. The URI (Uniform Resource Identifier) should be an absolute path to a file on your computer or on the Web. According to the GStreamer documentation, Playbin2 is just the latest unstable version, but once stable, it will replace Playbin. A Playbin2 instance can be created the same way as a Playbin instance, and you can inspect it with:

gst-inspect-0.10 playbin2

With this basic understanding, let us learn about various audio processing techniques using GStreamer and Python.
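To tie these concepts together, here is a minimal, self-contained sketch of a playbin-based player with a bus message handler, using the pygst 0.10 API introduced above. The file URI is a placeholder you would replace with the path to a real audio file on your machine.

import gobject
import pygst
pygst.require("0.10")
import gst

def message_handler(bus, message, loop):
    # Quit the main loop when the stream ends or an error occurs.
    if message.type == gst.MESSAGE_EOS:
        loop.quit()
    elif message.type == gst.MESSAGE_ERROR:
        print "Error:", message.parse_error()
        loop.quit()

player = gst.element_factory_make("playbin")
# Placeholder URI; point this at an actual audio file.
player.set_property("uri", "file:///path/to/audio.mp3")

loop = gobject.MainLoop()
bus = player.get_bus()
bus.add_signal_watch()
bus.connect("message", message_handler, loop)

player.set_state(gst.STATE_PLAYING)
loop.run()
player.set_state(gst.STATE_NULL)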
Playback Audio with Video and Create a Media Playback Component Using JavaFX

Packt
26 Aug 2010
5 min read
(For more resources on Java, see here.)

Playing audio with MediaPlayer

Playing audio is an important aspect of any rich client platform. One of the celebrated features of JavaFX is its ability to easily play back audio content. This recipe shows you how to create code that plays back audio resources using the MediaPlayer class.

Getting ready

This recipe uses classes from the Media API located in the javafx.scene.media package. As you will see in our example, using this API you are able to load, configure, and play back audio using the classes Media and MediaPlayer. For this recipe, we will build a simple audio player to illustrate the concepts presented here. Instead of using standard GUI controls, we will use button icons loaded as images. If you are not familiar with the concept of loading images, review the recipe Loading and displaying images with ImageView in the previous article.

In this example we will use a JavaFX podcast from the Oracle Technology Network TechCast series where Nandini Ramani discusses JavaFX. The stream can be found at http://streaming.oracle.com/ebn/podcasts/media/8576726_Nandini_Ramani_030210.mp3.

How to do it...

The code given next has been shortened to illustrate the essential portions involved in loading and playing an audio stream. You can get the full listing of the code in this recipe from ch05/source-code/src/media/AudioPlayerDemo.fx.

def w = 400;
def h = 200;
var scene:Scene;
def mediaSource = "http://streaming.oracle.com/ebn/podcasts/media/8576726_Nandini_Ramani_030210.mp3";
def player = MediaPlayer {
    media: Media { source: mediaSource }
}
def controls = Group {
    layoutX: (w - 110) / 2
    layoutY: (h - 50) / 2
    effect: Reflection {
        fraction: 0.4
        bottomOpacity: 0.1
        topOffset: 3
    }
    content: [
        HBox {
            spacing: 10
            content: [
                ImageView {
                    id: "playCtrl"
                    image: Image { url: "{__DIR__}play-large.png" }
                    onMouseClicked: function(e:MouseEvent) {
                        def playCtrl = e.source as ImageView;
                        if (not(player.status == player.PLAYING)) {
                            playCtrl.image =
                                Image { url: "{__DIR__}pause-large.png" }
                            player.play();
                        } else if (player.status == player.PLAYING) {
                            playCtrl.image =
                                Image { url: "{__DIR__}play-large.png" }
                            player.pause();
                        }
                    }
                }
                ImageView {
                    id: "stopCtrl"
                    image: Image { url: "{__DIR__}stop-large.png" }
                    onMouseClicked: function(e) {
                        def playCtrl = e.source as ImageView;
                        if (player.status == player.PLAYING) {
                            playCtrl.image =
                                Image { url: "{__DIR__}play-large.png" }
                            player.stop();
                        }
                    }
                }
            ]
        }
    ]
}

When the variable controls is added to a scene object and the application is executed, it produces the screen shown in the following screenshot:

How it works...

The Media API is comprised of several components which, when put together, provide the mechanism to stream and play back the audio source. Playing back audio requires two classes: Media and MediaPlayer. Let's take a look at how these classes are used to play back audio in the previous example.

The MediaPlayer: the first significant item in the code is the declaration and initialization of a MediaPlayer instance assigned to the variable player. To load the audio file, we assign an instance of Media to player.media. The Media class is used to specify the location of the audio. In our example, it is a URL that points to an MP3 file.

The controls: the play, pause, and stop buttons are grouped in the Group object called controls. They are made of three separate image files: play-large.png, pause-large.png, and stop-large.png, loaded by two instances of the ImageView class.
The ImageView objects serve both to display the control icons and to control the playback of the audio:

- When the application starts, playCtrl displays the image play-large.png. When the user clicks on the image, it invokes its action-handler function, which first detects the status of the MediaPlayer instance. If it is not playing, it starts playback of the audio source by calling player.play() and replaces play-large.png with the image pause-large.png. If, however, audio is currently playing, the handler pauses playback by calling player.pause() and swaps the image back to play-large.png.
- The other ImageView instance, stopCtrl, loads the stop-large.png icon. When the user clicks on it, its action-handler stops the audio playback by calling player.stop() and resets the play button's icon to play-large.png.

As mentioned in the introduction, JavaFX will play the MP3 file format on any platform where JavaFX media playback is supported. Anything other than MP3 must be supported natively by the OS's media engine where the file is played back. For instance, on Mac OS, MPEG-4 plays because it is a format supported natively by the QuickTime engine.

There's more...

The Media class models the audio stream. It exposes properties to configure the location of the medium, resolve its dimensions (not applicable to audio), and report the tracks and metadata of the resource to be played.

The MediaPlayer class is a controller responsible for the playback of the medium. It offers control functions such as play(), pause(), and stop(), and exposes valuable playback data including the current position, volume level, and status. We will use these additional functions and properties to extend our playback capabilities in the recipe Controlling media playback in this article.

See also

- Accessing media assets
- Loading and displaying images with ImageView
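As a quick taste of those additional properties, the following minimal sketch (not part of the book's listing; it assumes the player variable declared above, and the names volumeSlider and positionText are our own) binds the player's volume to a slider and displays the advancing playback position:

import javafx.scene.control.Slider;
import javafx.scene.text.Text;

// Volume slider bound bidirectionally to the player's volume (0.0 to 1.0).
def volumeSlider = Slider {
    min: 0
    max: 1.0
    value: bind player.volume with inverse
}
// Text node whose content tracks the playback position as it advances.
def positionText = Text {
    content: bind "Elapsed: {player.currentTime.toSeconds()} seconds"
}

Adding volumeSlider and positionText to the scene alongside controls would give the player rudimentary volume and progress readouts; the Controlling media playback recipe develops this idea fully.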

Manipulating Images with JavaFX

Packt
25 Aug 2010
4 min read
(For more resources on Java, see here.)

One of the most celebrated features of JavaFX is its inherent support for media playback. As of version 1.2, JavaFX has the ability to seamlessly load images in different formats, play audio, and play video in several formats using its built-in components. To achieve platform independence and performance, the support for media playback in JavaFX is implemented as a two-tiered strategy:

- Platform-independent APIs: the JavaFX SDK comes with a media API designed to provide a uniform set of interfaces to media functionalities. Part of the platform-independence offering is a portable codec (On2's VP6), which plays on all platforms where JavaFX media playback is supported.
- Platform-dependent implementations: to boost media playback performance, JavaFX can also use the native media engine supported by the underlying OS. For instance, playback on the Windows platform may be rendered by the Windows DirectShow media engine (see the next recipe).

This two-part article shows you how to use the supported media rendering components, including ImageView, MediaPlayer, and MediaView. These components provide high-level APIs that let developers create applications with engaging and interactive media content.

Accessing media assets

You may have seen the variable __DIR__ used when accessing local resources, but may not fully know its purpose and how it works. So, what does that special variable store? In this recipe, we will explore how to use the __DIR__ pseudo-variable and other means of loading resources locally or remotely.

Getting ready

The concepts presented in this recipe are used widely throughout the JavaFX application framework when pointing to resources. In general, classes that point to a local or remote resource use a string representation of a URL where the resource is stored. This is especially true for the ImageView and MediaPlayer classes discussed in this article.

How to do it...

This recipe shows you three ways of creating a URL to point to a local or remote resource used by a JavaFX application. The full listing of the code presented here can be found in ch05/source-code/src/UrlAccess.fx.

Using the __DIR__ pseudo-variable to access assets as packaged resources:

var resImage = "{__DIR__}image.png";

Using a direct reference to a local file:

var localImage = "file:/users/home/vladimir/javafx/ch005/source-code/src/image.png";

Using a URL to access a remote file:

var remoteImage = "http://www.flickr.com/3201/2905493571_a6db13ce1b_d.jpg";

How it works...

Loading media assets in JavaFX requires a well-formed URL that points to the location of the resource. For instance, both the Image and the Media classes (covered later in this article series) require a URL string to locate and load the resource to be rendered. The URL must be an absolute path that specifies the fully realized scheme, device, and resource location.

The previous code snippets show the following three ways of accessing resources in JavaFX:

- The __DIR__ pseudo-variable: you will often see JavaFX's pseudo-variable __DIR__ used when specifying the location of a resource. It is a special variable that stores the String value of the directory where the executing class that referenced __DIR__ is located. This is valuable especially when the resource is embedded in the application's JAR file. At runtime, __DIR__ stores the location of the resource in the JAR file, making it accessible for reading as a stream.
In the previous code, for example, the expression {__DIR__}image.png expands to jar:file:/users/home/vladimir/javafx/ch005/source-code/dist/source-code.jar!/image.png.

- Direct reference to local resources: when the application is deployed as a desktop application, you can specify the location of your resources using URLs that provide the absolute path to where the resources are located. In our code, we use file:/users/home/vladimir/javafx/ch005/source-code/src/image.png as the absolute, fully qualified path to the image file image.png.
- Direct reference to remote resources: finally, when loading media assets, you can specify a fully qualified URL to a remote resource using HTTP. As long as no additional permissions are required, classes such as Image and Media are able to pull down the resource with no problem. In our code, we use a URL to a Flickr image, http://www.flickr.com/3201/2905493571_a6db13ce1b_d.jpg.

There's more...

Besides __DIR__, JavaFX provides the __FILE__ pseudo-variable as well. As you may well guess, __FILE__ resolves to the fully qualified path of the JavaFX script file that contains the __FILE__ reference; at runtime, once your application is compiled, this is the script class that contains the reference.
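To see how these pieces fit together, here is a small, hypothetical sketch (it assumes an image.png packaged alongside the script; any of the three URL forms above could be substituted) that prints both pseudo-variables and displays the packaged image:

import javafx.stage.Stage;
import javafx.scene.Scene;
import javafx.scene.image.Image;
import javafx.scene.image.ImageView;

// Print where this script resolves its resources from; inside a JAR,
// __DIR__ takes the jar:file:...!/ form shown above.
println("__FILE__ = {__FILE__}");
println("__DIR__  = {__DIR__}");

Stage {
    title: "URL Access"
    scene: Scene {
        content: [
            // The Image class accepts the same URL string in any of the
            // three forms: packaged resource, local file, or remote HTTP.
            ImageView { image: Image { url: "{__DIR__}image.png" } }
        ]
    }
}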