
How-To Tutorials - Programming

1081 Articles

SOA with Java Business Integration (part 2)

Packt
16 Oct 2009
6 min read
(For more resources on this subject, see here.)

Provider-Consumer Contract

In the JBI environment, the provider and consumer always interact based on a services model. A service interface is the common aspect between them, and WSDL 1.1 or 2.0 is used to define the contract through that service interface. The following figure represents the two parts of the WSDL representation of a service.

In the Abstract Model, WSDL describes the messages exchanged in terms of a type system. A message has a sequence and cardinality specified by its Message Exchange Pattern (MEP); a message can also be a fault message. An MEP is associated with one or more messages by an Operation. An Interface contains a single Operation or a group of Operations represented in an abstract fashion, independent of wire formats and transport protocols. In the Concrete Model, an Interface is bound to a specific wire format and transport protocol via a Binding; a Binding is associated with a network address in an Endpoint; and a single Service aggregates multiple Endpoints implementing common interfaces.

Detached Message Exchange

JBI-based message exchange occurs between a Provider and a Consumer in a detached fashion: the Provider and Consumer never interact directly. In technical terms, they never share the same thread context of execution. Instead, they use the JBI Normalized Message Router (NMR) as an intermediary. The Consumer sends a request message to the NMR; the NMR, using intelligent routing, selects the best-matched service provider and dispatches the message on behalf of the Consumer. The Provider can be a different component, or even the same component as the Consumer. The Provider can be a Service Engine (SE) or a Binding Component (BC); depending on its type, it either executes the business process itself or delegates the actual processing to a remotely bound component. The Provider sends the response message back to the NMR, and the NMR in turn passes it back to the Consumer.
This completes the message exchange. The following figure represents the JBI-based message exchange. There are multiple patterns by which messages are exchanged, which we will review shortly.

Provider-Consumer Role

Though a JBI component can function as a Consumer, a Provider, or both, there is a clear-cut distinction between the Provider and Consumer roles. These roles may be performed by bindings or engines, in any combination of the two. When a binding acts as a service Provider, an external service is implied; similarly, when a binding acts as a service Consumer, an external consumer is implied. In the same way, the use of a Service Engine in either role implies a local actor for that role. This is shown in the following figure.

The Provider and Consumer interact with each other through the NMR, performing distinct responsibilities (not necessarily in the order listed). The responsibilities performed while interacting with the NMR are:

- Provider: Once the component is deployed, JBI activates the service provider endpoint.
- Provider: Publishes the service description in WSDL format.
- Consumer: Discovers the required service. This can happen at design time (static binding) or at run time (dynamic binding).
- Consumer: Invokes the discovered service.
- Provider and Consumer: Send and respond to message exchanges according to the MEP and the state of the message exchange instance.
- Provider: Provides the service by responding to invocations.
- Provider and Consumer: Respond with a status (fault or done) to complete the message exchange.

During run-time activation, a service provider activates the actual services it provides, making them known to the NMR, which can then route service invocations to that service:

    javax.jbi.component.ComponentContext context; // initialized via AOP
    javax.jbi.messaging.DeliveryChannel channel = context.getDeliveryChannel();
    javax.jbi.servicedesc.ServiceEndpoint serviceEndpoint = null;
    if (service != null && endpoint != null) {
        serviceEndpoint = context.activateEndpoint(service, endpoint);
    }

The Provider makes a WSDL-described service available through an endpoint. As described in the Provider-Consumer contract, the service implements a WSDL-based interface, which is a collection of operations. The Consumer creates a message exchange to send a message that invokes a particular service. Since consumers and providers share only the abstract service definition, they are decoupled from each other. Moreover, several services can implement the same WSDL interface, so when a consumer sends a message for a particular interface, the JBI environment may find more than one endpoint conforming to the interface and can route to the best-fit endpoint.

Message Exchange

A message exchange is the "message packet" transferred between a consumer and a provider in a service invocation. It is a container for normalized messages, described by an exchange pattern. A message exchange encapsulates:

- The normalized message
- Message exchange metadata
- Message exchange state

Thus, the message exchange is the JBI-local portion of a service invocation.

Service Invocation

An end-to-end interaction between a service consumer and a service provider is a service invocation. Service consumers employ one or more service invocation patterns. Service invocation through a JBI infrastructure is based on a 'pull' model, in which a component accepts message exchange instances when it is ready. Once a message exchange instance is created, it is sent back and forth between the two participating components until its status is set to 'done' or 'error', after which it is sent one last time between the two components.
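The ping-pong life cycle described above can be pictured with a toy, plain-Java simulation. This is purely an illustration of the 'pull' interaction style, not the javax.jbi API; the class and field names are made up for the sketch.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Toy model of a message exchange instance bouncing between a consumer and a
// provider via an intermediary until its status is set to DONE or ERROR.
// Illustrative only; not the javax.jbi API.
public class ExchangeSketch {
    enum Status { ACTIVE, DONE, ERROR }

    static class MessageExchange {
        String request;
        String response;
        Status status = Status.ACTIVE;
    }

    public static void main(String[] args) {
        Queue<MessageExchange> nmr = new ArrayDeque<>(); // stands in for the NMR

        // Consumer creates the exchange and sends it to the intermediary.
        MessageExchange me = new MessageExchange();
        me.request = "getQuote";
        nmr.add(me);

        // Provider pulls the exchange when ready, services it, sends it back.
        MessageExchange pulled = nmr.poll();
        pulled.response = "42";
        nmr.add(pulled);

        // Consumer pulls the response and terminates the exchange;
        // the terminated exchange is sent one last time to the provider.
        MessageExchange answered = nmr.poll();
        answered.status = Status.DONE;
        System.out.println(answered.response + " " + answered.status);
    }
}
```

The single queue stands in for the NMR's role as the only shared channel: neither side ever calls the other directly, which is what "never share the same thread context" amounts to.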
Message Exchange Patterns (MEP)

Service consumers interact with service providers using one or more service invocation patterns. The MEP defines the names, sequence, and cardinality of the messages in an exchange. There are many service invocation patterns; from a JBI perspective, any JBI-compliant ESB implementation must support the following four:

- One-Way: The service consumer issues a request to the service provider. No error (fault) path is provided.
- Reliable One-Way: The service consumer issues a request to the service provider. The provider may respond with a fault if it fails to process the request.
- Request-Response: The service consumer issues a request to the service provider with the expectation of a response. The provider may respond with a fault if it fails to process the request.
- Request Optional-Response: The service consumer issues a request to the service provider, which may result in a response. Both consumer and provider have the option of generating a fault in response to a message received during the interaction.

These service invocations map to four MEPs, listed as follows.

In-Only MEP

The In-Only MEP is used for one-way exchanges. The following figure explains the In-Only MEP. In the normal scenario, the sequence of operations is:

- The service consumer initiates the exchange with a message.
- The service provider responds with a status to complete the message exchange.

Since the consumer issues a request to the provider with no error (fault) path, any errors at the provider level are not propagated back to the consumer.
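JBI containers commonly identify these MEPs by WSDL 2.0 pattern URIs. As a quick reference, the four invocation styles can be tabulated as below; treat the exact URI strings as an assumption to verify against your container's documentation rather than as normative values.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// The four mandatory invocation styles mapped to the WSDL 2.0 pattern URIs
// commonly used by JBI containers to identify MEPs. The URI strings are an
// assumption to check against your container; the mapping of names is from
// the text above.
public class MepCatalog {
    public static void main(String[] args) {
        Map<String, String> meps = new LinkedHashMap<>();
        meps.put("One-Way",                   "http://www.w3.org/2004/08/wsdl/in-only");
        meps.put("Reliable One-Way",          "http://www.w3.org/2004/08/wsdl/robust-in-only");
        meps.put("Request-Response",          "http://www.w3.org/2004/08/wsdl/in-out");
        meps.put("Request Optional-Response", "http://www.w3.org/2004/08/wsdl/in-opt-out");
        meps.forEach((name, uri) -> System.out.println(name + " -> " + uri));
    }
}
```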


SOA with Java Business Integration (part 1)

Packt
16 Oct 2009
5 min read
SOA-The Motto

We have been doing integration for many decades in a proprietary or ad hoc manner. Today the buzzword is SOA, and in the integration space we talk about Service Oriented Integration (SOI). Let us look into the essentials of SOA and see whether the existing standards and APIs are sufficient in the integration space.

Why We Need SOA

We have been using multiple technologies for developing application components, a few of which are listed here:

- Remote Procedure Call (RPC)
- Common Object Request Broker Architecture (CORBA)
- Distributed Component Object Model (DCOM)
- .NET Remoting
- Enterprise JavaBeans (EJB)
- Java Remote Method Invocation (RMI)

One drawback common to almost all of these technologies is their inability to interoperate. In other words, if a .NET Remoting component has to send bytes to a Java RMI component, there are only workarounds, and they may not work all the time. Next, all of the above technologies follow sound object-oriented principles, especially hiding implementation details behind interfaces. This provides loose coupling between provider and consumer, which is very important in distributed computing environments. Now the question is: are these interfaces abstract enough? To rephrase, can a Java RMI runtime make sense of a .NET interface? Along these lines, we can point out a long list of deficiencies that exist in today's computing environment. This is where SOA brings new promises.

What is SOA

SOA is a set of architectural patterns, principles, and best practices for implementing software components in such a way that we overcome many of the deficiencies of traditional programming paradigms. SOA speaks about services implemented against abstract interfaces, where only the abstract interface is exposed to the outside world. Hence, consumers are unaware of any implementation details.
Moreover, the abstract model is neutral with respect to platform and technology. This means that components or services implemented on any platform or technology can interoperate. A few more features of SOA:

- Standards-based (the WS-* specifications)
- Services are autonomous and coarse grained
- Providers and consumers are loosely coupled

The list is not exhaustive, but there is plenty of literature available on SOA, so let us not repeat it here. Instead, we will look at the importance of SOA in the integration context.

SOA and Web Services

SOA doesn't mandate any specific platform, technology, or even a specific method of software engineering, but time has proven that web services are a viable technology for implementing SOA. However, we need to be cautious: using web services doesn't by itself lead to SOA or implement it. Rather, because web services are based on industry-accepted standards such as WSDL, SOAP, and XML, they are one of the best available means to attain SOA. In SOA using web services, providers and consumers agree on a common interface described in the Web Services Description Language (WSDL). Data is normally exchanged over the HTTP protocol, in the Simple Object Access Protocol (SOAP) format.

WSDL

WSDL is the language of web services, used to specify the service contract agreed upon by the provider and consumer. It is XML-formatted information, mainly intended to be machine processable (but human readable too, since it is XML). When we host a web service, it is normal to retrieve the WSDL from the web service endpoint. There are two main approaches to working with WSDL:

- Start from the WSDL, create and host the web service, and open the service to clients; tools like wsdl2java help us do this.
- Start from the types already available, generate the WSDL, and then continue; tools like java2wsdl help us here.

Let us now quickly run through the main sections within a WSDL.
A WSDL structure is as shown here:

    <?xml version="1.0" encoding="UTF-8"?>
    <wsdl:definitions targetNamespace="http://versionXYZ.ws.servicemix.esb.binildas.com" …>
      <wsdl:types>
        <schema elementFormDefault="qualified"
            targetNamespace="http://version20061231.ws.servicemix.esb.binildas.com">
        </schema>
      </wsdl:types>
      <wsdl:message name="helloResponse">
        <!-- other code goes here -->
      </wsdl:message>
      <wsdl:portType name="IHelloWeb">
        <!-- other code goes here -->
      </wsdl:portType>
      <wsdl:binding name="HelloWebService20061231SoapBinding" type="impl:IHelloWeb">
        <!-- other code goes here -->
      </wsdl:binding>
      <wsdl:service name="IHelloWebService">
        <wsdl:port binding="impl:HelloWebService20061231SoapBinding" name="HelloWebService20061231">
          <wsdlsoap:address location="http://localhost:8080/AxisEndToEnd20061231/services/HelloWebService20061231"/>
        </wsdl:port>
      </wsdl:service>
    </wsdl:definitions>

We will now run through the main sections of a typical WSDL:

- types: The data types exchanged, expressed here as an XML schema.
- message: Details the message formats (or documents) exchanged.
- portType: Can be looked at as the abstract interface definition for the exposed service.
- binding: The portType has to be mapped to specific data formats and protocols, which are detailed in the binding section.
- port: Gives the URL representation of the service endpoint.
- service: A service can contain a collection of port elements.

Since JBI is based on WSDL, we can deal with many WSDL instances.
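To make the "machine processable" claim concrete, here is a minimal sketch that parses a stripped-down WSDL fragment with the JDK's built-in DOM parser and reads the service name. The inline fragment is a hypothetical miniature, not the full document above.

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

// Parse a tiny, hypothetical WSDL fragment and extract the service name,
// demonstrating that WSDL is plain XML a program can process.
public class WsdlPeek {
    public static void main(String[] args) throws Exception {
        String wsdl =
            "<wsdl:definitions xmlns:wsdl='http://schemas.xmlsoap.org/wsdl/'>" +
            "  <wsdl:service name='IHelloWebService'/>" +
            "</wsdl:definitions>";

        DocumentBuilderFactory f = DocumentBuilderFactory.newInstance();
        f.setNamespaceAware(true); // resolve the wsdl: prefix properly
        Document doc = f.newDocumentBuilder()
                .parse(new ByteArrayInputStream(wsdl.getBytes(StandardCharsets.UTF_8)));

        String serviceName = doc
                .getElementsByTagNameNS("http://schemas.xmlsoap.org/wsdl/", "service")
                .item(0).getAttributes().getNamedItem("name").getNodeValue();
        System.out.println("service = " + serviceName);
    }
}
```

Tools like wsdl2java do essentially this at a much larger scale, walking every section of the document to generate stubs.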


Simplifying Parallelism Complexity in C#

Packt
16 Oct 2009
7 min read
Specializing the algorithms for segmentation with classes

So far, we have been developing applications that split work into multiple independent jobs, and we created classes to generalize the algorithms for segmentation. We simplified the creation of segmented and parallelized algorithms, generalizing behaviors to simplify our code and to avoid repeating the same code in every new application. However, we did not do that using inheritance, a very powerful object-oriented capability that simplifies code reuse.

C# is an object-oriented programming language that supports inheritance and offers many possibilities to specialize behaviors, simplify our code, and avoid some synchronization problems related to parallel programming. How can we use C#'s object-oriented capabilities to define specific segmented algorithms, prepared for running each piece in an independent thread, using ParallelAlgorithm and ParallelAlgorithmPiece as the base classes? The answer is very simple: by using inheritance and the factory method class creational pattern (also known as virtual constructor). Thus, we can advance toward a complete framework that simplifies the algorithm optimization process.

Again, we can combine multithreading with object-oriented capabilities to reduce our development time and avoid synchronization problems. Besides, using classes to specialize the process of splitting a linear algorithm into many pieces makes it easier for developers to focus on generating very independent parts that will work well in a multithreading environment, while avoiding side effects.

Time for action - Preparing the parallel algorithm classes for the factory method

You made the necessary changes to the ParallelAlgorithmPiece and ParallelAlgorithm classes to make it possible to find planets similar to Mars in the images corresponding to different galaxies. NASA's CIO was impressed with your parallel programming capabilities.
Nevertheless, he is an object-oriented guru, and he advised you to apply the factory method pattern to specialize the parallel algorithm classes for each new algorithm. That could make the code simpler, more reusable, and easier to maintain. He asked you to do so. The NASA scientists would then bring you another huge image processing challenge for your parallel programming capabilities: a sunspot analyzer. If you resolve this problem using the factory method pattern or something like it, he will hire you! However, be careful, because you must avoid some synchronization problems!

First, we are going to create a new project with tailored versions of the ParallelAlgorithmPiece and ParallelAlgorithm classes. This way, we will later be able to inherit from these classes and apply the factory method pattern to specialize parallel algorithms:

1. Create a new C# project using the Windows Forms Application template in Visual Studio or Visual C# Express. Use SunspotsAnalyzer as the project's name.
2. Open the code for Program.cs. Replace the line [STAThread] (before the Main method declaration) with the following line:

       [MTAThread]

3. Copy the file that contains the original code of the ParallelAlgorithmPiece and ParallelAlgorithm classes (ParallelAlgorithm.cs) and include it in the project.
4. Add the abstract keyword before the declarations of the ParallelAlgorithmPiece and ParallelAlgorithm classes, as shown in the following lines (we do not want to create instances directly from these abstract classes):

       abstract class ParallelAlgorithmPiece
       abstract class ParallelAlgorithm

5. Change the ThreadMethod method declaration in the ParallelAlgorithmPiece class (add the abstract keyword to force us to override it in subclasses):

       public abstract void ThreadMethod(object poThreadParameter);

6. Add the following public abstract method, which creates each parallel algorithm piece, to the ParallelAlgorithm class (the key to the factory method pattern):

       public abstract ParallelAlgorithmPiece CreateParallelAlgorithmPiece(int priThreadNumber);

7. Add the following constructor with a parameter to the ParallelAlgorithmPiece class:

       public ParallelAlgorithmPiece(int priThreadNumberToAssign)
       {
           priThreadNumber = priThreadNumberToAssign;
       }

8. Copy the original code of the ParallelAlgorithmPiece class's CreatePieces method and paste it into the ParallelAlgorithm class (we move it to allow the creation of parallel algorithm pieces of different subclasses). Replace access to the lloPieces[i].priBegin and lloPieces[i].priEnd private variables with their corresponding public properties, lloPieces[i].piBegin and lloPieces[i].piEnd.
9. Change the new CreatePieces method declaration in the ParallelAlgorithm class (remove the static clause and add the virtual keyword to allow us to override it in subclasses and to access instance variables):

       public virtual List<ParallelAlgorithmPiece> CreatePieces(long priTotalElements, int priTotalParts)

10. Replace the line lloPieces[i] = new ParallelAlgorithmPiece(); in the CreatePieces method in the ParallelAlgorithm class with the following line of code (the creation is now encapsulated in a method, and a serious bug is also corrected, which we will explain later):

        lloPieces.Add(CreateParallelAlgorithmPiece(i));

11. Comment out the following line of code in the CreatePieces method in the ParallelAlgorithm class (the new ParallelAlgorithmPiece constructor now assigns the value to piThreadNumber):

        //lloPieces[i].piThreadNumber = i;

12. Replace the line prloPieces = ParallelAlgorithmPiece.CreatePieces(priTotalElements, priTotalParts); in the CreateThreads method in the ParallelAlgorithm class with the following line of code (the creation is now done in the new CreatePieces method):

        prloPieces = CreatePieces(priTotalElements, priTotalParts);

13. Change the StartThreadsAsync method declaration in the ParallelAlgorithm class (add the virtual keyword to allow us to override it in subclasses):

        public virtual void StartThreadsAsync()

14. Change the CollectResults method declaration in the ParallelAlgorithm class (add the abstract keyword to force us to override it in subclasses):

        public abstract void CollectResults();

What just happened?

The code required to create subclasses that implement algorithms, following a variation of the factory method class creational pattern, is now held in the ParallelAlgorithmPiece and ParallelAlgorithm classes. Thus, when we create new classes that inherit from these two classes, we can easily implement a parallel algorithm.
We must just fill in the gaps and override some methods, and we can then focus on the algorithm's problems instead of working hard on the splitting techniques. We also solved some bugs present in the previous versions of these classes. Using the C# programming language's excellent object-oriented capabilities, we can avoid many problems related to concurrency and simplify the development of high-performance parallel algorithms. Nevertheless, we must master many object-oriented design patterns to help us reduce the complexity added by multithreading and concurrency.

Defining the class to instantiate

One of the main problems that arises when generalizing an algorithm is that the generalized code that coordinates the parallel algorithm must create instances of the subclasses that represent the pieces. Using the concepts introduced by the factory method class creational pattern, we solved this problem with great simplicity. We made the necessary changes to the ParallelAlgorithmPiece and ParallelAlgorithm classes to implement a variation of this design pattern.

First, we added a constructor to the ParallelAlgorithmPiece class with the thread or piece number as a parameter. The constructor assigns the received value to the priThreadNumber private variable, accessed through the piThreadNumber property:

    public ParallelAlgorithmPiece(int priThreadNumberToAssign)
    {
        priThreadNumber = priThreadNumberToAssign;
    }

The subclasses will be able to override this constructor to add any additional initialization code. We had to move the CreatePieces method from the ParallelAlgorithmPiece class to the ParallelAlgorithm class, because each ParallelAlgorithm subclass knows which ParallelAlgorithmPiece subclass to create for each piece representation. Thus, we also made the method virtual, to allow it to be overridden in subclasses. Besides, it is now an instance method and not a static one. There was an intentional bug left in the previous CreatePieces method.
As you must master list and collection management in C# in order to master parallel programming, you should be able to detect and solve this little problem: the method assigned the capacity, but did not add elements to the list. Hence, we must use the Add method with the result of the new CreateParallelAlgorithmPiece method:

    lloPieces.Add(CreateParallelAlgorithmPiece(i));

The creation is now encapsulated in this method, which is virtual and allows subclasses to override it. The original implementation is shown in the following lines:

    public virtual ParallelAlgorithmPiece CreateParallelAlgorithmPiece(int priThreadNumber)
    {
        return (new ParallelAlgorithmPiece(priThreadNumber));
    }

It returns a new ParallelAlgorithmPiece instance, passing the thread or piece number as a parameter. By overriding this method, we can return instances of any subclass of ParallelAlgorithmPiece. Thus, we let the ParallelAlgorithm subclasses decide which class to instantiate. This is the principle of the factory method design pattern: it lets a class defer instantiation to subclasses. Hence, each new implementation of a parallel algorithm will have its own new ParallelAlgorithm and ParallelAlgorithmPiece subclasses. We made the additional changes needed to keep conceptual integrity with this new approach for the two classes that define the behavior of a parallel algorithm that splits work into pieces using multithreading capabilities.
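The chapter's code is C#, but the capacity-versus-add pitfall is easy to reproduce in a directly runnable form with Java's ArrayList, which behaves analogously to C#'s List<T> here: the capacity constructor reserves space but adds no elements. The class and loop below are illustrative only.

```java
import java.util.ArrayList;
import java.util.List;

// The bug described above, reproduced with Java's ArrayList: passing a
// capacity to the constructor reserves space but adds no elements, so the
// list is still empty and indexed writes would fail.
public class CapacityPitfall {
    public static void main(String[] args) {
        List<Integer> pieces = new ArrayList<>(4); // capacity hint only
        System.out.println("size after construction = " + pieces.size()); // 0, not 4

        // pieces.set(0, 10); // would throw IndexOutOfBoundsException here

        // The fix mirrors lloPieces.Add(CreateParallelAlgorithmPiece(i)):
        for (int i = 0; i < 4; i++) {
            pieces.add(i);
        }
        System.out.println("size after adds = " + pieces.size()); // 4
    }
}
```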


Competitive Service and Contract Management in SAP Business ONE Implementation: Part 1

Packt
16 Oct 2009
5 min read
What we will learn in this article

In this article, we will cover the service module and highlight how it fits in with the sales and opportunities management functionality. The key features, from taking a service call to contract management, will be explained. In order to establish a practical platform for this, a case study will be expanded to utilize the service module. As a part of this section, a complete workflow will be configured, from setting the right parameters in the admin section to connecting the information for service personnel. We will learn about:

- Key terms - The common terminology related to service management will be covered. There is nothing major waiting for you here; we will simply learn what the terms entail with regard to the SAP system.
- Service module core functions - In this section, the available functions and features will be put into perspective: what is available and how much we can expect from it. For example, you will learn what service operations mean.
- Case study and your own project - The available features of the service module will then be implemented for the case study. By doing so, you will gain the knowledge to implement the service module in your own business. We will review some guidelines which will enable you to translate the case study implementation into a set of activities for your own project.

Key terms

Let's start with the key terms related to service and contract management in SAP Business ONE. By looking at the key terms, you will get an understanding of what can be accomplished with this module.

Service contract templates

In the admin section, we can define the service contract templates that can later be used as a basis for actual contracts. Please note the template character: all of the parameters we define here will automatically populate the relevant details in the contract once a template is selected.
The following screenshot shows that a contract template can be created not only on a per-customer basis, but also for item groups, and for a specific item based on serial numbers. In addition, please note the Reminder setting: you can set a reminder which will provide an alert prior to the expiry of the contract. This way, you can be sure that you contact all customers and allow them to renew their contracts.

Serial number contract

The most common usage may be the Serial Number contract type. Each product will have a specific contractual service eligibility based on the serial number. Consequently, if a customer purchases an item that is managed by a serial number, a warranty contract template can be associated with it. This will create a customer equipment card.

Customer contract

However, please note that this concept does not need to be used only for items and serial numbers. As you can see in the previous screenshot, we can create contract templates for customers and whole item groups. For the case study, I will use this concept to create a service contract for key customers. We can then use the service functionality in SAP to make sure these customers get priority treatment. You see, we can use the service functionality in this creative way to improve our service quality.

Item group contract

Just as we may decide to use the service module to guarantee a specific response time to customers, we can make sure that a specific product line is managed. For example, if you have a new specific product line that requires technical expertise for implementation, then a service contract may be offered to customers. Therefore, all customers who purchase an item that belongs to this group will be eligible to purchase a contract and receive the relevant expert support.

Customer equipment card

The customer equipment card applies to the contracts that are managed by a serial number.
Since the serial number contract applies a unique contract situation to each item, you will need the relevant information to be able to categorize a service call. For example, you need to know the serial number to identify the customer and the warranty remaining for the specific serial number. In addition, if a customer calls you, you'll want to be able to look up all of the serial numbers for this customer. This can be done using the customer equipment card. Please note that a customer equipment card is automatically created if a customer purchases an item that is managed by a serial number. You can assign a service contract when this happens at the sales order level.

Service calls

Service calls are incoming service requests from business partners. The following screenshot shows that service calls can have many sources. For example, service calls may come via emails, phone calls, web requests, or any other defined Origin. I am highlighting this because, in essence, you can use the service module for any activity where you want to apply specific response time management.

Queues

As mentioned above, the service module can be used to manage any kind of service-related activity. Queues allow your personnel to be associated with named groups. In the following screenshot, I have created two queues: the first queue is for 1st Level, and the second queue is for Resolution and Sales. Consequently, I assign the personnel who can best accomplish the relevant tasks to each queue.

Knowledge base

As you work on service calls, you can build a knowledge base that documents how problems were resolved. For example, if the problem resolution department resolved a problem, it will be documented as part of the service call. If the service call is for a serialized item, then this knowledge is available to first-level support the next time a problem is reported for the same item type. Therefore, this can take the workload off the specialized personnel, as they can avoid repetitive tasks. In addition, new employees can benefit from the knowledge already acquired from resolved service calls.


Competitive Service and Contract Management in SAP Business ONE Implementation: Part 2

Packt
16 Oct 2009
8 min read
In the first half of this two-part article series, we looked at the service module so as to evaluate potential actions that are triggered based on service-related information. We also introduced a concept which explained how to utilize the service module features to establish a guaranteed response time for customers. We also learnt about:

- Key terms - The common terminology related to service management was covered. Although nothing major, we learnt what the terms entail with regard to the SAP system.
- Service module core functions - In this section, the available functions and features were put into perspective: what is available and how much we can expect from it. You also learnt what service operations mean.
- Case study and your own project - The available features of the service module were implemented for the case study. By doing so, you gained the knowledge to implement the service module in your own business. We also reviewed some guidelines which enabled you to translate the case study implementation into a set of activities for your own project.

Let's begin with service reports.

Service reports

The crucial element of each module is the information that can easily be extracted for reporting purposes. SAP provides a series of canned reports for the service module. The Service Calls report provides information about service call activities based on the selected criteria. You can filter this report by the timeframe of service call creation and also by resolution time. Additional filter ranges are available for Customer Code, Handled By, Item, and Queue ID. In addition, the report allows filtering by Problem Type, Priority, Call Type, Origin, Call Status, and Overdue Calls. You can see that there is a wide range of options for obtaining information. It is important to note that reporting can utilize the information only if all of the data is properly collected using the SAP forms.
If no options are selected for filtering, the report defaults to selecting all of the available information. The following additional service reports are available and have almost identical filtering capabilities to the Service Calls report: Service Calls by Queue, Response Time by Assigned to, and Average Closure Time.

The Service Contracts report (seen below) helps you manage the status of all maintenance contracts. If you've ever had to manage maintenance contracts with customers, you will appreciate the ease of obtaining the information here. You can filter this report by Customer Code, Start Date Range, End Date Range, and Termination Date Range. In addition, this report can be further filtered by Contract Type, Contract Status, and Service Type.

The Customer Equipment Card report allows information to be obtained about items sold to customers based on serial number tracking. Each serial number has its own contract with an expiration date, which is usually based on the purchase date. This report can be filtered by customer and item code. In addition, a more global filter can be used, such as item group.

The Service Monitor report provides a more real-time view of the service pipeline. We already covered this report in the previous section. Finally, My Reports includes My Service Calls, My Open Service Calls, and My Overdue Service Calls. These reports conveniently filter the information based on the current login name. Therefore, if you run the report, you will only see information that is relevant for you based on the login.

Limitations

I have already mentioned that you can look at SAP Business ONE as the operating system for your business. You can use industry add-ons to seamlessly transform the standard features into an industry solution that is specific to your requirements. Therefore, let's evaluate some add-ons I've worked with that are related to the service module: specifically, the Enprise Job Costing module and the solution from Navigator called ServiceONE.
By looking at these add-ons, we can also learn the limitations of the standard service module. For example, since Navigator promises to have all of the information available in one view, we realize that in SAP we sometimes need to jump between different forms to get where we wanted to be initially. Let's further evaluate the features of these add-ons. The Enprise Job Costing solution introduces a web-based timesheet, an obvious feature that is not directly available in the SAP standard configuration. First, I will look at the Navigator solution, followed by the Enprise offering. Often, there is more than one add-on providing industry-specific features. You then need to evaluate both solutions and decide which one best fits your requirements. Please note that I am presenting the add-on features to better define the limitations of the standard SAP Business ONE features.

Job Costing add-on by Enprise

The Enprise Job Costing add-on is one of the first industry-specific solutions that gained widespread adoption as a standard for companies that require a detailed job costing solution. The advantage of this solution is that it is based on true expertise in the job costing area as it relates to the SAP service module. Let's look at a scenario that is very common for companies that work in the service industry and require what is known as job costing. However, I would first like to take the opportunity to explain job costing a bit. Job costing allows the profit and loss for specific services provided to be calculated. For example, if you have a company that sends out technicians to customer sites to perform equipment repairs, you need to make sure that the invoiced amount exceeds the cost you incur. The following workflow may be common in this environment:

- Serialized items are delivered to customers, each with a warranty contract that may or may not include services, parts, and replacements.
- Services may be performed on serialized items delivered by you or by another company.
- A service call may lead to a proposal (job), which will then be ordered.
- Technicians may use a timesheet to report the status and time. Timesheet entries must be possible via a mobile device or the Web. Furthermore, the time entered needs to be approved before it is relevant for invoicing.
- A job may lead to subjobs that require unique management of related costs.
- As the number of jobs increases, you will require "work in progress" reporting.
- Estimating a job is crucial. Therefore, technicians need to be able to create estimates. It must then be possible to translate those estimates into orders and contracts.
- As services may require replacement parts, a feature is required that allows optimized picking of relevant items for a specific job.
- Complex jobs require milestone payments. This needs to be implemented in the contracts.
- The invoicing system needs to be integrated with the way services are completed. For example, milestone payments, fixed-price billing, and partial invoicing are common requirements in the service industry.

The Enprise Job Costing add-on resides in its own menu item called Job Costing. As you can see below, the menu items are well defined and provide a quick overview of the available features. In addition, it is important to note that the features appear seamlessly within the SAP interface:

By selecting the Job Entry form, the powerful features come to light. As you can see in the following screenshot, the form allows searching for jobs based on Status, Type, Properties, Category, and Entered By. The resulting list is shown in the lower pane. We can use this interface to search for specific jobs, and then click on the Bulk Invoice button, as highlighted in the screenshot. This automates the invoicing process based on a clear, uncluttered form. Please note that we do not need to jump between multiple forms. The Direct Time Entry form (seen below) is basically a timesheet.
Therefore, technicians can use it to enter the time they spend on projects. Please note the buttons in the lower right that allow importing from the Web and also from an Excel clipboard. Enprise provides a web-based timesheet from which we can import data. However, it is important to note that we can also import from an Excel clipboard. This way, we can use the data that technicians entered on their laptops.

The contract list provides a link where the Enprise-enhanced contract management surfaces. The contract management allows milestones to be established for a contract. Each milestone could lead to a milestone payment. In addition, we can directly jump to the related invoices by using the Show Invoices button.

Enprise has adopted the concept of master data. For this purpose, the Job Master Data form was established. This is consistent with the SAP concept. Each job is defined and configured with specific parameters, which later drive the transactions that are based on this master data. For example, we can define the job parameters alongside a list of subjobs. In addition, documents can be attached as attachments.

Advanced service functionality using ServiceONE by Navigator

Navigator provides a wide range of valuable add-ons. The key advantage of Navigator is its comprehensive portfolio of add-ons that cover almost all aspects of SAP Business ONE. In particular, the fact that Navigator also provides a mobile solution, which connects handheld computers with SAP Business ONE, extends the reach of the available functionality beyond the boundaries of the SAP client interface. Therefore, a mobile field service does not need to use a web-based timesheet, but could interact directly using mobile devices. However, you may need to purchase another add-on to accomplish this.
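The job-costing arithmetic described earlier (the invoiced amount must exceed the cost incurred, and only approved timesheet entries are relevant for invoicing) can be sketched in plain Java. This is a minimal, hypothetical illustration; the class and field names are invented and are not part of the Enprise or SAP Business ONE APIs.

```java
import java.math.BigDecimal;
import java.util.List;

// Hypothetical sketch of job-costing: a job is profitable when the
// invoiced amount exceeds the accumulated labour cost from approved
// timesheet entries.
public class JobCosting {

    public static class TimesheetEntry {
        public final BigDecimal hours;
        public final BigDecimal hourlyRate;
        public final boolean approved;   // only approved time is billable

        public TimesheetEntry(BigDecimal hours, BigDecimal hourlyRate, boolean approved) {
            this.hours = hours;
            this.hourlyRate = hourlyRate;
            this.approved = approved;
        }
    }

    // Sum the cost of approved entries only, mirroring the approval step
    // required before time becomes relevant for invoicing.
    public static BigDecimal labourCost(List<TimesheetEntry> entries) {
        BigDecimal total = BigDecimal.ZERO;
        for (TimesheetEntry e : entries) {
            if (e.approved) {
                total = total.add(e.hours.multiply(e.hourlyRate));
            }
        }
        return total;
    }

    public static BigDecimal margin(BigDecimal invoiced, List<TimesheetEntry> entries) {
        return invoiced.subtract(labourCost(entries));
    }

    public static void main(String[] args) {
        List<TimesheetEntry> entries = List.of(
            new TimesheetEntry(new BigDecimal("8"), new BigDecimal("75"), true),
            new TimesheetEntry(new BigDecimal("2"), new BigDecimal("75"), false));
        // 900 invoiced minus 8h * 75 approved labour = 300
        System.out.println(margin(new BigDecimal("900"), entries)); // prints 300
    }
}
```

Note that the unapproved entry is excluded from the cost, which is exactly why the approval workflow matters for "work in progress" reporting.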

Introduction to Legacy Modernization in Oracle

Packt
16 Oct 2009
13 min read
IT organizations are under increasing pressure to improve the business's ability to innovate while controlling, and often reducing, costs. Legacy modernization is a real opportunity to achieve these goals. To attain them, the organization needs to take full advantage of emerging advances in platform and software innovation, while leveraging the investment that has been made in the business processes within the legacy environment. To make good choices for a specific modernization roadmap, decision makers should have a good understanding of what the modernization options are, and how to get there.

Overview of the Modernization Options

There are five primary approaches to legacy modernization:

- Re-architecting to a new environment
- SOA integration and enablement
- Replatforming through re-hosting and automated migration
- Replacement with COTS solutions
- Data modernization

Other organizations may have different nomenclature for each type of modernization, but any given option generally fits into one of these five categories. Each option can be carried out in concert with the others, or as a standalone effort; they are not mutually exclusive endeavors. Further, in a large modernization project, multiple approaches are often used for parts of the larger initiative. The right mix of approaches is determined by the business needs driving the modernization, the organization's risk tolerance and time constraints, and the nature of the source environment and legacy applications. Where the applications no longer meet business needs and require significant changes, re-architecture might be the best way forward. On the other hand, for very large applications that mostly meet the business needs, SOA enablement or re-platforming might be lower-risk options. You will notice that the first thing we talk about in this section, the Legacy Understanding phase, isn't listed as one of the modernization options.
It is mentioned at this stage because it is a critical step that precedes any option your organization chooses.

Legacy Understanding

Once we have identified our business drivers, and as the first step in this process, we must understand what we have before we go ahead and modernize it. Legacy environments are very complex and quite often have little or no current documentation. This introduces a concept of analysis and discovery that is valuable for any modernization technique.

Application Portfolio Analysis (APA)

In order to make use of any modernization approach, the first step an organization must take is to carry out an APA of the current applications and their environment. This process has many names; you may hear terms such as Legacy Understanding, Application Re-learn, or Portfolio Understanding. All these activities provide a clear view of the current state of the computing environment. This process equips the organization with the information that it needs to identify the best areas for modernization. For example, it can reveal process flows, data flows, how screens interact with transactions and programs, and program complexity and maintainability metrics, and it can even generate pseudocode to re-document candidate business rules. Additionally, the physical repositories that are created as a result of the analysis can be used in the next stages of modernization, be it SOA enablement, re-architecture, or re-platforming. Efforts are currently underway by the Object Management Group (OMG) to create a standard method to exchange this data between applications. The following screenshot shows the Legacy Portfolio Analysis:

APA Macroanalysis

The first form of APA analysis is a very high-level, abstract view of the application environment. This level of analytics looks at the application in the context of the overall IT organization. Systems information is collected at a very high level.
The key here is to understand which applications exist, how they interact, and what the identified value of the desired function is. With this type of analysis, organizations can manage overall modernization strategies and identify key applications that are good candidates for SOA integration, re-architecture, or re-platforming versus replacement with Commercial Off-the-Shelf (COTS) applications. Data structures, program code, and technical characteristics are not analyzed here.

The following macro-level process flow diagram was automatically generated with the Relativity Technologies Modernization Workbench tool. Using this, the user automatically gets a view of the screen flows within a COBOL application. This is used to help identify candidate areas for modernization and areas of complexity, to transfer knowledge, or to document the legacy system. The key thing about these types of reports is that they are dynamic and automatically generated.

The previous flow diagram illustrates some interesting points about the system that can be understood quickly by the analyst. Remember, this type of diagram is generated automatically and can provide instant insight into the system with no prior knowledge. For example, we now have some basic information such as:

- MENSAT1.MENMAP1 is the main driver and is most likely a menu program.
- There are four called programs.
- Two programs have database interfaces.

This is a simplistic view, but if you imagine hundreds of programs in a visual perspective, we can quickly identify clusters of complexity, define potential subsystems, and do much more, all from an automated tool with visual navigation and powerful cross-referencing capabilities. This type of tool can also help to re-document existing legacy assets.

APA Microanalysis

The second type of portfolio analysis is APA microanalysis. This examines applications at the program level.
This level of analysis can be used to understand things like program logic or candidate business rules for enablement or business rule transformation. It will also reveal things such as code complexity, data exchange schemas, and specific interaction within a screen flow. These are all critical when considering SOA integration, re-architecture, or a re-platforming project.

The following are more models generated from the Relativity Technologies Modernization Workbench tool. The first is a COBOL transaction taken from a COBOL process. We are able to take a low-level view of a business rule slice taken from a COBOL program and understand how this process flows. The particulars of this flow map diagram are not important; what matters is that the model can be generated automatically and is dynamic, based on the current state of the code.

The second model shows how a COBOL program interacts with a screen conversation. In this example, we are able to look at specific paragraphs within a particular program. We can identify specific CICS transactions and understand which paragraphs (or subroutines) are interacting with the database. The models can be used to further refine our drive towards a more re-architected system, helping us to identify business rules and populate a rules engine. This is just another example of a COBOL program that interacts with screens (shown in gray) and the paragraphs that execute CICS transactions (shown in white). With these color-coded boxes, we can quickly identify paragraphs, screens, databases, and CICS transactions.

Application Portfolio Management (APM)

APA is only a part of an IT approach known as Application Portfolio Management.
While APA analysis is critical for any modernization project, APM provides guideposts on how to combine the APA results, the business assessment of the applications' strategic value and future needs, and IT infrastructure directions to come up with a long-term application portfolio strategy and related technology targets to support it. It is often said that you cannot modernize that which you do not know. With APM, you can effectively manage change within an organization, understand the impact of change, and also manage its compliance. APM is a constant process, be it part of a modernization project or an organization's portfolio management and change control strategy. All applications are in a constant state of change, and during any modernization, things are always in a state of flux. In a modernization project, legacy code is changed, new development is done (often in parallel), and data schemas are changed. When looking into APM tool offerings, consider products that can capture these kinds of changes and provide an active repository, rather than a static view. Ideally, these tools should adhere to emerging technical standards, like those being pioneered by the OMG.

Re-Architecting

Re-architecting is based on the premise that all legacy applications contain invaluable business logic and data relevant to the business, and that these assets should be leveraged in the new system rather than thrown out to rebuild from scratch. Since the modern IT environment elevates much of this logic above the code, using declarative models supported by BPM tools, ESBs, business rules engines, and data integration and access solutions, some of the original technical code can be replaced by these middleware tools to achieve greater agility. The following screenshot shows an example of a system after re-architecture.

The previous example shows what a system would look like, from a higher level, after re-architecture.
We see that this isn't a simple transformation of one code base to another in a one-to-one format. It is also much more than remediation and refactoring of the legacy code into standard Java code. It is a system that fully leverages technologies suited to the required task: for example, leveraging identity management for security, business rules for core business logic, and BPEL for process flow. Thus, re-architecting focuses on recovering and reassembling the processes relevant to the business from a legacy application, while eliminating the technology-specific code. Here, we want to capture the value of the business process that is independent of the legacy code base and move it into a different paradigm. Re-architecting is typically used to handle modernizations that involve changes in architecture, such as the introduction of object orientation and process-driven services.

The advantage that re-architecting has over greenfield development is that re-architecting recognizes that there is information in the application code and surrounding artifacts (for example, DDLs, COPYBOOKs, user training manuals) that is useful as a source for the re-architecting process, such as application process interaction, data models, and workflow. Re-architecting will usually go outside the source code of the legacy application to incorporate concepts like workflow and new functionality that were never part of the legacy application. However, it also recognizes that the legacy application contains key business rules and processes that need to be harvested and brought forward. Some of the important considerations for maximizing re-use by extracting business rules from legacy applications as part of a re-architecture project include:

- Eliminate dead code and environmental specifics; resolve mutually exclusive logic.
- Identify key input/output data (parameters, screen input, DB and file records, and so on).
- Keep in mind that many rules live outside of the code (for example, a screen flow described in a training manual).
- Populate a data dictionary specific to the application/industry context.
- Identify and tag rules based on transaction types and key data, policy parameters, and key results (output data).
- Isolate rules into a tracking repository.
- Combine automation and human review to track relationships, eliminate redundancies, classify and consolidate, and add annotations.

A parallel method of extracting knowledge from legacy applications uses modeling techniques, often based on UML. This method attempts to mine UML artifacts from the application code and related materials, and then create full-fledged models representing the complete application. Key considerations for mining models include:

- A convenient code representation helps to quickly filter out technical details.
- Allow user-selected artifacts to be quickly represented as UML entities.
- Allow the user to add relationships and annotate the objects to assemble a more complete UML model.
- Use external information if possible to refine use cases (screen flows) and activity diagrams; remember that some actors, flows, and so on may not appear in the code.
- Export to an XML-based standard notation to facilitate refinement and forward re-engineering through UML-based tools.

Because modernization with this method leverages the years of investment in the legacy code base, it is much less costly and less risky than starting a new application from ground zero. However, since it does involve change, it does have its risks. As a result, a number of other modernization options have been developed that involve less risk. The next set of modernization options provides a different set of benefits with respect to a fully re-architected SOA environment. The important thing is that these other techniques allow an organization to break the process of reaching the optimal modernization target into a series of phases that lower the overall risk of modernization for the organization.
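To make the rule-harvesting idea concrete, here is a hypothetical example of the end product: a policy decision of the kind typically buried in COBOL IF/EVALUATE logic, isolated as a small, testable Java method. The rule and its thresholds are invented for illustration and do not come from any real system.

```java
// Hypothetical result of business-rule harvesting: the decision logic is
// separated from screen handling, file I/O, and other technology-specific
// code, so it can be tagged, reviewed, and later moved into a rules engine.
public class HarvestedRules {

    // Illustrative rule: "loans above 10,000 for customers with fewer than
    // two years of history require manual review". The thresholds are
    // invented for this example.
    public static boolean requiresManualReview(double amount, int yearsAsCustomer) {
        return amount > 10_000 && yearsAsCustomer < 2;
    }

    public static void main(String[] args) {
        System.out.println(requiresManualReview(15_000, 1)); // prints true
        System.out.println(requiresManualReview(5_000, 1));  // prints false
    }
}
```

Isolating the rule this way is what makes the "tracking repository" step above practical: each harvested rule becomes a named, independently verifiable unit.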
In the following figure, we can see that re-architecture takes a monolithic legacy system and applies technology and process to deliver a highly adaptable modern architecture.

SOA Integration and Enablement

Since SOA integration is the least invasive approach to legacy application modernization, this technique allows legacy components to be used as part of an SOA infrastructure very quickly and with little risk. Further, it is often the first step in the larger modernization process. In this method, the source code remains mostly unchanged (we will talk more about that later) and the application is wrapped using SOA components, thus creating services that can be exposed and registered with an SOA management facility on a new platform, but are implemented via the existing legacy code. The exposed services can then be re-used and combined with the results of other, more invasive modernization techniques such as re-architecting. Using SOA integration, an organization can begin to make use of SOA concepts, including the orchestration of services into business processes, while leaving the legacy application intact. Of course, the appropriate interfaces into the legacy application must exist, and the code behind these interfaces must perform useful functions in a manner that can be packaged as services.

An SOA readiness assessment involves analysis of service granularity, exception handling, transaction integrity and reliability requirements, considerations of response time, message sizes, and scalability, issues of end-to-end messaging security, and requirements for services orchestration and SLA management. Following the assessment, any issues discovered need to be rectified before exposing components as services, and appropriate run-time and lifecycle governance policies need to be created and implemented. It is important to note that there are three tiers where integration can be done: data, screen, and code. So each of the tiers, depending on the state and structure of the code, can be extended with this technique.
As mentioned before, this is often the first step in modernization. In this example, we can see that the legacy systems still stay on the legacy platform. Here, we isolate and expose this information as a business service using legacy adapters. The table below lists important considerations in SOA integration and enablement projects.

Criteria for identifying well-defined services:
- Represent a core enterprise function re-usable by many client applications
- Present a coarse-grained interface
- Single interaction vs. multi-screen flows
- UI, business logic, and data access layers
- Exception handling: returning results without branching to another screen
- Discovering "services" beyond screen flows
- Conversational vs. sync/async calls
- COMMAREA transactions (re-factored to use a reasonable message size)

Security policies and their enforcement:
- RACF vs. LDAP-based or SSO mechanisms
- End-to-end messaging security and authentication, authorization, and auditing

Services integration and orchestration:
- Wrapping and proxying via a middle-tier gateway vs. mainframe-based services
- Who's responsible for input validation?
- Orchestrating "composite" mainframe services
- Supporting bidirectional integration

Quality of Service (QoS) requirements:
- Response time, throughput, scalability
- End-to-end monitoring and SLA management
- Transaction integrity and global transaction coordination
- End-to-end monitoring and tracing

Services lifecycle governance:
- Ownership of service interfaces and the change control process
- Service discovery (repository, tools)
- Orchestration and extension; BPM integration
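The wrapping idea behind SOA enablement can be sketched in a few lines: the legacy code stays untouched behind a coarse-grained facade that exposes a single service operation. LegacyCustomerScreen below is a stand-in for an existing legacy screen or COMMAREA transaction; all names and the record format are hypothetical, invented for this sketch.

```java
// Minimal sketch of SOA enablement by wrapping: the legacy module is left
// unchanged, and a coarse-grained service facade translates between the
// legacy record format and a service-friendly result.
public class LegacyServiceFacade {

    // Stand-in for the untouched legacy module (e.g. a screen-scraped
    // transaction or a COMMAREA call). Its fixed record format is invented.
    static class LegacyCustomerScreen {
        String lookup(String customerId) {
            return "CUST:" + customerId + ";STATUS:ACTIVE";
        }
    }

    private final LegacyCustomerScreen legacy = new LegacyCustomerScreen();

    // Coarse-grained operation: one call returns a complete result,
    // rather than exposing the multi-screen flow underneath.
    public String getCustomerStatus(String customerId) {
        String record = legacy.lookup(customerId);
        // Translate the legacy record format into a plain value the
        // service consumers can use.
        return record.substring(record.indexOf("STATUS:") + "STATUS:".length());
    }

    public static void main(String[] args) {
        System.out.println(new LegacyServiceFacade().getCustomerStatus("42")); // prints ACTIVE
    }
}
```

In a real project, the facade would be generated or hosted by an adapter product and registered with the SOA management facility; the point of the sketch is only that the legacy side is called, not changed.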
Drools JBoss Rules 5.0 Flow (Part 1)

Packt
16 Oct 2009
10 min read
Loan approval service

Loan approval is a complex process starting with the customer requesting a loan. This request comes with information such as the amount to be borrowed, the duration of the loan, and the destination account to which the borrowed amount will be transferred. Only existing customers can apply for a loan. The process starts with validating the request. Upon successful validation, a customer rating is calculated. Only customers with a certain rating are allowed to have loans. The loan is processed by a bank employee. As soon as an approved event is received from a supervisor, the loan is approved and money can be transferred to the destination account. An email is sent to inform the customer about the outcome.

Model

If we look at this process from the domain modeling perspective, in addition to the model that we already have, we'll need a Loan class. An instance of this class will be part of the context of this process. The screenshot above shows the Loan JavaBean for holding loan-related information. The Loan bean defines three properties: amount (of type BigDecimal), destinationAccount (of type Account; if the loan is approved, the amount will be transferred to this account), and durationYears (the period over which the customer will repay the loan).

Loan approval ruleflow

We'll now represent this process as a ruleflow, shown in the following figure. Try to remember this figure, because we'll be referring back to it throughout this article. The figure shows the loan approval process (the loanApproval.rf file). You can use the Ruleflow Editor that comes with the Drools Eclipse plugin to create this ruleflow. The rest of the article is a walk through this ruleflow, explaining each node in more detail. The process starts with the Validate Loan ruleflow group. Rules in this group will check the loan for missing required values and do other, more complex validation.
Each validation rule simply inserts a Message into the knowledge session. The next node, called Validated?, is an XOR-type split node. The ruleflow will continue through the no errors branch if there are no error or warning messages in the knowledge session; the split node constraint for this branch says:

not Message()

Code listing 1: Validated? split node no errors branch constraint (loanApproval.rf file).

For this to work, we need to import the Message type into the ruleflow. This can be done from the Constraint editor: just click on the Imports... button. The import statements are common to the whole ruleflow. Whenever we use a new type in the ruleflow (constraints, actions, and so on), it needs to be imported. The otherwise branch is a "catch all" type branch (it is set to 'always true'). It has a higher priority number, which means that it will be checked after the no errors branch.

The .rf files are pure XML files that conform to a well-formed XSD schema. They can be edited with any XML editor.

Invalid loan application form

If the validation didn't pass, an email is sent to the customer and the loan approval process finishes as Not Valid. This can be seen in the otherwise branch. There are two nodes: Email and Not Valid. Email is a special ruleflow node called a work item.

Email work item

A work item is a node that encapsulates some piece of work. This can be an interaction with another system or some logic that is easier to write using standard Java. Each work item represents a piece of logic that can be reused in many systems. We can also look at work items as a ruleflow alternative to DSLs. By default, Drools Flow comes with various generic work items, for example, Email (for sending emails), Log (for logging messages), Finder (for finding files on a file system), Archive (for archiving files), and Exec (for executing programs/system commands). In a real application, you'd probably want to use a different work item than a generic one for sending an email.
For example, a custom work item that inserts a record into your loan repository. Each work item can take multiple parameters. In the case of Email, these are From, To, Subject, Text, and others. Values for these parameters can be specified at ruleflow creation time or at runtime. By double-clicking on the Email node in the ruleflow, the Custom Work Editor opens (see the following screenshot). Please note that not all work items have a custom editor. In the first tab (not visible), we can specify the recipients and the source email address. In the second tab (visible), we can specify the email's subject and body. If you look closer at the body of the email, you'll notice two placeholders. They have the following syntax: #{placeholder}. A placeholder can contain any MVEL code and has access to all of the ruleflow variables (we'll learn more about ruleflow variables later in this article). This allows us to customize the work item parameters based on runtime conditions. As can be seen from the screenshot above, we use two placeholders: customer.firstName and errorList. customer and errorList are ruleflow variables. The first one represents the current Customer object and the second one is a ValidationReport. When the ruleflow execution reaches this Email work item, these placeholders are evaluated and replaced with the actual values (by calling the toString method on the result).

Fault node

The second node in the otherwise branch of the loan approval process ruleflow is a fault node. A fault node is similar to an end node: it accepts one incoming connection and has no outgoing connections. When the execution reaches this node, a fault is thrown with the given name. We could, for example, register a fault handler that will generate a record in our reporting database. However, we won't register a fault handler, and in that case, the fault simply indicates that this ruleflow finished with an error.

Test setup

We'll now write a test for the otherwise branch.
First, let's set up the test environment. A new session is created in the setup method along with some test data: a valid Customer with one Account is requesting a Loan. The setup method creates a valid loan configuration, and the individual tests can then change this configuration in order to test various exceptional cases.

@Before
public void setUp() throws Exception {
    session = knowledgeBase.newStatefulKnowledgeSession();
    trackingProcessEventListener = new TrackingProcessEventListener();
    session.addEventListener(trackingProcessEventListener);
    session.getWorkItemManager().registerWorkItemHandler(
        "Email", new SystemOutWorkItemHandler());
    loanSourceAccount = new Account();

    customer = new Customer();
    customer.setFirstName("Bob");
    customer.setLastName("Green");
    customer.setEmail("[email protected]");

    Account account = new Account();
    account.setNumber(123456789L);
    customer.addAccount(account);
    account.setOwner(customer);

    loan = new Loan();
    loan.setDestinationAccount(account);
    loan.setAmount(BigDecimal.valueOf(4000.0));
    loan.setDurationYears(2);
}

Code listing 2: Test setup method called before every test execution (DefaultLoanApprovalServiceTest.java file).

A tracking ruleflow event listener is created and added to the knowledge session. This event listener will record the execution path of a ruleflow, storing all of the executed ruleflow nodes in a list. TrackingProcessEventListener overrides the beforeNodeTriggered method and gets the node to be executed by calling event.getNodeInstance(). loanSourceAccount represents the bank's account for sourcing loans. The setup method also registers an Email work item handler. A work item handler is responsible for the execution of the work item (in this case, connecting to the mail server and sending out emails). However, the SystemOutWorkItemHandler implementation that we've used is only a dummy implementation that writes some information to the console. It is useful for our testing purposes.
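To show what a custom handler like the registered Email handler might look like, here is a runnable sketch. In Drools 5, a handler implements org.drools.runtime.process.WorkItemHandler (with executeWorkItem and abortWorkItem methods); the simplified interfaces below are local stand-ins so the sketch runs without the Drools libraries, and the parameter names mirror the Email work item's From/To/Subject parameters described above.

```java
import java.util.Map;

// Sketch of a console-only email work item handler, in the spirit of the
// SystemOutWorkItemHandler used in the test setup. The WorkItem and
// WorkItemHandler interfaces below are simplified stand-ins for the real
// Drools 5 types.
public class EmailHandlerSketch {

    interface WorkItem {                       // stand-in for org.drools.runtime.process.WorkItem
        Map<String, Object> getParameters();
        long getId();
    }

    interface WorkItemHandler {                // stand-in for the Drools handler interface
        void executeWorkItem(WorkItem workItem);
    }

    // Logs instead of talking to a mail server: exactly what we want in tests.
    static class ConsoleEmailHandler implements WorkItemHandler {
        final StringBuilder log = new StringBuilder();

        public void executeWorkItem(WorkItem workItem) {
            log.append("Email to ").append(workItem.getParameters().get("To"))
               .append(": ").append(workItem.getParameters().get("Subject"));
            // A real handler would now complete the work item via the
            // WorkItemManager so the ruleflow can continue past the Email node.
        }
    }

    public static String demo() {
        ConsoleEmailHandler handler = new ConsoleEmailHandler();
        handler.executeWorkItem(new WorkItem() {
            public Map<String, Object> getParameters() {
                return Map.of("To", "bob@example.com", "Subject", "Loan application");
            }
            public long getId() { return 1L; }
        });
        return handler.log.toString();
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints Email to bob@example.com: Loan application
    }
}
```

Registering such a handler under the "Email" name, as in the setup method, is what routes the ruleflow's Email node to it.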
Testing the 'otherwise' branch of the 'Validated?' node

We'll now test the otherwise branch, which sends an email informing the applicant about missing data and ends with a fault. Our test (the following code) will set up a loan request that fails the validation. It will then verify that the fault node was executed and that the ruleflow process was aborted.

@Test
public void notValid() {
  session.insert(new DefaultMessage());
  startProcess();

  assertTrue(trackingProcessEventListener.isNodeTriggered(
      PROCESS_LOAN_APPROVAL, NODE_FAULT_NOT_VALID));
  assertEquals(ProcessInstance.STATE_ABORTED,
      processInstance.getState());
}
Code listing 3: Test method for testing the Validated? node's otherwise branch (DefaultLoanApprovalServiceTest.java file).

By inserting a message into the session, we're simulating a validation error. The ruleflow should end up in the otherwise branch. Next, the test above calls the startProcess method. Its implementation is as follows:

private void startProcess() {
  Map<String, Object> parameterMap = new HashMap<String, Object>();
  parameterMap.put("loanSourceAccount", loanSourceAccount);
  parameterMap.put("customer", customer);
  parameterMap.put("loan", loan);
  processInstance = session.startProcess(
      PROCESS_LOAN_APPROVAL, parameterMap);
  session.insert(processInstance);
  session.fireAllRules();
}
Code listing 4: Utility method for starting the ruleflow (DefaultLoanApprovalServiceTest.java file).

The startProcess method starts the loan approval process. It also sets loanSourceAccount, loan, and customer as ruleflow variables. The resulting process instance is, in turn, inserted into the knowledge session. This will enable our rules to make more sophisticated decisions based on the state of the current process instance. Finally, all of the rules are fired. We're already supplying three variables to the ruleflow; however, we haven't declared them yet. Let's fix this.
Ruleflow variables can be added through Eclipse's Properties editor, as can be seen in the following screenshot (just click on the ruleflow canvas; this gives the focus to the ruleflow itself). Each variable needs a name, a type, and, optionally, a value. The preceding screenshot shows how to set the loan ruleflow variable. Its Type is set to Object and ClassName is set to the full type name, droolsbook.bank.model.Loan. The other two variables are set in a similar manner.

Now back to the test from code listing 3. It verifies that the correct nodes were triggered and that the process ended in the aborted state. The isNodeTriggered method takes the process ID, which is stored in a constant called PROCESS_LOAN_APPROVAL. The method also takes the node ID as its second argument. This node ID can be found in the Properties view after clicking on the fault node. The node ID—NODE_FAULT_NOT_VALID—is a constant of type long defined as a property of this test class.

static final long NODE_FAULT_NOT_VALID = 21;
static final long NODE_SPLIT_VALIDATED = 20;
Code listing 5: Constants that hold the fault and Validated? node IDs (DefaultLoanApprovalServiceTest.java file).

By using the node ID, we can change the node's name and other properties without breaking this test (the node ID is least likely to change). Also, if we're performing bigger refactorings involving node ID changes, we have only one place to update: the test's constants.

Ruleflow unit testing
Drools Flow support for unit testing isn't the best. With every test, we have to run the full process from start to end. We'll make it easier with some helper methods that set up a state exercising different parts of the flow, for example, a loan with a high amount to borrow or a customer with a low rating. Ideally, we should be able to test each node in isolation: simply start the ruleflow at a particular node.
Just set the necessary parameters needed for a particular test and verify that the node executed as expected. Drools support for snapshots may resolve some of these issues; however, we'd have to first create all of the snapshots that we need before executing the individual test methods. Another alternative is to dig deeper into the Drools internal API, but this is not recommended: the internal API can change in the next release without any notice.
Human-readable Rules with Drools JBoss Rules 5.0(Part 2)

Packt
16 Oct 2009
5 min read
Drools Agenda

Before we talk about how to manage rule execution order, we have to understand the Drools Agenda. When an object is inserted into the knowledge session, Drools tries to match this object with all of the possible rules. If a rule has all of its conditions met, its consequence can be executed; we say that the rule is activated. Drools records this event by placing the rule onto its agenda (a collection of activated rules). As you may imagine, many rules can be activated, and also deactivated, depending on what objects are in the rule session. After the fireAllRules method call, Drools picks one rule from the agenda and executes its consequence. This may or may not cause further activations or deactivations, and it continues until the agenda is empty. The purpose of the agenda is to manage the execution order of rules.

Methods for managing rule execution order

The following are the methods for managing the rule execution order (from the user's perspective). They can be viewed as alternatives to ruleflow. All of them are defined as rule attributes.

- salience: This is the most basic one. Every rule has a salience value; by default it is set to 0. Rules with a higher salience value fire first. The problem with this approach is that it is hard to maintain: if we want to add a new rule with some priority, we may have to shift the priorities of existing rules. It is often hard to figure out why a rule has a certain salience, so we have to comment on every salience value. It creates an invisible dependency on other rules.
- activation-group: This used to be called xor-group. When two or more rules with the same activation group are on the agenda, Drools will fire just one of them.
- agenda-group: Every rule has an agenda group. By default it is MAIN; however, it can be overridden. This allows us to partition the Drools Agenda into multiple groups that can be executed separately.

The figure above shows the partitioned Agenda with activated rules.
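To make these attributes concrete, here is a short, hypothetical DRL fragment. The rule names, fact types, and conditions are invented for illustration and are not part of the loan approval example:

```
rule "log every transaction"
    salience 10  // fires before rules with the default salience of 0
  when
    $t : Transaction()
  then
    System.out.println("processing " + $t);
end

rule "approve small loan"
    agenda-group "approval"       // fires only when this group has focus
    activation-group "decision"   // at most one rule from this group fires
  when
    $l : Loan(amount < 5000)
  then
    $l.setApproved(true);
end
```

If another rule declared the same activation-group "decision", firing either rule would cancel the other's activation; the agenda-group keeps both of them from firing at all until that group is given focus.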
The matched rules come in from the left and go onto the Agenda. One rule is chosen from the Agenda at a time and then executed/fired. At runtime, we can set the active agenda group programmatically (through the getAgenda().getAgendaGroup(String agendaGroup).setFocus() method of KnowledgeRuntime) or declaratively, by setting the rule attribute auto-focus to true. When a rule with this attribute set to true is activated, the active agenda group is automatically changed to the rule's agenda group. Drools maintains a stack of agenda groups: whenever the focus is set to a different agenda group, Drools adds this group onto the stack. When there are no rules to fire in the current agenda group, Drools pops from the stack and sets the agenda group to the next one. Agenda groups are similar to ruleflow groups with the exception that ruleflow groups are not stacked. Note that only one instance of each of these attributes is allowed per rule (for example, a rule can only be in one ruleflow-group; however, it can also define salience within that group).

Ruleflow

As we've already said, ruleflow can externalize the execution order from the rule definitions. Rules just define a ruleflow-group attribute, which is similar to agenda-group; it is then used to define the execution order. A simple ruleflow (in the example.rf file) is shown in the following screenshot:

The preceding screenshot shows a ruleflow opened with the Drools Eclipse plugin. On the left-hand side are the components that can be used when building a ruleflow; on the right-hand side is the ruleflow itself. It has a Start node, which goes to a ruleflow group called Group 1. After that group finishes executing, an Action is executed, the flow continues to another ruleflow group called Group 2, and finally it finishes at an End node. Ruleflow definitions are stored in a file with the .rf extension. This file has an XML format and defines the structure and layout for presentational purposes.
Another useful rule attribute for managing which rules can be activated is lock-on-active. It is a special form of the no-loop attribute and can be used in combination with ruleflow-group or agenda-group. If it is set to true and the rule's agenda/ruleflow group becomes active/focused, any further activations of the rule are discarded until a different group becomes active. Please note that activations that are already on the agenda will be fired.

A ruleflow consists of various nodes. Each node has a name, type, and other type-specific attributes. You can see and change these attributes by opening the standard Properties view in Eclipse while editing the ruleflow file. The basic node types are:

- Start
- End
- Action
- RuleFlowGroup
- Split
- Join

They are discussed in the following sections.

Start
This is the initial node; the flow begins here. Each ruleflow needs one start node. This node has no incoming connections, just one outgoing connection.

End
This is a terminal node. When execution reaches this node, the whole ruleflow is terminated (all of the active nodes are canceled). This node has one incoming connection and no outgoing connections.

Action
Used to execute some arbitrary block of code. It is similar to a rule consequence: it can reference global variables and can specify a dialect.

RuleFlowGroup
This node will activate a ruleflow group, as specified by its RuleFlowGroup attribute. It should match the value of the ruleflow-group rule attribute.
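As an illustration only, a saved ruleflow definition looks roughly like the following XML. This is an approximate, hand-written sketch: the exact element names and schema differ between Drools versions, so treat it as a shape, not a reference.

```xml
<!-- Approximate sketch of an .rf file; element and attribute
     names are assumptions, not taken from a real schema. -->
<process xmlns="http://drools.org/drools-5.0/process"
         type="RuleFlow" name="example" id="example"
         package-name="com.example">
  <nodes>
    <start id="1" name="Start" />
    <ruleSet id="2" name="Group 1" ruleFlowGroup="group1" />
    <end id="3" name="End" />
  </nodes>
  <connections>
    <connection from="1" to="2" />
    <connection from="2" to="3" />
  </connections>
</process>
```

The graphical layout (node positions) is stored alongside this structure purely for presentational purposes, as the article notes.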
Getting Started with Scratch 1.4 (Part 2)

Packt
16 Oct 2009
7 min read
Add sprites to the stage

In the first part, we learned that if we want something done in Scratch, we tell a sprite by using blocks in the scripts area. A single sprite can't be responsible for carrying out all our actions, which means we'll often need to add sprites to accomplish our goals. We can add sprites to the stage in one of the following four ways: paint new sprite, choose new sprite from file, get a surprise sprite, or by duplicating a sprite. Duplicating a sprite is not in the scope of this article. The buttons to insert a new sprite using the other three methods are directly above the sprites list.

Let's be surprised. Click on get surprise sprite (the button with the "?" on it). If the second sprite covers up the first sprite, grab one of them with your mouse and drag it around the screen to reposition it. If you don't like the sprite that popped up, delete it by selecting the scissors from the tool bar and clicking on the sprite. Then click on get surprise sprite again. Each sprite has a name that displays beneath its icon; see the previous screenshot for an example. Right now, our sprites are cleverly named Sprite1 and Sprite2.

Get new sprites

The paint new sprite option allows you to draw a sprite using the Paint Editor when you need a sprite that you can't find anywhere else. You can also create sprites using third-party graphics programs, such as Adobe Photoshop, GIMP, and Tux Paint. If you create a sprite in a different program, then you need to import the sprite using the choose new sprite from file option. Scratch also bundles many sprites with the installation, and the choose new sprite from file option will allow you to select one of the included files. The bundled sprites are categorized into Animals, Fantasy, Letters, People, Things, and Transportation, as seen in the following screenshot:

If you look at the screenshot carefully, you'll notice the folder path lists Costumes, not sprites. A costume is really a sprite.
If you want to be surprised, then use the get surprise sprite option to add a sprite to the project. This option picks a random entry from the gallery of bundled sprites. We can also add a new sprite by duplicating a sprite that's already in the project: right-click on the sprite in the sprites list and choose duplicate (command C on Mac). As the name implies, this creates a clone of the sprite. The method we use to add a new sprite depends on what we are trying to do and what we need for our project.

Time for action – spin sprite spin

Let's get our sprites spinning:

1. To start, click on Sprite1 from the sprites list. This will let us edit the script for Sprite1.
2. From the Motion palette, drag the turn clockwise 15 degrees block into the script for Sprite1 and snap it in place after the if on edge, bounce block.
3. Change the value on the turn block to 5.
4. From the sprites list, click on Sprite2.
5. From the Motion palette, drag the turn clockwise 15 degrees block into the scripts area.
6. Find the repeat 10 block from the Control palette and snap it around the turn clockwise 15 degrees block.
7. Wrap the script in the forever block.
8. Place the when space key pressed block on top of the entire stack of blocks.
9. From the Looks palette, snap the say hello for 2 secs block onto the bottom of the repeat block and above the forever block.
10. Change the value on the repeat block to 100.
11. Change the value on the turn clockwise 15 degrees block to 270.
12. Change the value on the say block to I'm getting dizzy!
13. Press the Space bar and watch the second sprite spin.
14. Click the flag and set the second sprite on a trip around the stage.

What just happened?

We have two sprites on the screen acting independently of each other. It seems simple enough, but let's step through our script. Our cat got bored bouncing in a straight line across the stage, so we introduced some rotation. Now as the cat walked, it turned five degrees each time the blocks in the forever loop ran.
This caused the cat to walk in an arc. As the cat bounced off the stage, it got a new trajectory. We told Sprite2 to turn 270 degrees 100 consecutive times. Then the sprite stopped for two seconds and displayed the message, "I'm getting dizzy!" Because the script was wrapped in a forever block, Sprite2 started tumbling again. We used the space bar as the control to set Sprite2 in motion. However, you noticed that Sprite1 did not start until we clicked the flag. That's because we programmed Sprite1 to start when the flag was clicked.

Have a go hero

Make Sprite2 less spastic. Instead of turning 270 degrees, try a smaller value, such as 5.

Sometimes we need inspiration

So far, we've had a cursory introduction to Scratch, and we've created a few animations to illustrate some basic concepts. However, now is a good time to pause and talk about inspiration. Sometimes we learn by examining the work of other people and adapting that work to create something new, which leads to creative solutions. When we want to see what other people are doing with Scratch, we have two places to turn. First, our Scratch installation contains dozens of sample projects. Second, the Scratch web site at http://scratch.mit.edu maintains a thriving community of Scratchers.

Browse Scratch's projects

Scratch includes several categories of projects: Animation, Games, Greetings, Interactive Art, Lists, Music and Dance, Names, Simulations, Speak up, and Stories.

Time for action – spinner

Let's dive right in. From the Scratch interface, click the Open button to display the Open Project dialog box, as seen in the following screenshot. Click on the Examples button. Select Simulations and click OK. Select Spinner and click OK to load the Spinner project. Follow the instructions on the screen and spin the arrow by clicking on it. We're going to edit the spinner wheel. From the sprites list, click on Stage. From the scripts area, click the Backgrounds tab.
Click Edit on background number 1 to open the Paint Editor. Select a unique color from the color palette, such as purple. Click on the paint bucket from the toolbar, then click on one of the triangles in the circle to change its color (the paint bucket is highlighted in the following screenshot). Click OK to return to our project.

What just happened?

We opened a community project called Spinner that came bundled with Scratch. When we clicked on the arrow, it spun and randomly selected a color from the wheel. We got our first look at a project that uses a background for the stage, and we modified the background using Scratch's built-in image editor. The Paint Editor in Scratch provides a basic but functional image editing environment. Using the Paint Editor, we can create a new sprite or background and modify an existing sprite or background. This can be useful if we are working with a sprite or background that someone else has created.

Costume versus background

A costume defines the look of a sprite, while a background defines the look of the stage. A sprite may have multiple costumes, just as the stage can have multiple backgrounds. When we want to work with the backgrounds on the stage, we use the switch to background and next background blocks. We use the switch to costume and next costume blocks when we want to manipulate a sprite's costume. Actually, if you look closely at the available Looks blocks when you're working with a sprite, you'll realize that you can't select backgrounds. Likewise, if you're working with the stage, you can't select costumes.
Getting Started with Scratch 1.4 (Part 1)

Packt
16 Oct 2009
6 min read
Before we create any code, let's make sure we speak the same language.

The interface at a glance

When we encounter software that's unfamiliar to us, we often wonder, "Where do I begin?" Together, we'll answer that question and click through some important sections of the Scratch interface so that we can quickly start creating our own projects. Now, open Scratch and let's begin.

Time for action – first step

When we open Scratch, we notice that the development environment roughly divides into three distinct sections, as seen in the following screenshot. Moving from left to right, we have the following sections in sequential order:

- Blocks palette
- Script editor
- Stage

Let's see if we can get our cat moving:

1. In the blocks palette, click on the Looks button.
2. Drag the switch to costume block onto the scripts area.
3. Now, in the blocks palette, click on the Control button.
4. Drag the when flag clicked block to the scripts area and snap it on top of the switch to costume block, as illustrated in the following screenshot.

How to snap two blocks together? As you drag a block onto another block, a white line displays to indicate that the block you are dragging can be added to the script. When you see the white line, release your mouse to snap the block in place.

5. In the scripts area, click on the Costumes tab to display the sprite's costumes.
6. Click on costume2 to change the sprite on the stage. Now, click back on costume1 to change how the sprite displays on the stage.
7. Directly beneath the stage is a sprites list. The current list displays Sprite1 and Stage. Click on the sprite named Stage and notice that the scripts area changes.
8. Click back on Sprite1 in the sprites list and again note the change to the scripts area.
9. Click on the flag above the stage to set our first Scratch program in motion. Watch closely, or you might miss it.

What just happened?

Congratulations! You created your first Scratch project. Let's take a closer look at what we did just now.
As we clicked through the blocks palette, we saw that the available blocks changed depending on whether we chose Motion, Looks, or Control. Each set of blocks is color-coded to help us easily identify them in our scripts. The first block we added to the script instructed the sprite to display costume2. The second block provided a way to control our script by clicking on the flag. Blocks with a smooth top are called hats in Scratch terminology because they can be placed only at the top of a stack of blocks. Did you look closely at the blocks as you snapped the control block into the looks block? The bottom of the when flag clicked block had a protrusion like a puzzle piece that fits the indent on the top of the switch to costume block. As children, most of us probably have played a game where we needed to put the round peg into the round hole. Building a Scratch program is just that simple. We see instantly how one block may or may not fit into another block. Stack blocks have indents on top and bumps on the bottom that allow blocks to lock together to form a sequence of actions that we call a script. A block depicting its indent and bump can be seen in the following screenshot: When we clicked on the Costumes tab, we learned that our cat had two costumes or appearances. Clicking on the costume caused the cat on the stage to change its appearance. As we clicked around the sprites list, we discovered our project had two sprites: a cat and a stage. And the script we created for the cat didn't transfer to the stage. We finished the exercise by clicking on the flag. The change was subtle, but our cat appeared to take its first step when it switched to costume2. Basics of a Scratch project Inside every Scratch project, we find the following ingredients: sprites, costumes, blocks, scripts, and a stage. It's how we mix the ingredients with our imagination that creates captivating stories, animations, and games. 
Sprites bring our program to life, and every project has at least one. Throughout the book, we'll learn how to add and customize sprites. A sprite wears a costume; change the costume and you change the way the sprite looks. If the sprite happens to be the stage, the costume is known as a background. Blocks are just categories of instructions that include motion, looks, sound, pen, control, sensing, operators, and variables. Scripts define a set of blocks that tell a sprite exactly what to do. Each block represents an instruction or piece of information that affects the sprite in some way.

We're all actors on Scratch's stage

Think of each sprite in a Scratch program as an actor. Each actor walks onto the stage and recites a set of lines from the script. How each actor interacts with another actor depends on the words the director chooses. On Scratch's stage, every object, even the stone in the corner, is a sprite capable of contributing to the story. As directors, we have full creative control.

Time for action – save your work

It's a good practice to get in the habit of saving your work. Save your work early, and save it often:

1. To save your new project, click the disk icon at the top of the Scratch window or click File | Save As. A Save Project dialog box opens and asks you for a location and a New Filename.
2. Enter some descriptive information for your project by supplying the Project author and notes About this project in the fields provided.

Set the cat in motion

Even though our script contains only two blocks, we have a problem. When we click on the flag, the sprite switches to a different costume and stops. If we try to click on the flag again, nothing appears to happen, and we can't get back to the first costume unless we go to the Costumes tab and select costume1. That's not fun. In our next exercise, we're going to switch between both costumes and create a lively animation.

Drools JBoss Rules 5.0 Flow (Part 2)

Packt
16 Oct 2009
8 min read
Transfer Funds work item

We'll now jump almost to the end of our process. After a loan is approved, we need a way of transferring the specified sum of money to the customer's account. This can be done with rules or, even better, with pure Java, as this task is procedural in nature. We'll create a custom work item so that we can easily reuse this functionality in other ruleflows. Note that if it was a once-off task, it would probably be better suited to an action node. The Transfer Funds node in the loan approval process is a custom work item. A new custom work item can be defined using the following four steps (we'll see how they are accomplished later on):

Create a work item definition. This will be used by the Eclipse ruleflow editor and by the ruleflow engine to set and get parameters. For example, the following is an extract from the default WorkDefinitions.conf file that comes with Drools. It describes the 'Email' work definition. The configuration is written in MVEL, which allows one to construct complex object graphs in a very concise format. This file contains a list of maps—List<Map<String, Object>>. Each map defines the properties of one work definition. The properties are: name, parameters (that this work item works with), displayName, icon, and customEditor (the last three are used when displaying the work item in the Eclipse ruleflow editor). A custom editor is opened after double-clicking on the ruleflow node.

import org.drools.process.core.datatype.impl.type.StringDataType;
[
  [
    "name" : "Email",
    "parameters" : [
      "From" : new StringDataType(),
      "To" : new StringDataType(),
      "Subject" : new StringDataType(),
      "Body" : new StringDataType()
    ],
    "displayName" : "Email",
    "icon" : "icons/import_statement.gif",
    "customEditor" : "org.drools.eclipse.flow.common.editor.editpart.work.EmailCustomEditor"
  ]
]
Code listing 13: Excerpt from the default WorkDefinitions.conf file.

A work item's parameters property is a map of parameter names and their value wrappers.
The value wrapper must implement the org.drools.process.core.datatype.DataType interface.

Register the work definitions with the knowledge base configuration. This will be shown in the next section.

Create a work item handler. This handler represents the actual behavior of a work item. It will be invoked whenever the ruleflow execution reaches this work item node. All of the handlers must implement the org.drools.runtime.process.WorkItemHandler interface. It defines two methods: one for executing the work item and another for aborting the work item. Drools comes with some default work item handler implementations, for example, a handler for sending emails: org.drools.process.workitem.email.EmailWorkItemHandler. This handler needs a working SMTP server, which must be set through the setConnection method before registering the work item handler with the work item manager (next step). Another default work item handler was shown in code listing 2 (in the first part): SystemOutWorkItemHandler.

Register the work item handler with the work item manager.

After reading this, you may ask: why doesn't the work item definition also specify the handler? It is because a work item can have one or more work item handlers that can be used interchangeably. For example, in a test case, we may want to use a different work item handler than in the production environment. We'll now follow this four-step process and create a Transfer Funds custom work item.

Work item definition

Our transfer funds work item will have three input parameters: source account, destination account, and the amount to transfer. Its definition is as follows:

import org.drools.process.core.datatype.impl.type.ObjectDataType;
[
  [
    "name" : "Transfer Funds",
    "parameters" : [
      "Source Account" : new ObjectDataType("droolsbook.bank.model.Account"),
      "Destination Account" : new ObjectDataType("droolsbook.bank.model.Account"),
      "Amount" : new ObjectDataType("java.math.BigDecimal")
    ],
    "displayName" : "Transfer Funds",
    "icon" : "icons/transfer.gif"
  ]
]
Code listing 14: Work item definition from the BankingWorkDefinitions.conf file.

The Transfer Funds work item definition from the code above declares the usual properties. It doesn't have a custom editor, as was the case with the email work item. All of the parameters are of the ObjectDataType type. This is a wrapper that can wrap any type; in our case, we are wrapping Account and BigDecimal types. We've also specified an icon that will be displayed in the ruleflow editor's palette and in the ruleflow itself. The icon should be of the size 16x16 pixels.

Work item registration

First, make sure that the BankingWorkDefinitions.conf file is on your classpath. We now have to tell Drools about our new work item. This can be done by creating a drools.rulebase.conf file with the following contents:

drools.workDefinitions = WorkDefinitions.conf BankingWorkDefinitions.conf
Code listing 15: Work definitions setting from the drools.rulebase.conf file (all on one line).

When Drools starts up, it scans the classpath for configuration files. Configuration specified in the drools.rulebase.conf file will override the default configuration. In this case, only the drools.workDefinitions setting is being overridden. We already know that the WorkDefinitions.conf file contains the default work items such as email and log. We want to keep those and just add ours. As can be seen from the code listing above, the drools.workDefinitions setting accepts a list of configurations, which must be separated by a space. When we now open the ruleflow editor in Eclipse, the ruleflow palette should contain our new Transfer Funds work item. If you want to know more about the file-based configuration resolution process, you can look into the org.drools.util.ChainedProperties class.

Work item handler

Next, we'll implement the work item handler. It must implement the org.drools.runtime.process.WorkItemHandler interface that defines two methods: executeWorkItem and abortWorkItem. The implementation is as follows:

/**
 * Work item handler responsible for transferring an amount from
 * one account to another using the bankingService.transfer method.
 * Input parameters: 'Source Account', 'Destination Account'
 * and 'Amount'.
 */
public class TransferWorkItemHandler implements WorkItemHandler {
  BankingService bankingService;

  public void executeWorkItem(WorkItem workItem,
      WorkItemManager manager) {
    Account sourceAccount = (Account) workItem
        .getParameter("Source Account");
    Account destinationAccount = (Account) workItem
        .getParameter("Destination Account");
    BigDecimal sum = (BigDecimal) workItem
        .getParameter("Amount");
    try {
      bankingService.transfer(sourceAccount,
          destinationAccount, sum);
      manager.completeWorkItem(workItem.getId(), null);
    } catch (Exception e) {
      e.printStackTrace();
      manager.abortWorkItem(workItem.getId());
    }
  }

  /**
   * Does nothing, as this work item cannot be aborted.
   */
  public void abortWorkItem(WorkItem workItem,
      WorkItemManager manager) {
  }
}
Code listing 16: Work item handler (TransferWorkItemHandler.java file).

The executeWorkItem method retrieves the three declared parameters and calls the bankingService.transfer method (the implementation of this method won't be shown). If all went OK, the manager is notified that this work item has been completed. It needs the ID of the work item and, optionally, a result parameter map; in our case, it is set to null. If an exception happens during the transfer, the manager is told to abort this work item. The abortWorkItem method on our handler doesn't do anything, because this work item cannot be aborted. Please note that the work item handler must be thread-safe. Many ruleflow instances may reuse the same work item instance.
Work item handler registration

The transfer work item handler can be registered with a WorkItemManager as follows:

TransferWorkItemHandler transferHandler =
    new TransferWorkItemHandler();
transferHandler.setBankingService(bankingService);
session.getWorkItemManager().registerWorkItemHandler(
    "Transfer Funds", transferHandler);
Code listing 17: TransferWorkItemHandler registration (DefaultLoanApprovalServiceTest.java file).

A new instance of this handler is created and the banking service is set. Then it is registered with the WorkItemManager of a session. Next, we need to 'connect' this work item into our ruleflow, that is, set its parameters once it is executed. We need to set the source/destination account and the amount to be transferred. We'll use the in-parameter mappings of Transfer Funds to set these parameters. As we can see, the Source Account parameter is mapped to the loanSourceAccount ruleflow variable, the Destination Account parameter is set to the destination account of the loan, and the Amount parameter is set to the loan amount.

Testing the transfer work item

This test will verify that the Transfer Funds work item is correctly executed with all of the parameters set and that it calls the bankingService.transfer method with the correct parameters. For this test, the bankingService service will be mocked with the jMock library (jMock is a lightweight mock object library for Java; more information can be found at http://www.jmock.org/). First, we need to set up the banking service mock object in the following manner:

mockery = new JUnit4Mockery();
bankingService = mockery.mock(BankingService.class);
Code listing 18: jMock setup of the bankingService mock object (DefaultLoanApprovalServiceTest.java file).

Next, we can write our test. We are expecting one invocation of the transfer method with loanSourceAccount and the loan's destination and amount properties.
Then the test will set up the transfer work item as in code listing 17, start the process, and approve the loan (more about this is discussed in the next section). The test also verifies that the Transfer Funds node has been executed. The test method's implementation is as follows:

@Test
public void transferFunds() {
  mockery.checking(new Expectations() {
    {
      one(bankingService).transfer(loanSourceAccount,
          loan.getDestinationAccount(), loan.getAmount());
    }
  });
  setUpTransferWorkItem();
  setUpLowAmount();
  startProcess();
  approveLoan();

  assertTrue(trackingProcessEventListener.isNodeTriggered(
      PROCESS_LOAN_APPROVAL, NODE_WORK_ITEM_TRANSFER));
}

Code listing 19: Test for the Transfer Funds work item (DefaultLoanApprovalServiceTest.java file).

The test should execute successfully.
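To make the in-parameter mappings described above concrete: the work item that the handler receives carries its parameters as a plain name-to-value map, keyed by the parameter names used in the ruleflow. The following sketch builds such a map by hand; the class name, the string stand-ins for the Account objects, and the sample amount are all invented purely for illustration.

```java
import java.math.BigDecimal;
import java.util.HashMap;
import java.util.Map;

public class TransferParametersSketch {

    // Builds the kind of name-to-value parameter map that the
    // in-parameter mappings populate before executeWorkItem runs.
    // Account objects are simplified to plain strings here.
    public static Map<String, Object> transferParameters() {
        Map<String, Object> parameters = new HashMap<>();
        parameters.put("Source Account", "loanSourceAccount");
        parameters.put("Destination Account", "loanDestinationAccount");
        parameters.put("Amount", new BigDecimal("2000"));
        return parameters;
    }

    public static void main(String[] args) {
        // The handler reads each entry back with workItem.getParameter.
        System.out.println(transferParameters().get("Amount")); // 2000
    }
}
```

The handler in code listing 16 simply reads these three keys back and casts them to the expected types, which is why the names in the mapping and in the handler must match exactly.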

Packt
16 Oct 2009
2 min read

A Primer to AGI: Asterisk Gateway Interface

How does AGI work?

Let's examine the following diagram:

As the diagram illustrates, an AGI script communicates with Asterisk via two standard data streams: STDIN (standard input) and STDOUT (standard output). From the AGI script's point of view, any input coming from Asterisk is read from STDIN, while any output to Asterisk is written to STDOUT.

The idea of using the STDIN/STDOUT data streams with applications isn't a new one, even for a junior-level programmer. Think of it as reading any input from Asterisk with a read directive and writing output to Asterisk with a print or echo directive. Looked at in this simple manner, it is clear that AGI scripts can be written in any scripting or programming language, ranging from BASH scripting, through PERL/PHP scripting, to C/C++ programs performing the same task.

Let's now examine how an AGI script is invoked from within the Asterisk dialplan:

exten => _X.,1,AGI(some_script_name.agi,param1,param2,param3)

As you can see, the invocation is similar to the invocation of any other Asterisk dialplan application. However, there is one major difference between a regular dialplan application and an AGI script: the resources the AGI script consumes. While an internal application consumes a well-known set of resources from Asterisk, an AGI script simply hands control over to an external process. Thus, the resources required to execute the external AGI script are unknown, while at the same time Asterisk consumes resources to manage the execution of the AGI script. OK, so BASH isn't much of a resource hog, but what about Java? This means that the choice of programming language for your AGI scripts is important. Choosing the wrong programming language can often lead to slow systems and, in many cases, non-operational systems.
While one may argue about the extent to which the underlying programming language affects the performance of your AGI application, it is imperative to learn the impact of each choice. To be more exact, it is not the language itself, but the technology of the programming language's runtime, that matters most. The following table distinguishes between three programming language families and their applicability to AGI development.
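Regardless of the language chosen, the first thing every AGI script does is read the block of "key: value" environment lines that Asterisk sends on STDIN, terminated by a blank line. As an illustration of that protocol, independent of any AGI library (the class and variable names here are invented), the parsing step might look like this in Java:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class AgiEnvironment {

    // Parses the "agi_*: value" header lines that Asterisk sends on
    // STDIN when an AGI script starts. Parsing stops at the first
    // empty line, which marks the end of the environment block.
    public static Map<String, String> parse(List<String> lines) {
        Map<String, String> vars = new HashMap<>();
        for (String line : lines) {
            if (line.trim().isEmpty()) {
                break; // blank line ends the environment block
            }
            int colon = line.indexOf(':');
            if (colon > 0) {
                String key = line.substring(0, colon).trim();
                String value = line.substring(colon + 1).trim();
                vars.put(key, value);
            }
        }
        return vars;
    }

    public static void main(String[] args) {
        Map<String, String> env = parse(List.of(
                "agi_request: some_script_name.agi",
                "agi_channel: SIP/1000-00000001",
                "",
                "ignored: after the blank line"));
        System.out.println(env.get("agi_request")); // some_script_name.agi
        System.out.println(env.get("agi_channel")); // SIP/1000-00000001
    }
}
```

In a real script the lines would of course come from System.in rather than a prepared list; the list is used here only so the sketch is self-contained.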

Packt
16 Oct 2009
11 min read

Flex 101 with Flash Builder 4: Part 1

The article is intended for developers who have never used Flex before and would like to work through a “Hello World” kind of tutorial. The article does not aim to cover Flex and FB4 in detail, but rather focuses on the mechanics of FB4 and on getting an application running with minimal effort. For developers familiar with Flex and the predecessor to Flash Builder 4 (Flex Builder 2 or 3), it contains an introduction to FB4 and some differences in the way you go about building Flex applications using FB4. Even if you have not programmed before and are looking at understanding how to start developing applications, this will serve as a good starting point.

The Flex Ecosystem

The Flex ecosystem is a set of libraries, tools, languages, and a deployment runtime that provides an end-to-end framework for designing, developing, and deploying RIAs. Together, these are branded as part of the Flash platform. In its latest release, Flex 4, special effort has been put into the designer-to-developer workflow, letting graphic designers address layout, skinning, effects, and the general look and feel of your application, with developers then taking over to address the application logic, events, and so on. To understand this at a high level, take a look at the diagram shown below. This is a very simplified diagram, and the intention is to project a 10,000 ft view of the development, compilation, and execution process.

Let us understand the diagram now: The developer will typically work in the Flash Builder application. Flash Builder is the Integrated Development Environment (IDE) that provides an environment for coding, compiling, and running/debugging your Flex-based applications. Your Flex application will typically consist of MXML and ActionScript code. ActionScript is an ECMAScript-compatible object-oriented language, whereas MXML is an XML-based markup language. Using MXML you can define and lay out your visual components, such as buttons, combo boxes, data grids, and others.
Your application logic will typically be coded inside ActionScript classes and methods. While coding your Flex application, you will make use of the Flex framework classes, which provide most of the core functionality. Additional libraries, such as the Flex charting libraries and third-party components, can be used in your application too. Flash Builder compiles all of this into byte code that can be executed inside the Flash Player. Flash Player is the runtime host that executes your application. This is a high-level introduction to the ecosystem, and as we work through the samples later in the article, things will start falling into place.

Flash Builder 4

Flash Builder is the new name for the development IDE previously known as Flex Builder. The latest release is 4, and it is currently in public beta. Flash Builder 4 is based on the Eclipse IDE, so if you are familiar with Eclipse-based tools, you will be able to navigate your way quite easily. Flash Builder 4, like Flex Builder 3 before it, is a commercial product and you need to purchase a development license. FB4 is currently in public beta and is available as a 30-day evaluation. Through the rest of the article, we will use FB4 exclusively to build and run the sample applications. Let us now take a look at setting up FB4.

Setting up your Development Environment

To set up Flash Builder 4, follow these steps:

The first step should be installing Flash Player 10 on your system. We will be developing with the Flex 4 SDK that comes along with Flash Builder 4, and it requires Flash Player 10. You can download the latest version of Flash Player from here: http://www.adobe.com/products/flashplayer/

Download Flash Builder 4 Public Beta from http://labs.adobe.com/technologies/flashbuilder4/. The page is shown below:

After you download, run the installer program and proceed with the rest of the installation.

Launch the Adobe Flash Builder Beta.
It will first prompt you with a message that it is a trial version, as shown below. To continue in evaluation mode, select the option highlighted above and click Next. This will launch the Flash Builder IDE.

Let us start coding with the Flash Builder 4 IDE. We will stick to tradition and write the “Hello World” application.

Hello World using Flash Builder 4

In this section, we will develop a basic Hello World application. While the application does not do much, it will help you get comfortable with the Flash Builder IDE.

Launch the Flash Builder IDE. We will be creating a Flex Project; Flash Builder will help us create the project that will contain all our files. To create a new Flex Project, click on File → New → Flex Project as shown below:

This will bring up a dialog in which you will need to specify more details about the Flex Project that you plan to develop. The dialog is shown below:

You will need to provide at least the following information:

Project Name: This is the name of your project. Enter a name that you want over here. In our case, we have named our project MyFirstFB4App.

Application Type: We can develop both a web version and a desktop version of our application using Flash Builder. The web application will run inside a web browser and execute within the Flash Player plug-in. We will go with the Web option here. The desktop application runs inside the Adobe Integrated Runtime environment and can have more desktop-like features; we will skip that option for now.

We will let the other options remain as they are. We will use the Flex 4.0 SDK, and since we are currently not integrating with any server-side layer, we will leave that option as None/Other. Click on Finish at this point to create your Flex Project.

This will create a main application file called MyFirstFB4App.mxml, as shown below. We will come back to our coding a little later, but first we must familiarize ourselves with the Flash Builder IDE.
Let us first look at the Package Explorer to understand the files that have been created for the Flex Project. The screenshot is shown below:

The project consists of the main source file MyFirstFB4App.mxml. This is the main application file or, in other words, the bootstrap. All your source files (MXML and ActionScript code, along with assets such as images) should go under the src folder. They can optionally be placed in packages too.

The Flex 4.0 framework consists of several libraries that you compile your code against. You will end up using its framework code, components (visual and non-visual), and other classes. These classes are packaged in library files with the .swc extension. A list of library files is shown above; you do not typically need to do anything with them. Optionally, you can also use third-party components written by other companies and developers that are not part of the Flex framework. These libraries are packaged as .swc files too, and they can be placed in the libs folder as shown in the previous screenshot.

The typical workflow is to write and compile your code, that is, build your project. If your build is successful, the compiled output is generated in the bin-debug folder. When you deploy your application to a web server, you will need to pick up the contents of this folder. We will come to that a little later. The html-template folder contains some boilerplate code, including the container HTML page from which your compiled application will be referenced. It is possible to customize this, but we will not discuss that for now.

Double-click the MyFirstFB4App.mxml file. This is our main application file. The code listing is given below:

<?xml version="1.0" encoding="utf-8"?>
<s:Application minWidth="1024" minHeight="768">
</s:Application>

As discussed before, you will typically write one or more MXML files that will contain your visual components (although there can be non-visual components also).
By visual components, we mean controls such as button, combo box, list, tree, and others. An MXML file can also contain layout components and containers that help you lay out your design as per the application screen design. To view the components you can place on the main application canvas, select the Design view as shown below:

Have a look at the lower half of the left pane. You will see the Components tab, as shown below, which addresses most needs of your application's visual design. Click on the Controls tree node as shown below. You will see several controls that you can use; from these, we will use the Button control for this application. Simply select the Button control and drag it to the Design view canvas as shown below:

This will drop an instance of the Button control on the Design view as shown below:

Select the Button to see its Properties panel, as shown below. The Properties panel is where you can set several attributes of the control at design time. In case the Properties panel is not visible, you can get to it by selecting Window → Properties from the main menu.

In the Properties panel, we can change several key attributes. All controls can be uniquely identified and addressed in your code via the ID attribute. This is a unique name that you need to provide. Go ahead and give it some meaningful name; in our case, we name it btnSayHello. Next, we can change the label so that instead of Button, it displays a message, for example, Say Hello.

Finally, we want to wire some code so that if the button is clicked, we can perform some action, such as displaying a message box saying Hello World. To do that, click the icon next to the On click edit field as shown below. It will offer you two options; select the option Generate Event Handler. This will generate the code and switch to the Source view. The code is listed below for your reference.
<?xml version="1.0" encoding="utf-8"?>
<s:Application minWidth="1024" minHeight="768">
  <fx:Script>
    <![CDATA[
      protected function btnSayHello_clickHandler(event:MouseEvent):void
      {
        // TODO Auto-generated method stub
      }
    ]]>
  </fx:Script>
  <s:Button x="17" y="14" label="Button" id="btnSayHello"
      click="btnSayHello_clickHandler(event)"/>
</s:Application>

There are a few things to note here. As mentioned, most of your application logic will be written in ActionScript, and that is exactly what Flash Builder has generated for you. All such code is typically added inside a scripting block marked with the <fx:Script> tag. You can place your ActionScript methods over here so that they can be used by the rest of the application.

When we clicked on Generate Event Handler, Flash Builder generated the event handler code for us. This code is in ActionScript and was appropriately placed inside the <fx:Script> block. If you look at the code, you can see that it has added a function that is invoked when the click event is fired on the button. The method is btnSayHello_clickHandler and, as you can see, it is an empty method, that is, it has no implementation yet.

Let us run the application now to see what it looks like. To run the application, click on the Run icon in the main toolbar of Flash Builder. This will launch the web application as shown below. Clicking the Say Hello button will not do anything at this point, since there is no code written inside the handler, as we saw above.

To display the message box, we add the code shown below (only the Script section is shown):

<fx:Script>
  <![CDATA[
    import mx.controls.Alert;
    protected function btnSayHello_clickHandler(event:MouseEvent):void
    {
      Alert.show("Hello World");
    }
  ]]>
</fx:Script>

We use one of the classes (called Alert) from the Flex framework. As in any other language, we need to specify which package the class comes from so that the compiler can resolve it.
The Alert class belongs to the mx.controls package, and it has a static method called show(), which takes a single parameter of type String. This String parameter is the message to be displayed; in our case it is "Hello World". To run this, press Ctrl+S to save your file, or choose File → Save from the main menu, and click on the Run icon in the main toolbar. This will launch the application and, on clicking the Say Hello button, you will see the Hello World alert window as shown below.
Packt
16 Oct 2009
4 min read

Asterisk Gateway Interface Scripting with PHP

PHP-CLI vs PHP-CGI

Most Linux distributions include both versions of PHP when installed, especially if you are using a modern distribution such as CentOS or Mandriva. When writing AGI scripts with PHP, it is imperative that you use PHP-CLI, and not PHP-CGI. Why is this so important? The main issue is that PHP-CLI and PHP-CGI handle their STDIN (standard input) slightly differently, which makes the reading of channel variables via PHP-CGI slightly more problematic.

The php.ini configuration file

The PHP interpreter includes a configuration file that defines a set of defaults for the interpreter. For your scripts to work in an efficient manner, the following must be set, either via the php.ini file or from your PHP script:

ob_implicit_flush(false);
set_time_limit(5);
error_log = filename;
error_reporting(0);

These settings perform the following:

ob_implicit_flush(false);
Controls PHP output buffering, making sure that output from your AGI script is not held in a buffer, which would delay its delivery to Asterisk.

set_time_limit(5);
Sets a time limit on your AGI script to verify that it doesn't run beyond a reasonable execution time. There is no rule of thumb for the actual value; it is highly dependent on your implementation. Depending on your system and applications, your maximum time limit may be set to any value; however, we suggest that you verify that your scripts are able to work within a maximum limit of 30 seconds.

error_log = filename;
Excellent for debugging purposes; always creates a log file.

error_reporting(0);
Does not report errors to the error log. Change the value to enable different logging parameters; check the PHP website for additional information.

AGI script permissions

All AGI scripts must be located in the directory /var/lib/asterisk/agi-bin, which is Asterisk's default directory for AGI scripts.
All AGI scripts should have the execute permission, and should be owned by the user running Asterisk. If you are unfamiliar with these, consult your system administrator for additional information.

The structure of a PHP-based AGI script

Every PHP-based AGI script takes the following form:

#!/usr/bin/php -q
<?
$stdin = fopen('php://stdin', 'r');
$stdout = fopen('php://stdout', 'w');
$stdlog = fopen('my_agi.log', 'w');

/* Operational code starts here */
..
..
..
?>

Upon execution, Asterisk transmits a set of information to our AGI script via STDIN. Handling of that input is best performed in the following manner:

#!/usr/bin/php -q
<?
$stdin = fopen('php://stdin', 'r');
$stdout = fopen('php://stdout', 'w');
$stdlog = fopen('my_agi.log', 'w');

/* Handling execution input from Asterisk */
while (!feof($stdin)) {
  $temp = fgets($stdin);
  $temp = str_replace("\n", "", $temp);
  $s = explode(":", $temp);
  $agivar[$s[0]] = trim($s[1]);
  if ($temp == "") {
    break;
  }
}

/* Operational code starts here */
..
..
..
?>

Once we have handled our inbound information from the Asterisk server, we can start our actual operational flow.

Communication between Asterisk and AGI

The communication between Asterisk and an AGI script is performed via STDIN and STDOUT (standard output). Let's examine the following diagram:

In the above diagram, ASC refers to our AGI script, while AST refers to Asterisk itself. As you can see from the diagram, the entire flow is fairly simple: it is just a set of simple I/O queries and responses carried over the STDIN/STDOUT data streams.

Let's now examine a slightly more complicated example:

The above figure shows an example that includes two new elements in our AGI logic: access to a database, and information provided via a web service. For example, the image illustrates something that may be used as a connection between the telephony world and a dating service.
This leads to an immediate conclusion: just as an AGI script is capable of connecting to almost any type of information source (depending solely on the implementation of the AGI script, not on Asterisk), Asterisk is capable of interfacing with almost any type of information source via such out-of-band facilities. Enough talking! Let's write our first AGI script.
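Before doing so, the query/response format mentioned above is worth pinning down: each command the script prints on STDOUT is answered by Asterisk with a status line such as "200 result=1". The article's scripts are in PHP; purely as a language-neutral illustration of parsing that response format, here is the same step sketched in Java (the class and method names are invented):

```java
public class AgiResponse {

    // Extracts the numeric "result=" value from an AGI status line
    // such as "200 result=1" or "200 result=0 endpos=12345".
    // Returns null when the line carries no numeric result token.
    public static Integer parseResult(String line) {
        for (String token : line.trim().split("\\s+")) {
            if (token.startsWith("result=")) {
                String value = token.substring("result=".length());
                try {
                    return Integer.valueOf(value);
                } catch (NumberFormatException e) {
                    return null; // non-numeric result value
                }
            }
        }
        return null; // e.g. an error line with no result token
    }

    public static void main(String[] args) {
        System.out.println(parseResult("200 result=1"));             // 1
        System.out.println(parseResult("200 result=0 endpos=12345")); // 0
    }
}
```

Whatever language you choose, this read-command, parse-result loop is the whole of the AGI conversation.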

Packt
16 Oct 2009
17 min read

WCF – Windows Communication Foundation

What is WCF?

WCF is the acronym for Windows Communication Foundation. It is Microsoft's latest technology for enabling applications in a distributed environment to communicate with each other. WCF is Microsoft's unified programming model for building service-oriented applications. It enables developers to build secure, reliable, transacted solutions that integrate across platforms and interoperate with existing investments.

WCF is built on the Microsoft .NET Framework and simplifies the development of connected systems. It unifies a broad array of distributed systems capabilities in a composable, extensible architecture that supports multiple transports, messaging patterns, encodings, network topologies, and hosting models. It is the successor to several existing products: ASP.NET web methods (ASMX) and Microsoft Web Services Enhancements (WSE) for .NET, .NET Remoting, Enterprise Services, and System.Messaging. The purpose of WCF is to provide a single programming model that organizations can use to create services on the .NET platform.

Why is WCF used for SOA?

As we have seen in the previous section, WCF is an umbrella technology that covers ASMX web services, .NET Remoting, WSE, Enterprise Services, and System.Messaging. It is designed to offer a manageable approach to distributed computing, broad interoperability, and direct support for service orientation. WCF supports many styles of distributed application development by providing a layered architecture. At its base, the WCF channel architecture provides asynchronous, untyped message-passing primitives. Built on top of this base are protocol facilities for secure, reliable, transacted data exchange and a broad choice of transport and encoding options.

Let us take an example to see why WCF is a good approach for SOA. Suppose a company is designing a service to get loan information.
This service could be used by the internal call center application, an Internet web application, and a third-party Java J2EE application such as a banking system. For interactions with the call center client application, performance is important. For communication with the J2EE-based application, however, interoperability becomes the highest goal. The security requirements are also quite different between the local Windows-based application and the J2EE-based application running on another operating system. Even transactional requirements might vary, with only the internal application being allowed to make transactional requests.

With these complex requirements, it is not easy to build the desired service with any single pre-WCF technology. For example, the ASMX technology may serve well for interoperability, but its performance may not be ideal. .NET Remoting would be a good choice from the performance perspective, but it is not good at interoperability. Enterprise Services could be used for managing object lifetimes and defining distributed transactions, but it supports only a limited set of communication options.

Now, with WCF, it is much easier to implement this service. As WCF unifies a broad array of distributed systems capabilities, the get-loan service can be built with WCF for all of its application-to-application communication. The following shows how WCF addresses each of these requirements:

Because WCF can communicate using web service standards, interoperability with other platforms that also support SOAP, such as the leading J2EE-based application servers, is straightforward. You can also configure and extend WCF to communicate with web services using messages not based on SOAP, for example, simple XML formats such as RSS.

Performance is of paramount concern for most businesses. WCF was developed with the goal of being one of the fastest distributed application platforms developed by Microsoft.
To allow for optimal performance when both parties in a communication are built on WCF, the wire encoding used is an optimized binary version of an XML Information Set. Using this option makes sense for communication with the call center client application, because it is also built on WCF and performance is an important concern.

Managing object lifetimes, defining distributed transactions, and other aspects of Enterprise Services are now provided by WCF. They are available to any WCF-based application, which means that the get-loan service can use them with any of the other applications it communicates with.

Because it supports a large set of the WS-* specifications, WCF helps to provide reliability, security, and transactions when communicating with any platform that supports these specifications.

The WCF option for queued messaging, built on Message Queuing, allows applications to use persistent queuing without using another set of application programming interfaces.

The result of this unification is greater functionality and significantly reduced complexity.

WCF architecture

The following diagram illustrates the major layers of the Windows Communication Foundation (WCF) architecture. This diagram is taken from the Microsoft web site (http://msdn.microsoft.com/en-us/library/ms733128.aspx):

The Contracts layer defines various aspects of the message system. For example, the Data Contract describes every parameter that makes up every message that a service can create or consume. The Service runtime layer contains the behaviors that occur only during the actual operation of the service, that is, the runtime behaviors of the service. The Messaging layer is composed of channels. A channel is a component that processes a message in some way, for example, authenticating a message. In its final form, a service is a program. Like other programs, a service must be run in an executable format. This is known as the hosting application.
In the next section, we will explain these concepts in detail.

Basic WCF concepts: the WCF ABCs

There are many terms and concepts around WCF, such as address, binding, contract, endpoint, behavior, hosting, and channels. Understanding these terms is very helpful when using WCF.

Address

The WCF address is a specific location for a service: the place to which a message will be sent. All WCF services are deployed at a specific address and listen at that address for incoming requests. A WCF address is normally specified as a URI, with the first part specifying the transport mechanism and the hierarchical part specifying the unique location of the service. For example, http://www.myweb.com/myWCFServices/SampleService can be an address for a WCF service. This WCF service uses HTTP as its transport protocol, and it is located on the server www.myweb.com, with a unique service path of myWCFServices/SampleService. The following diagram illustrates the three parts of a WCF service address.

Binding

Bindings are used to specify the transport, encoding, and protocol details required for clients and services to communicate with each other. Bindings are what WCF uses to generate the underlying wire representation of the endpoint, so most of the details of the binding must be agreed upon by the parties that are communicating. The easiest way to achieve this is for clients of a service to use the same binding that the service uses.

A binding is made up of a collection of binding elements. Each element describes some aspect of how the service communicates with clients. A binding must include at least one transport binding element, at least one message encoding binding element (which can be provided by the transport binding element by default), and any number of other protocol binding elements. The process that builds a runtime out of this description allows each binding element to contribute code to that runtime.
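Returning briefly to the address example above: its three parts (transport mechanism, server, service path) are ordinary URI anatomy, so they can be pulled apart with any standard URI parser. As an illustration, unrelated to WCF's own .NET API, here is the sample address split with Java's standard URI class:

```java
import java.net.URI;

public class AddressParts {

    public static void main(String[] args) {
        // The sample WCF-style service address from the text.
        URI address = URI.create(
                "http://www.myweb.com/myWCFServices/SampleService");

        String transport = address.getScheme(); // transport mechanism
        String machine = address.getHost();     // server name
        String path = address.getPath();        // unique service path

        System.out.println(transport); // http
        System.out.println(machine);   // www.myweb.com
        System.out.println(path);      // /myWCFServices/SampleService
    }
}
```

The scheme is what tells WCF which transport channel to build, which is why addresses and bindings must agree (an http:// address pairs with an HTTP-based binding, a net.tcp:// address with a TCP one).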
WCF provides bindings that contain common selections of binding elements. These can either be used with their default settings, or the default values can be modified according to user requirements. These system-provided bindings have properties that allow direct control over the binding elements and their settings. The following are some examples of the system-provided bindings: BasicHttpBinding, WSHttpBinding, WSDualHttpBinding, WSFederationHttpBinding, NetTcpBinding, NetNamedPipeBinding, NetMsmqBinding, NetPeerTcpBinding, and MsmqIntegrationBinding.

Each of these built-in bindings has predefined elements required for a common task, and is ready to be used in your project. For instance, BasicHttpBinding uses HTTP as the transport for sending SOAP 1.1 messages, and it has attributes and elements such as receiveTimeout, sendTimeout, maxMessageSize, and maxBufferSize. You can accept the default settings of its attributes and elements, or overwrite them as needed.

Contract

A WCF contract is a set of specifications that define the interfaces of a WCF service. A WCF service communicates with other applications according to its contracts. There are several types of WCF contracts, such as the service contract, operation contract, data contract, message contract, and fault contract.

Service contract

A service contract is the interface of the WCF service. Basically, it tells others what the service can do. It may include service-level settings such as the name of the service, the namespace of the service, and the corresponding callback contracts of the service. Inside the interface, it can define a number of methods, or service operations, for specific tasks. Normally, a WCF service has at least one service contract.

Operation contract

An operation contract is defined within a service contract. It defines the parameters and return type of an operation.
An operation can take data of a primitive (native) data type, such as an integer, as a parameter, or it can take a message, which should be defined as a message contract type. Just as a service contract is an interface, an operation contract is the definition of an operation. It has to be implemented in order for the service to function as a WCF service. An operation contract also defines operation-level settings, such as the transaction flow of the operation, the direction of the operation (one-way, two-way, or both ways), and the fault contract of the operation.

The following is an example of an operation contract:

[WCF::FaultContract(typeof(MyWCF.EasyNorthwind.FaultContracts.ProductFault))]
MyWCF.EasyNorthwind.MessageContracts.GetProductResponse
GetProduct(MyWCF.EasyNorthwind.MessageContracts.GetProductRequest request);

In this example, the operation contract's name is GetProduct. It takes one input parameter of type GetProductRequest (a message contract) and has one return value of type GetProductResponse (another message contract). It may return a fault message of type ProductFault (a fault contract) to client applications. We will cover message contracts and fault contracts in the following sections.

Message contract

If an operation contract needs to pass a message as a parameter or return a message, the type of these messages will be defined as a message contract. A message contract defines the elements of the message, as well as any message-related settings, such as the level of message security, and whether an element should go into the header or the body.
The following is a message contract example:

namespace MyWCF.EasyNorthwind.MessageContracts
{
    /// <summary>
    /// Service Contract Class - GetProductResponse
    /// </summary>
    [WCF::MessageContract(IsWrapped = false)]
    public partial class GetProductResponse
    {
        private MyWCF.EasyNorthwind.DataContracts.Product product;

        [WCF::MessageBodyMember(Name = "Product")]
        public MyWCF.EasyNorthwind.DataContracts.Product Product
        {
            get { return product; }
            set { product = value; }
        }
    }
}

In this example, the namespace of the message contract is MyWCF.EasyNorthwind.MessageContracts, and the message contract's name is GetProductResponse. This message contract has one member, which is of type Product.

Data contract

Data contracts define the data types used by the WCF service. All data types used by the service must be described in metadata to enable other applications to interoperate with the service. A data contract can be used by an operation contract as a parameter or return type, or it can be used by a message contract to define elements. If a WCF service uses only primitive (native) data types, it is not necessary to define any data contracts.
The following is an example of a data contract:

namespace MyWCF.EasyNorthwind.DataContracts
{
    /// <summary>
    /// Data Contract Class - Product
    /// </summary>
    [WcfSerialization::DataContract(Namespace = "http://MyCompany.com/ProductService/EasyWCF/2008/05", Name = "Product")]
    public partial class Product
    {
        private int productID;
        private string productName;

        [WcfSerialization::DataMember(Name = "ProductID", IsRequired = false, Order = 0)]
        public int ProductID
        {
            get { return productID; }
            set { productID = value; }
        }

        [WcfSerialization::DataMember(Name = "ProductName", IsRequired = false, Order = 1)]
        public string ProductName
        {
            get { return productName; }
            set { productName = value; }
        }
    }
}

In this example, the namespace of the data contract is MyWCF.EasyNorthwind.DataContracts, the name of the data contract is Product, and the data contract has two members (ProductID and ProductName).

Fault contract

In any WCF service operation contract, if an error can be returned to the caller, the caller should be warned of that error. These error types are defined as fault contracts. An operation can have zero or more fault contracts associated with it. The following is a fault contract example:

namespace MyWCF.EasyNorthwind.FaultContracts
{
    /// <summary>
    /// Data Contract Class - ProductFault
    /// </summary>
    [WcfSerialization::DataContract(Namespace = "http://MyCompany.com/ProductService/EasyWCF/2008/05", Name = "ProductFault")]
    public partial class ProductFault
    {
        private string faultMessage;

        [WcfSerialization::DataMember(Name = "FaultMessage", IsRequired = false, Order = 0)]
        public string FaultMessage
        {
            get { return faultMessage; }
            set { faultMessage = value; }
        }
    }
}

In this example, the namespace of the fault contract is MyWCF.EasyNorthwind.FaultContracts, the name of the fault contract is ProductFault, and the fault contract has only one member (FaultMessage).

Endpoint

Messages are sent between endpoints.
Endpoints are places where messages are sent or received (or both), and they define all of the information required for the message exchange. A service exposes one or more application endpoints (as well as zero or more infrastructure endpoints). A service can expose this information as metadata that clients can process to generate appropriate WCF clients and communication stacks. When needed, the client generates an endpoint that is compatible with one of the service's endpoints.

A WCF service endpoint has an address, a binding, and a service contract (the WCF "ABC"). The endpoint's address is a network address where the endpoint resides. It describes, in a standards-based way, where messages should be sent. Each endpoint normally has one unique address, but sometimes two or more endpoints can share the same address. The endpoint's binding specifies how the endpoint communicates with the world, including things such as the transport protocol (TCP, HTTP), encoding (text, binary), and security requirements (SSL, SOAP message security). The endpoint's contract specifies what the endpoint communicates, and is essentially a collection of messages organized into operations that have basic Message Exchange Patterns (MEPs) such as one-way, duplex, or request/reply.

The following diagram shows the components of a WCF service endpoint.

Behavior

A WCF behavior is a type, or a set of settings, that extends the functionality of the original type. There are many types of behaviors in WCF, such as service behaviors, binding behaviors, contract behaviors, security behaviors, and channel behaviors. For example, a new service behavior can be defined to specify the transaction timeout of the service, the maximum number of concurrent instances of the service, and whether the service publishes metadata. Behaviors are configured in the WCF service configuration file.

Hosting

A WCF service is a component that can be called by other applications.
It must be hosted in an environment in order to be discovered and used by others. The WCF host is an application that controls the lifetime of the service. With .NET 3.0 and beyond, there are several ways to host the service.

Self hosting

A WCF service can be self-hosted, which means that the service runs as a standalone application and controls its own lifetime. This is the most flexible and easiest way of hosting a WCF service, but its availability and features are limited.

Windows services hosting

A WCF service can also be hosted as a Windows service. A Windows service is a process managed by the operating system, and it is automatically started when Windows is started (if it is configured to do so). However, it lacks some critical features (such as versioning) for WCF services.

IIS hosting

A better way of hosting a WCF service is to use IIS. This is the traditional way of hosting a web service. IIS, by nature, has many useful features, such as process recycling, idle shutdown, process health monitoring, message-based activation, high availability, easy manageability, versioning, and deployment scenarios. All of these features are required for enterprise-level WCF services.

Windows Activation Services hosting

The IIS hosting method, however, comes with several limitations in the service-orientation world; the dependency on HTTP is the main culprit. With IIS hosting, many of WCF's flexible options can't be utilized. This is the reason why Microsoft specifically developed a new method, called Windows Activation Services, to host WCF services.

Windows Process Activation Service (WAS) is the new process activation mechanism for Windows Server 2008 that is also available on Windows Vista. It retains the familiar IIS 6.0 process model (application pools and message-based process activation) and hosting features (such as rapid failure protection, health monitoring, and recycling), but it removes the dependency on HTTP from the activation architecture.
IIS 7.0 uses WAS to accomplish message-based activation over HTTP. Additional WCF components also plug into WAS to provide message-based activation over the other protocols that WCF supports, such as TCP, MSMQ, and named pipes. This allows applications that use non-HTTP communication protocols to use IIS features such as process recycling, rapid fail protection, and the common configuration systems that were previously available only to HTTP-based applications. This hosting option requires that WAS be properly configured, but it does not require you to write any hosting code as part of the application. [Microsoft MSDN, Hosting Services, retrieved on 3/6/2008 from http://msdn2.microsoft.com/en-us/library/ms730158.aspx]

Channels

As we have seen in the previous sections, a WCF service has to be hosted in an application on the server side. On the client side, the client applications have to specify bindings to connect to the WCF services. The binding elements are interfaces, and they have to be implemented in concrete classes. The concrete implementation of a binding element is called a channel. The binding represents the configuration, and the channel is the implementation associated with that configuration. Therefore, there is a channel associated with each binding element. Channels stack on top of one another to create the concrete implementation of the binding: the channel stack.

The WCF channel stack is a layered communication stack with one or more channels that process messages. At the bottom of the stack is a transport channel that is responsible for adapting the channel stack to the underlying transport (for example, TCP, HTTP, SMTP, and other types of transport). Channels provide a low-level programming model for sending and receiving messages. This programming model relies on several interfaces and other types collectively known as the WCF channel model.
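To make the channel programming model concrete, the following is a minimal sketch of sending a raw message through a channel stack directly, using ChannelFactory with IRequestChannel over BasicHttpBinding. The endpoint address and action URI here are hypothetical assumptions for illustration, not values from the chapter's sample service:

```csharp
using System;
using System.ServiceModel;
using System.ServiceModel.Channels;

class ChannelStackSketch
{
    static void Main()
    {
        // The binding describes the channel stack to build (HTTP transport, text encoding).
        Binding binding = new BasicHttpBinding();

        // The factory builds the concrete channel stack for this binding.
        // The address below is a hypothetical endpoint, for illustration only.
        var factory = new ChannelFactory<IRequestChannel>(
            binding, new EndpointAddress("http://localhost:8080/ProductService"));
        IRequestChannel channel = factory.CreateChannel();
        channel.Open();

        // Create a raw message; the action URI is an assumed value.
        Message request = Message.CreateMessage(
            binding.MessageVersion, "http://MyCompany.com/ProductService/GetProduct");

        // Send the request down the channel stack and wait for the reply message.
        Message reply = channel.Request(request);
        Console.WriteLine(reply.Headers.Action);

        channel.Close();
        factory.Close();
    }
}
```

In practice you would rarely program at this level; the typed proxy generated from metadata builds and drives the same channel stack for you.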
The following diagram shows a simple channel stack:

Metadata

The metadata of a service describes the characteristics of the service that an external entity needs to understand in order to communicate with the service. Metadata can be consumed by the ServiceModel Metadata Utility Tool (Svcutil.exe) to generate a WCF client and the accompanying configuration that a client application can use to interact with the service.

The metadata exposed by the service includes XML schema documents, which define the data contract of the service, and WSDL documents, which describe the methods of the service. Though WCF services always have metadata, it is possible to hide the metadata from outsiders. If you do so, you have to pass the metadata to the client side by other means. This practice is not common, but it gives your services an extra layer of security.

When enabled via the configuration settings through metadata behavior, metadata for the service can be retrieved by inspecting the service and its endpoints. The following configuration setting in a WCF service configuration file will enable metadata publishing for the HTTP transport protocol:

<serviceMetadata httpGetEnabled="true" />
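Putting the pieces together, an endpoint's address, binding, and contract, plus a metadata behavior, are typically wired up in the service configuration file. The following is a hedged sketch of such a configuration; the service name, address, and contract name are illustrative assumptions that reuse the chapter's namespaces rather than values taken from a real project:

```xml
<system.serviceModel>
  <services>
    <!-- Service name is an assumed value; it would match the service implementation class. -->
    <service name="MyWCF.EasyNorthwind.ProductService"
             behaviorConfiguration="metadataBehavior">
      <!-- A = address, B = binding, C = contract -->
      <endpoint address="http://localhost:8080/ProductService"
                binding="basicHttpBinding"
                contract="MyWCF.EasyNorthwind.IProductService" />
    </service>
  </services>
  <behaviors>
    <serviceBehaviors>
      <behavior name="metadataBehavior">
        <!-- Enables metadata retrieval over HTTP GET, as described above. -->
        <serviceMetadata httpGetEnabled="true" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>
```

With a configuration like this in place, Svcutil.exe can read the published metadata and generate the client proxy and configuration.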