
How-To Tutorials - Programming


Microsoft LightSwitch: Querying Multiple Entities

Packt
16 Sep 2011
4 min read
Microsoft LightSwitch makes it easy to query multiple entities, and queries can fine-tune the results using multiple parameters. In the following, we will be considering the Orders and Shippers tables from the Northwind database, shown next.

What we would like to achieve is to fashion a query in LightSwitch that finds orders later than a specified date (OrderDate) carried by a specified shipping company (CompanyName). In the previous example we created a single parameter; here we extend it to two parameters, OrderDate and CompanyName. The following stored procedure in SQL Server 2008 would produce the rows that satisfy the above conditions:

```sql
USE Northwind
GO
CREATE PROCEDURE ByDateAndShprName
    @ordDate datetime,
    @shprName nvarchar(30)
AS
SELECT Orders.OrderID, Orders.CustomerID, Orders.EmployeeID,
       Orders.OrderDate, Orders.RequiredDate, Orders.ShippedDate,
       Orders.ShipVia, Orders.Freight, Orders.ShipName, Orders.ShipAddress,
       Shippers.ShipperID, Shippers.CompanyName, Shippers.Phone
FROM Orders
INNER JOIN Shippers ON Orders.ShipVia = Shippers.ShipperID
WHERE Orders.OrderDate > @ordDate
  AND Shippers.CompanyName = @shprName
```

The stored procedure ByDateAndShprName can be executed by providing the two parameters (variables), @ordDate and @shprName, as shown below:

```sql
EXEC ByDateAndShprName '5/1/1998 12:00:00', 'United Package'
```

The result returned by this command, copied from SQL Server Management Studio, is shown next (only the first few columns are shown). The same result can be achieved in LightSwitch using two parameters after attaching these two tables to the LightSwitch application. As the details of creating screens and queries have been described in detail elsewhere, only the details specific to the present section are described. Note that dates entered as mm-dd-yyyy appear in the result as yyyy-mm-dd.

1. Create a Microsoft LightSwitch application (VB or C#). Here the project Using Combo6 was created.
2. Attach a database using SQL Server 2008 Express and bring in the two tables, Orders and Shippers, to create two entities, Order and Shipper, as shown in the next screenshot.
3. Create a query as shown in the next image. Here the query is called ByDate. Note that CompanyName in the Shippers table is distinct. The completed query with two parameters appears as shown.
4. Create a new screen (click Add Screen in the query designer shown in the previous screenshot) and choose the Editable Grid Screen template. Here the screen is named EditableGridByDate.
5. Click Add Data Item… and add the query NorthwindData.ByDate. The designer changes as shown next.
6. Click the OrderDate parameter in the left-hand navigation of the screen and drag and drop it just below the Screen Command Bar as shown. In a similar manner, drag and drop the query parameter CompanyName below OrderDate. This displays two controls for the two parameters on the screen.
7. Drag and drop ByDate below the CompanyName added in the previous step. The completed screen design should appear as shown (some fields are not shown in the display). The previous image shows the two parameters; the DataGrid rows show the rows returned by the query. As is, this screen would return no data if the parameters were not specified. OrderDate defaults to the current date.
8. Press F5 to display the screen. Enter the date 5/1/1998 directly, enter United Package in the CompanyName textbox, and click the Refresh button. The screen is displayed as shown here.

This is an editable screen: you should be able to add, delete, and edit fields, and the changes update the backend database when you save the data. Also note that the LightSwitch application returned 11 rows of data while the stored procedure in SQL Server returned only 10. This may look odd, but the SQL Server literal '5/1/1998 12:00:00' refers to noon (PM), while the LightSwitch OrderDate parameter is a datetime whose time portion defaults to midnight (AM). Entering the PM time instead of AM returns the correct number of rows.
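The 11-versus-10 row discrepancy comes down to where within the day the cutoff falls. A minimal Python sketch (illustrative timestamps, not the actual Northwind data) shows how a midnight cutoff admits rows that a noon cutoff filters out:

```python
from datetime import datetime

# Illustrative order timestamps around the cutoff date (not Northwind data)
orders = [
    datetime(1998, 5, 1, 9, 0),    # morning order: after midnight, before noon
    datetime(1998, 5, 2, 10, 0),
    datetime(1998, 5, 4, 15, 30),
]

# LightSwitch-style cutoff: a date whose time portion defaults to midnight (AM)
cutoff_midnight = datetime(1998, 5, 1, 0, 0)
# The SQL Server literal '5/1/1998 12:00:00' is interpreted as noon (PM)
cutoff_noon = datetime(1998, 5, 1, 12, 0)

rows_after_midnight = [o for o in orders if o > cutoff_midnight]
rows_after_noon = [o for o in orders if o > cutoff_noon]

# The morning order passes the midnight cutoff but not the noon cutoff,
# so the two filters disagree by exactly the orders placed before noon.
print(len(rows_after_midnight), len(rows_after_noon))
```

The same effect explains the extra row in LightSwitch: any order stamped between midnight and noon on the cutoff date passes the midnight comparison but fails the noon one.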


Xcode 4 iOS: Displaying Notification Messages

Packt
30 Aug 2011
8 min read
Xcode 4 iOS Development Beginner's Guide
Use the powerful Xcode 4 suite of tools to build applications for the iPhone and iPad from scratch

The iPhone provides developers with many ways to add informative messages to their applications to alert the user. We will be looking at the various types of notification methods, ranging from alerts and activity indicators to audio sounds and vibrations.

Exploring the notification methods

The applications on the iPhone are user-centric, meaning that they don't operate without a user interface and don't perform any background operations. These types of applications enable users to work with data, play games, or communicate with other users. Even so, at some point an application will need to communicate with the user. This can be as simple as a warning message, providing feedback, or asking the user to supply some information. The iPhone and Cocoa Touch use three special methods to gain your attention:

- UIAlertView: Creates a simple modal alert window that presents the user with a message and a few options. Modal elements require the user to interact with them before they can proceed; they are displayed (layered) on top of other windows and block the underlying objects until the user responds to one of the actions presented.
- UIActionSheet: Similar to the UIAlertView class, except that an action sheet can be associated with a given view, tab bar, or toolbar, and is animated when it appears on the screen. Action Sheets do not have an associated message property; they contain a single title property.
- System Sound Services: Enables playback and vibration, supports various file formats (CAF, AIF, and WAV files), and makes use of the AudioToolbox framework.

Generating alerts

There is no doubt that you will need to incorporate alerts into your applications.
These can be very useful to inform the user while the application is running, whether with a simple message such as memory running low, or that an application or internal error has occurred. We can notify the user in a number of ways using the UIAlertView class; it can display a simple modal message or gather information from the user.

Time for action – creating the GetUsersAttention application

1. Launch the Xcode development environment.
2. Select the View-based application template from the project template dialog box.
3. Ensure that you have selected iPhone under the Device Family dropdown as the type of view to create.
4. Provide a name for your project: enter GetUsersAttention, then choose a location where you would like to save the project.

Once your project has been created, you will be presented with the Xcode interface, along with the project files that the template created for you within the Project Navigator window.

What just happened?

In this section, we looked at the steps involved in creating a View-based application for our GetUsersAttention application. In the next section, we will take a look at how to add the AudioToolbox framework to our project to incorporate sound.

Time for action – adding the AudioToolbox framework to our application

Now that we have created our project, we need to add the AudioToolbox framework to it. This is an important framework that provides the ability to play sounds and vibrate the phone. To add new or additional frameworks to your project, follow these simple steps:

1. Select your project within the Project Navigator window.
2. Select your project target under the TARGETS group.
3. Select the Build Phases tab.
4. Expand the Link Binary With Libraries disclosure triangle.
Finally, use the + button to add the library you want to add; if you want to remove a framework, highlight it in the group and click on the - button. You can also search for the framework if you can't find it in the list shown. If you are still unsure how to go about adding these frameworks, refer to the following image, which highlights the parts you need to select (marked by a red rectangle).

What just happened?

In the above section, we looked at how to add frameworks to our application, the differences between the MediaPlayer and AudioToolbox frameworks, and the limitations of the two. Adding frameworks to your application allows you to extend it and use those features to avoid reinventing the wheel. When you add frameworks to your application, the system loads them into memory as needed and shares the one copy of the resource among all applications whenever possible. Now that we have added AudioToolbox.framework to our project, our next step is to start creating the user interface.

Building our user interface

User interfaces provide a great way to communicate with the user, either to obtain information or to display notifications. A good interface provides a consistent flow throughout your application as you navigate from screen to screen; this involves considering the screen size of your view. In the next section, we look at how to add some controls to our view to build the interface. To obtain further information about what constitutes a good user interface, see Apple's iOS Human Interface Guidelines, available at http://developer.apple.com/library/ios/documentation/userexperience/conceptual/mobilehig/MobileHIG.pdf.
Time for action – adding controls to our View

We will be adding five button (UIButton) controls that will handle our actions to display alerts and Action Sheets, play sounds, and vibrate the iPhone. For each one, select and drag a Round Rect Button (UIButton) control from the Object Library onto the view, then modify the control's Object Attributes to set its title:

1. "Show Activity Indicator"
2. "Display Alert Dialog"
3. "Display Action Sheet"
4. "Play Alert Sound"
5. "Vibrate iPhone"

If you have followed everything correctly, your view should look something like the following screenshot; if it doesn't look quite the same, feel free to adjust yours.

What just happened?

In the above section, we looked at how to use the Object Library to add controls to our view and customize their properties in order to build our user interface. In the next section, we will take a look at how to create events that respond to button presses.

Creating events

Now that we have created our user interface, we need to create the events that will respond when we click on each of the buttons.
We first need to create an instance of our UIAlertView class, called baseAlert, which will be used by our Show Activity Indicator event and will be used to dismiss the activity indicator after a period of time has elapsed. Open the GetUsersAttentionViewController.h interface file and add the following highlighted code as shown in the snippet below:

```objc
#import <UIKit/UIKit.h>

@interface GetUsersAttentionViewController : UIViewController
    <UIAlertViewDelegate, UIActionSheetDelegate>
{
    UIAlertView *baseAlert;
}

@end
```

We could have declared this within our GetUsersAttentionViewController.m implementation file, but I prefer to declare it in this class as it can then be referenced throughout your application. You will notice from the code snippet above that we have made reference to two delegate protocols within our GetUsersAttentionViewController.h interface file; this enables us to capture and respond to the button presses used by our Action Sheets and Alert Views. This will become apparent when we start adding the code events for our Alert Views and Action Sheets.


Getting Started with NetBeans

Packt
04 Aug 2011
6 min read
Java EE 6 Development with NetBeans 7
Develop professional enterprise Java EE applications quickly and easily with this popular IDE

In addition to being an IDE, NetBeans is also a platform. Developers can use NetBeans' APIs to create both NetBeans plugins and standalone applications. For a brief history of NetBeans, see http://netbeans.org/about/history.html. Although the NetBeans IDE supports several programming languages, because of its roots as a Java-only IDE it is most popular with that language. As a Java IDE, NetBeans has built-in support for Java SE (Standard Edition) applications, which typically run on the user's desktop or notebook computer; Java ME (Micro Edition) applications, which typically run on small devices such as cell phones or PDAs; and Java EE (Enterprise Edition) applications, which typically run on "big iron" servers and can support thousands of concurrent users.

Obtaining NetBeans

NetBeans can be obtained by downloading it from http://www.netbeans.org. To download NetBeans, click on the button labeled Download Free NetBeans IDE 7.0 (the exact name of the button may vary depending on the current version of NetBeans). Clicking on this button takes you to a page displaying all of the NetBeans download bundles, which provide different levels of functionality. The following list summarizes the available bundles and the functionality they provide:

- Java SE: Allows development of Java desktop applications.
- Java EE: Allows development of Java Standard Edition (typically desktop) applications and Java Enterprise Edition (enterprise applications running on "big iron" servers) applications.
- C/C++: Allows development of applications written in the C or C++ languages.
- PHP: Allows development of web applications using the popular open source PHP programming language.
- All: Includes the functionality of all NetBeans bundles.
To follow the examples, either the Java EE or the All bundle is needed. The screenshots were taken with the Java EE bundle; NetBeans may look slightly different if the All bundle is used, in particular, some additional menu items may be seen. The following platforms are officially supported:

- Windows 7/Vista/XP/2000
- Linux x86
- Linux x64
- Solaris x86
- Solaris x64
- Mac OS X

Additionally, NetBeans can be executed on any platform containing Java 6 or newer; an OS-independent version of NetBeans is available for download for this purpose. Although the OS-independent version can be executed on all of the supported platforms, it is recommended to obtain the platform-specific version for your platform. The NetBeans download page should detect the operating system being used to access it and select the appropriate platform by default. If this is not the case, or if you are downloading NetBeans with the intention of installing it on another workstation on another platform, the correct platform can be selected from the drop-down labeled, appropriately enough, Platform. Once the correct platform has been selected, click on the appropriate Download button for the NetBeans bundle you wish to install; for Java EE development, we need either the Java EE or the All bundle. NetBeans will then be downloaded to a directory of your choice.

Java EE applications need to be deployed to an application server. Several application servers exist in the market; both the Java EE and the All NetBeans bundles come with GlassFish and Tomcat bundled. Tomcat is a popular open source servlet container; it can be used to deploy applications using Servlets, JSP, and JSF, but it does not support other Java EE technologies such as EJBs or JPA. GlassFish is a 100 percent Java EE-compliant application server. We will be using the bundled GlassFish application server to deploy and execute our examples.
Installing NetBeans

NetBeans requires a Java Development Kit (JDK) version 6.0 or newer to be available before it can be installed. NetBeans installation varies slightly between the supported platforms; the following sections explain how to install NetBeans on each of them.

Microsoft Windows

For Microsoft Windows platforms, NetBeans is downloaded as an executable file named something like netbeans-7.0-ml-java-windows.exe (the exact name depends on the version of NetBeans and the NetBeans bundle selected for download). To install NetBeans on Windows platforms, simply navigate to the folder where NetBeans was downloaded and double-click on the executable file.

Mac OS X

For Mac OS X, the downloaded file is called something like netbeans-7.0-ml-java-macosx.dmg (the exact name depends on the NetBeans version and the bundle selected for download). In order to install NetBeans, navigate to the location where the file was downloaded and double-click on it. The Mac OS X installer contains four packages: NetBeans, GlassFish, Tomcat, and OpenESB. These four packages need to be installed individually, which can be done by simply double-clicking on each one. Please note that GlassFish must be installed before OpenESB.

Linux and Solaris

For Linux and Solaris, NetBeans is downloaded in the form of a shell script. The name of the file will be similar to netbeans-7.0-ml-java-linux.sh, netbeans-7.0-ml-java-solaris-x86.sh, or netbeans-7.0-ml-java-solaris-sparc.sh, depending on the version of NetBeans, the selected platform, and the selected bundle. Before NetBeans can be installed on these platforms, the downloaded file needs to be made executable. This can be done from the command line by navigating to the directory where the NetBeans installer was downloaded and executing the following command:

chmod +x ./filename.sh

Substitute filename.sh with the appropriate file name for the platform and the NetBeans bundle.
Once the file is executable, it can be installed from the command line:

./filename.sh

Again, substitute filename.sh with the appropriate file name for the platform and the NetBeans bundle.

Other platforms

For other platforms, NetBeans can be downloaded as a platform-independent zip file. The name of the zip file will be something like netbeans-7.0-201007282301-ml-java.zip (the exact file name may vary, depending on the exact version of NetBeans downloaded and the bundle selected). To install NetBeans on one of these platforms, simply extract the zip file to any suitable directory.


BizTalk Application: Currency Exchange Rates

Packt
03 Aug 2011
7 min read
Microsoft BizTalk 2010: Line of Business Systems Integration
A practical guide to integrating Line of Business systems with BizTalk Server 2010

We are going to assume that we have two companies in our Dynamics AX implementation: one that is Canadian Dollar (CAD) based, and one that is United States Dollar (USD) based. Thus, we need to use the LedgerExchangeRates.create and LedgerExchangeRates.find actions in both companies. For the remainder of this example, we'll refer to these as daxCADCompany and daxUSDCompany. The complete solution, titled Chapter9-AXExchangeRates, is included in the source code.

Dynamics AX schemas

We'll start by creating a new BizTalk project, Chapter9-AXExchangeRates, in Visual Studio. After the AIF actions setup is complete, the next step is to generate the schemas needed for our BizTalk application. Right-click on the BizTalk project in Visual Studio 2010, click Add, then highlight and click Add Generated Items. In the Add Generated Items window, under the Templates section, Visual Studio installed template, select Add Adapter Metadata and click Add. This brings up the Add Adapter Wizard window (shown in the following screenshot); select Microsoft Dynamics AX 2009 and click Next. Fill in the AX server instance name (AX 2009-SHARED in our example) under Server name and the TCP/IP port number (2712 is the default, but this can differ). Now click Next from the BizTalk Adapter for Microsoft Dynamics AX Schema Import Wizard window and specify the connection information in the next step.

In the next window, you should see all the active AIF services. Note that because the AIF services table is a global table, you will see all the active services in your Dynamics AX instance. This does not mean that each endpoint, and thus each company, is configured to accept the actions that each listed AIF service has available.
This is the point where you first verify that your connectivity and AIF setup are correct; an error here in the wizard is typically due to an error in the AIF channel configuration. In the wizard window above, you can see the AIF services that are enabled. In our case, ExchangeRatesService is the only service currently enabled in our Dynamics AX instance. Under this service, you will see three possible modes (sync, async request, and async response) for performing these actions. All three actually produce the same schemas; which mode and action (create, find, findkeys, or read) we use is determined by the metadata in the message we send to AX and by the logical port configurations in our orchestration. Now, click Finish.

In the project solution, we see that the wizard generated two artifacts. The first, ExchangeRates_ExchangeRates.xsd, is the schema for the message type we need to send when calling the LedgerExchangeRates.create action, and it is also the schema returned in the response message when calling the LedgerExchangeRates.find action. Since both actions deal with the same AX table, Exchange Rates, both will in part (one as the inbound message, the other as the outbound message) require the same schema. The second artifact, BizTalk Orchestration.odx, is also generated by default by the wizard. In the orchestration view, we can see that four Multi-part Message Types were also added to the orchestration. Rename the orchestration to something more meaningful, such as ProcessExchangeRates.odx.

Now that we have defined the message type returned in our response message, we need to define the request type. Notice in the orchestration view that two messages, ExchangeRatesService_create_Response and ExchangeRatesService_find_Request, have types that Visual Studio flags with the error 'does not exist or is invalid'.
For the out-of-the-box find action, we need the message type DynamicsAX5.QueryCriteria. The other message type, returned by AX when calling a create action, is DynamicsAX5.EntityKey (if we called a createList action, the returned message would be of type DynamicsAX5.EntityKeyList). The schemas for these message types are in the Microsoft.Dynamics.BizTalk.Adapter.Schemas assembly, which can be found in the bin directory of the install location of the BizTalk adapter. Add this reference to the project in Visual Studio, then re-select the appropriate message type for each invalid Message Part from the Select Artifact Type window as shown.

Next, depending on your organization, you may want to populate either the noon exchange rates or the closing rates. For our example, we will use the closing USD/CAD exchange rates from the Bank of Canada, published at 16:30 EST on the website (http://www.bankofcanada.ca/rss/fx/close/fx-close.xml). Since this source is already XML, download and save a sample. We then generate a schema from Visual Studio using the BizTalk Schema Generator (right-click the solution, Add Generated Items, Add Generated Schemas, using the Well-Formed XML (Not Loaded) document type). This generates the schema for the message that our BizTalk application needs to receive daily. In the example provided, the schema is ClosingFxRates.xsd (the wizard also generates four other .xsd files that are referenced by ClosingFxRates.xsd).

A simple way to schedule the download of this XML data file is to use the Scheduled Task Adapter (http://biztalkscheduledtask.codeplex.com/), which can be downloaded and installed at no cost (the source code is also available). Download and install the adapter (it requires the Microsoft .NET Framework Version 1.1 Redistributable Package), then add it using the BizTalk Server Administration Console with the name Schedule. We will use this adapter in our receive location to retrieve the XML via HTTP.
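Before wiring up the receive location, it can help to sanity-check a saved sample of the closing-rates XML outside BizTalk. The sketch below uses Python with an invented element layout purely for illustration; the real feed's structure is whatever the generated ClosingFxRates.xsd describes:

```python
import xml.etree.ElementTree as ET

# Hypothetical closing-rates sample; the element and attribute names are
# invented for illustration and will not match the actual Bank of Canada feed.
sample = """
<closingRates date="2011-08-03">
  <rate base="USD" target="CAD">1.3605</rate>
</closingRates>
"""

root = ET.fromstring(sample)
as_of = root.get("date")
# Pull out (base, target, rate) triples, the fields the canonical schema needs
rates = [(r.get("base"), r.get("target"), float(r.text))
         for r in root.findall("rate")]
print(as_of, rates)
```

A quick check like this confirms which elements carry the base currency, target currency, rate, and date before you map them to the canonical schema.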
There are also RSS adapters available for purchase, for example from /n software (http://www.nsoftware.com/); however, for this example the scheduled task adapter will suffice.

Now, since the source of our exchange rates is a third-party schema, and your specific requirements for the source will most likely differ, we'll create a canonical schema, ExchangeRates.xsd. As you can see in the schema below, we are only interested in a few pieces of information: Base Currency (USD or CAD in our example), Target Currency (again USD or CAD), Rate, and finally, Date. Creating a canonical schema also simplifies the rest of the solution.

Now that all the schemas for our message types are defined or referenced, we can add the messages we require to the orchestration. We'll begin by adding the message msgClosingFxRates, which will be our raw input data from the Bank of Canada, with the message type from the generated schema ClosingFxRates.RDF. For each exchange rate, we need to first query Dynamics AX to see if it exists, so we need a request message and a response message: add a message msgAXQueryExchangeRatesRequest, which will be of the multi-part message type ExchangeRatesService_find_Request, and msgAXQueryExchangeRatesResponse, of the multi-part message type ExchangeRatesService_find_Response. Next, we'll create the messages for the XML that we'll send to and receive from Dynamics AX to create an exchange rate: add a message msgAXCreateExchnageRatesRequest, of the multi-part message type ExchangeRatesService_create_Request, and msgAXCreateExchnageRatesResponse, of the multi-part message type ExchangeRatesService_create_Response. Finally, we'll create two messages, msgExchangeRatesUSDCAD and msgExchangeRatesCADUSD, which have the message type of the canonical schema ExchangeRates. These messages will contain the exchange rates for USD to CAD and for CAD to USD, respectively.
We created these two messages just to simplify the orchestration for this example. In practice, if you are dealing with several exchange rates, you will need to add logic to the orchestration to loop through the list of rates you're interested in and reuse a single message of type ExchangeRates several times.
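The two canonical messages carry the same four fields described above. As a sketch of the relationship between them (the field names follow the canonical schema; deriving the CAD-to-USD rate as the reciprocal of the published USD-to-CAD rate is an assumption about how the second message would be populated):

```python
from datetime import date

def make_rate(base, target, rate, as_of):
    # One canonical ExchangeRates message: Base Currency, Target Currency, Rate, Date
    return {"BaseCurrency": base, "TargetCurrency": target,
            "Rate": rate, "Date": as_of.isoformat()}

# Closing USD/CAD rate from the feed (the value here is illustrative)
usd_cad = make_rate("USD", "CAD", 1.3605, date(2011, 8, 3))

# Assumption: the CAD->USD message uses the reciprocal of the published rate
cad_usd = make_rate("CAD", "USD", round(1 / usd_cad["Rate"], 6),
                    date(2011, 8, 3))

print(usd_cad["Rate"], cad_usd["Rate"])
```

Keeping both directions in one canonical shape is what lets the rest of the orchestration treat the USD-to-CAD and CAD-to-USD flows identically.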


Integrating with Microsoft Dynamics AX 2009 using BizTalk Adapter

Packt
03 Aug 2011
5 min read
Microsoft BizTalk 2010: Line of Business Systems Integration

What is Dynamics AX?

Microsoft Dynamics AX (formerly Microsoft Axapta) is Microsoft's Enterprise Resource Planning (ERP) solution for mid-size and large customers. Much like SAP, Dynamics AX provides functions critical to businesses that can benefit from BizTalk integration. Microsoft Dynamics AX is fully customizable and extensible through its rich development platform and tools, and it has direct connections to products such as Microsoft BizTalk Server, Microsoft SQL Server, Exchange, and Office.

Dynamics AX is often compared to SAP All-in-One. Those who are familiar with SAP are also familiar with the high cost of implementation, maintenance, and customization associated with it; a Microsoft Dynamics AX solution offers more customizability, lower maintenance costs, and lower per-user costs than SAP. ERP implementations often fail in part due to lack of user acceptance in adopting a new system. The Dynamics AX user interface has a similar look and feel to other widely used products such as Microsoft Office and Microsoft Outlook, which significantly increases the user's comfort level when dealing with a new ERP system. For more information on Dynamics AX 2009 and SAP, please see http://www.microsoft.com/dynamics/en/us/compare-sap.aspx.

Methods of integration with AX

Included with Dynamics AX 2009, Microsoft provides two tools for integration with Dynamics AX:

- Dynamics AX BizTalk Adapter
- .NET Business Connector

The BizTalk adapter interfaces via the Application Integration Framework (AIF) module in Dynamics AX 2009, while the .NET Business Connector directly calls the Application Object Tree (AOT) classes in your AX source code. The AIF module requires a license key, which can add cost to your integration projects if your organization has not purchased this module. It provides an extensible framework that enables integration via XML document exchange.
A great advantage of the AIF module is its integration with the BizTalk Dynamics AX adapter. A FILE adapter and an MSMQ adapter, as well as Web Services to consume XML documents, are included out of the box. The AIF module requires a fair amount of setup and configuration. Other advantages include full and granular security, support for both synchronous and asynchronous integration modes, and full logging of transactions and error handling.

The Microsoft BizTalk AX 2009 adapter can execute AX actions (functions exposed to the AIF module) to write data to AX in both synchronous and asynchronous modes. Which mode is used is determined by the design of your BizTalk application (via logical ports): a one-way send port will put the XML data into the AIF queue, whereas a two-way send-receive port will execute the actions and return a response message. Asynchronous transactions stay in the AIF queue until a batch job is executed, and setting up and executing the batch jobs can be very difficult to manage. Pulling data from AX can also be achieved using the BizTalk adapter: transactions pushed into the same AIF queue (with an OUTBOUND direction in async mode) can be retrieved using the AX adapter, which polls AX for these transactions.

The .NET Business Connector requires custom .NET code to be written in order to use it. If your business requirements call for a single (or a very small number of) point-to-point integration data flows, then we would recommend using the .NET Business Connector. However, this often requires customizations in order to create and expose the methods, and security also needs to be handled for the service account that the code runs under.

Installing the adapter and .NET Business Connector

The Microsoft BizTalk adapter for Dynamics AX 2009 and the .NET Business Connector are installed from the Dynamics AX setup, under Integration in the Add or Modify components window.
Each component is independent of the other; however, the BizTalk adapter leverages components of the Business Connector. You are not required to install the Dynamics AX client on the BizTalk server. When installing the BizTalk adapter, you can simply accept all the defaults in the install wizard. For the .NET Business Connector, you'll be prompted for the location of your Dynamics AX instance; this is used only as a default configuration and can easily be changed.
Configuring Dynamics AX 2009 Application Integration Framework for the BizTalk Adapter
Configuration of the AIF module involves several steps. Working through it also goes a long way toward increasing your understanding of the granularity of Dynamics AX setup and the security considerations taken into account for integrating what can be highly sensitive data. It is recommended that this setup be done with admin-level security; however, only full control of the AIF module is required. The setup is almost identical in versions prior to Dynamics AX 2009; minor differences will be noted. All AIF setup tables can be found in Dynamics AX under Basic | Setup | Application Integration Framework. The first step is simple but critical. In the Transport Adapters form, add a new entry, select AifBizTalkAdapter in the Adapter Class drop-down, check Active, and set Direction to Receive and Respond. You will also notice two other out-of-the-box adapters: FILE and MSMQ. This is a one-time setup that is effective across all companies. Next, using the Channels form, set up an active channel for your specific BizTalk server. Choose a meaningful and identifiable Channel ID and Name, such as BizTalkChannelID and BizTalkChannel. Set the Adapter to BizTalk Adapter, check Active, set Direction to Both, and set Response channel to the Channel ID of BizTalkChannelID. Set the Address to your BizTalk server (I2CDARS1 in the example shown).
BizTalk Application: Dynamics AX Message Outflow
Packt
03 Aug 2011
4 min read
Microsoft BizTalk 2010: Line of Business Systems Integration A practical guide to integrating Line of Business systems with BizTalk Server 2010
Rather than repeating code in several BizTalk solutions whenever you need to retrieve data from the AIF Queue, it is relatively simple to create a general solution to accomplish this. This solution will retrieve all data via the BizTalk Dynamics AX adapter by polling the queue at a set interval. The minimum polling interval is 1 minute, so any messages you put in the AIF Queue will not be consumed immediately by BizTalk. The complete solution (Chapter9-AXMessageOutflow) is included with the source code. We'll start by creating a new BizTalk project, Chapter9-AXMessageOutflow, in Visual Studio. Add a new orchestration, ProcessOutboundAXMessage.odx, which will be the only orchestration required for this example. We'll also need to add a reference to the Microsoft.Dynamics.BizTalk.Adapter.Schemas assembly and sign the project with a strong name key.
Message setup
Next, we'll add two messages to our orchestration: msgAXOutboundMessage and msgAXDocument. These will be the only two messages required in this example. The first message, msgAXOutboundMessage, is of type DynamicsAX5.Message.Envelope. The schema is located in the referenced Microsoft.Dynamics.BizTalk.Adapter.Schemas assembly. All outbound messages from the AIF Queue are of this type. As you can see from the sample screenshot below, we have some metadata in the Header node, but what we are really interested in is the XML contents of the Body node. The contents of the MessageParts node in the Body node will be of type ExchangeRatesService_ExchangeRates.xsd; all the schemas we require for both inbound and outbound transactions can be generated using the adapter. For the second message, since we don't want to specify a document type, we will use System.Xml.XmlDocument for the Message Type. 
Using the System.Xml.XmlDocument message type allows for great flexibility in this solution. We can push any message to the AIF queue, and no changes to this BizTalk application are required; only consuming applications may need to add the AX schema of the message in order to process it.
Orchestration setup
Next, we create a new logical port that will receive all messages from Dynamics AX via the AIF Queue, with the following settings:
Port Name: ReceiveAXOutboundMessage_Port
Port Type Name: ReceiveAXOutboundMessage_PortType
Communication Pattern: One-Way
Port direction of communication: I'll always be receiving messages on this port
Port Binding: Specify Later
Also, create a new send port. For this example, we'll just send to a folder drop using the FILE adapter so that we can easily view the XML documents. In practice, other BizTalk applications will most likely process these messages, so you may choose to modify the send port to meet your requirements. Send port settings:
Port Name: SendAXDocument_Port
Port Type Name: SendAXDocument_PortType
Communication Pattern: One-Way
Port direction of communication: I'll always be sending messages on this port
Port Binding: Specify Later
Next, we will need to add the following to the orchestration:
Receive shape (receive the msgAXOutboundMessage message)
Expression shape (determine the file name for msgAXDocument)
Message assignment shape (construct the msgAXDocument message)
Send shape (send the msgAXDocument message)
We'll also add three variables: aifActionName and xpathExpression of type System.String, and xmlDoc of type System.Xml.XmlDocument. In the expression shape, we want to extract the AIF action so that we can name the outbound XML documents in a similar fashion. This will allow us to easily identify the message type from AX. Put the following inside the expression shape below the receive to extract the AIF action name:
aifActionName = msgAXOutboundMessage(DynamicsAx5.Action);
aifActionName = aifActionName.Substring(55, aifActionName.LastIndexOf('/') - 55);
Now we need to extract the contents of the body message, which is the XML document that we are interested in. Inside the message assignment shape, we will use XPath to extract the message. What we are interested in is the contents of the Body node in the DynamicsAX5.Message.Envelope message we receive from AX via the AIF Queue. Add the following code inside the assignment shape to extract the XML, assign it to the message we are sending out, and set a common name that we can use in our send port:
// Extract the contents of the Body node in the Envelope message
xpathExpression = "/*[local-name()='Envelope' and namespace-uri()='http://schemas.microsoft.com/dynamics/2008/01/documents/Message']/*[local-name()='Body' and namespace-uri()='http://schemas.microsoft.com/dynamics/2008/01/documents/Message']";
xmlDoc = xpath(msgAXOutboundMessage, xpathExpression);
// Extract the XML we are interested in
xmlDoc.LoadXml(xmlDoc.FirstChild.FirstChild.InnerXml);
// Set the message to the XML document
msgAXDocument = xmlDoc;
// Assign the FILE.ReceivedFileName property
msgAXDocument(FILE.ReceivedFileName) = aifActionName;
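The two snippets above can be exercised outside BizTalk. Below is a rough Python sketch (not part of the article's solution) that mimics the same logic on a hand-made sample Envelope. The sample action URI and the ExchangeRates payload element are assumptions for illustration only; the 55-character offset matches the fixed "http://schemas.microsoft.com/dynamics/2008/01/services/" prefix that the orchestration's Substring call skips.

```python
import xml.etree.ElementTree as ET

MSG_NS = "{http://schemas.microsoft.com/dynamics/2008/01/documents/Message}"

# Hypothetical AIF action URI; real values share the same 55-character prefix.
SAMPLE_ACTION = ("http://schemas.microsoft.com/dynamics/2008/01/services/"
                 "ExchangeRatesService/read")

SAMPLE_ENVELOPE = (
    '<Envelope xmlns="http://schemas.microsoft.com/dynamics/2008/01/'
    'documents/Message">'
    "<Header><Action>" + SAMPLE_ACTION + "</Action></Header>"
    '<Body><MessageParts><ExchangeRates xmlns="">'
    "<Rate>1.25</Rate></ExchangeRates></MessageParts></Body>"
    "</Envelope>"
)

def extract_action_name(action_uri: str) -> str:
    # Same arithmetic as the orchestration's expression shape:
    # skip the 55-character prefix and stop at the last '/'.
    return action_uri[55:action_uri.rindex("/")]

def extract_payload(envelope_xml: str) -> ET.Element:
    # Equivalent of the XPath plus FirstChild.FirstChild navigation:
    # dig into Body/MessageParts and return its first child document.
    envelope = ET.fromstring(envelope_xml)
    message_parts = envelope.find(f"{MSG_NS}Body/{MSG_NS}MessageParts")
    return list(message_parts)[0]

print(extract_action_name(SAMPLE_ACTION))    # ExchangeRatesService
print(extract_payload(SAMPLE_ENVELOPE).tag)  # ExchangeRates
```

Running it shows how the action name becomes a friendly file name and how the payload document is peeled out of the envelope.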
Getting started with Apache Cassandra
Packt
29 Jul 2011
8 min read
The Apache Cassandra Project develops a highly scalable second-generation distributed database, bringing together a fully distributed design and a ColumnFamily-based data model. This article contains recipes that allow users to hit the ground running with Cassandra. We show several recipes to set up Cassandra, including cursory explanations of the key configuration files. It also contains recipes for connecting to Cassandra and executing commands from both the application programming interface and the command-line interface. Java profiling tools such as JConsole are also described. The recipes in this article should help the user understand the basics of running and working with Cassandra.
A simple single node Cassandra installation
Cassandra is a highly scalable distributed database. While it is designed to run on multiple production-class servers, it can be installed on desktop computers for functional testing and experimentation. This recipe shows how to set up a single instance of Cassandra.
Getting ready
Visit http://cassandra.apache.org in your web browser and find a link to the latest binary release. New releases happen often. For reference, this recipe will assume apache-cassandra-0.7.2-bin.tar.gz was the name of the downloaded file.
How to do it...
Download a binary version of Cassandra:
$ mkdir $HOME/downloads
$ cd $HOME/downloads
$ wget <url_from_getting_ready>/apache-cassandra-0.7.2-bin.tar.gz
Choose a base directory that the user you will run Cassandra as has read and write access to:
Default Cassandra storage locations
Cassandra defaults to saving data in /var/lib/cassandra and logs in /var/log/cassandra. These locations will likely not exist and will require root-level privileges to create. To avoid permission issues, carry out the installation in user-writable directories.
Create a cassandra directory in your home directory. 
Inside the cassandra directory, create commitlog, log, saved_caches, and data subdirectories:
$ mkdir $HOME/cassandra/
$ mkdir $HOME/cassandra/{commitlog,log,data,saved_caches}
$ cd $HOME/cassandra/
$ cp $HOME/downloads/apache-cassandra-0.7.2-bin.tar.gz .
$ tar -xf apache-cassandra-0.7.2-bin.tar.gz
Use the echo command to display the path to your home directory. You will need this when editing the configuration file:
$ echo $HOME
/home/edward
This tar file extracts to the apache-cassandra-0.7.2 directory. Open the conf/cassandra.yaml file inside it in your text editor and make changes to the following sections:
data_file_directories:
- /home/edward/cassandra/data
commitlog_directory: /home/edward/cassandra/commitlog
saved_caches_directory: /home/edward/cassandra/saved_caches
Edit the $HOME/apache-cassandra-0.7.2/conf/log4j-server.properties file to change the directory where logs are written:
log4j.appender.R.File=/home/edward/cassandra/log/system.log
Start the Cassandra instance and confirm it is running by connecting with nodetool:
$ $HOME/apache-cassandra-0.7.2/bin/cassandra
INFO 17:59:26,699 Binding thrift service to /127.0.0.1:9160
INFO 17:59:26,702 Using TFramedTransport with a max frame size of 15728640 bytes.
$ $HOME/apache-cassandra-0.7.2/bin/nodetool --host 127.0.0.1 ring
Address Status State Load Token
127.0.0.1 Up Normal 385 bytes 398856952452...
How it works...
Cassandra comes as a compiled Java application in a tar file. By default, it is configured to store data inside /var. By changing options in the cassandra.yaml configuration file, we point Cassandra at the specific directories we created.
YAML: YAML Ain't Markup Language
YAML™ (rhymes with "camel") is a human-friendly, cross-language, Unicode-based data serialization language designed around the common native data types of agile programming languages. It is broadly useful for programming needs ranging from configuration files and Internet messaging to object persistence and data auditing. 
See http://www.yaml.org for more information.
After startup, Cassandra detaches from the console and runs as a daemon. It opens several ports, including the Thrift port 9160 and the JMX port 8080 (for Cassandra versions 0.8.X and higher, the default JMX port is 7199). The nodetool program communicates with the JMX port to confirm that the server is alive.
There's more...
Due to the distributed design, many features require multiple running instances of Cassandra to exercise. For example, you cannot experiment with a Replication Factor (the setting that controls how many nodes data is stored on) larger than one. Replication Factor also dictates which Consistency Level settings can be used; with one node, the highest Consistency Level is ONE.
Reading and writing test data using the command-line interface
The command-line interface (CLI) presents users with an interactive tool to communicate with the Cassandra server and execute the same operations that can be done from client server code. This recipe takes you through all the steps required to insert and read data.
How to do it...
Start the Cassandra CLI and connect to an instance:
$ <cassandra_home>/bin/cassandra-cli
[default@unknown] connect 127.0.0.1/9160;
Connected to: "Test Cluster" on 127.0.0.1/9160
New clusters do not have any preexisting keyspaces or column families. These need to be created so data can be stored in them:
[default@unknown] create keyspace testkeyspace;
[default@testkeyspace] use testkeyspace;
Authenticated to keyspace: testkeyspace
[default@testkeyspace] create column family testcolumnfamily;
Insert and read back data using the set and get commands:
[default@testk..] set testcolumnfamily['thekey']['thecolumn']='avalue';
Value inserted. 
[default@testkeyspace] assume testcolumnfamily validator as ascii;
[default@testkeyspace] assume testcolumnfamily comparator as ascii;
[default@testkeyspace] get testcolumnfamily['thekey'];
=> (column=thecolumn, value=avalue, timestamp=1298580528208000)
How it works...
The CLI is a helpful interactive facade on top of the Cassandra API. After connecting, users can carry out administrative or troubleshooting tasks.
Running multiple instances on a single machine
Cassandra is typically deployed on clusters of multiple servers. While it can be run on a single node, simulating a production cluster of multiple nodes is best done by running multiple instances of Cassandra. This recipe is similar to A simple single node Cassandra installation earlier in this article; however, in order to run multiple instances on a single machine, we create separate sets of directories and modified configuration files for each node.
How to do it...
Ensure your system has proper loopback address support. Each system should have the entire range of 127.0.0.1-127.255.255.255 configured as localhost for loopback. Confirm this by pinging 127.0.0.1 and 127.0.0.2:
$ ping -c 1 127.0.0.1
PING 127.0.0.1 (127.0.0.1) 56(84) bytes of data.
64 bytes from 127.0.0.1: icmp_req=1 ttl=64 time=0.051 ms
$ ping -c 1 127.0.0.2
PING 127.0.0.2 (127.0.0.2) 56(84) bytes of data.
64 bytes from 127.0.0.2: icmp_req=1 ttl=64 time=0.083 ms
Use the echo command to display the path to your home directory. You will need this when editing the configuration file:
$ echo $HOME
/home/edward
Create a hpcas directory in your home directory. Inside the hpcas directory, create commitlog, log, saved_caches, and data subdirectories:
$ mkdir $HOME/hpcas/
$ mkdir $HOME/hpcas/{commitlog,log,data,saved_caches}
$ cd $HOME/hpcas/
$ cp $HOME/downloads/apache-cassandra-0.7.2-bin.tar.gz .
$ tar -xf apache-cassandra-0.7.2-bin.tar.gz
Download and extract a binary distribution of Cassandra. 
After extracting the binary, move/rename the directory by appending '1' to the end of its name:
$ mv apache-cassandra-0.7.2 apache-cassandra-0.7.2-1
Open apache-cassandra-0.7.2-1/conf/cassandra.yaml in a text editor. Change the default storage locations and IP addresses to accommodate our multiple instances on the same machine without clashing with each other:
data_file_directories:
- /home/edward/hpcas/data/1
commitlog_directory: /home/edward/hpcas/commitlog/1
saved_caches_directory: /home/edward/hpcas/saved_caches/1
listen_address: 127.0.0.1
rpc_address: 127.0.0.1
Each instance will have a separate logfile, which will aid in troubleshooting. Edit conf/log4j-server.properties:
log4j.appender.R.File=/home/edward/hpcas/log/system1.log
Cassandra uses JMX (Java Management Extensions), which allows you to configure an explicit port but always binds to all interfaces on the system. As a result, each instance will require its own management port. Edit cassandra-env.sh:
JMX_PORT=8001
Start this instance:
$ ~/hpcas/apache-cassandra-0.7.2-1/bin/cassandra
INFO 17:59:26,699 Binding thrift service to /127.0.0.1:9160
INFO 17:59:26,702 Using TFramedTransport with a max frame size of 15728640 bytes.
$ bin/nodetool --host 127.0.0.1 --port 8001 ring
Address Status State Load Token
127.0.0.1 Up Normal 385 bytes 398856952452...
At this point, your cluster comprises a single node. To join other nodes to the cluster, carry out the preceding steps replacing '1' with '2', '3', '4', and so on:
$ mv apache-cassandra-0.7.2 apache-cassandra-0.7.2-2
Open ~/hpcas/apache-cassandra-0.7.2-2/conf/cassandra.yaml in a text editor:
data_file_directories:
- /home/edward/hpcas/data/2
commitlog_directory: /home/edward/hpcas/commitlog/2
saved_caches_directory: /home/edward/hpcas/saved_caches/2
listen_address: 127.0.0.2
rpc_address: 127.0.0.2
Edit ~/hpcas/apache-cassandra-0.7.2-2/conf/log4j-server.properties:
log4j.appender.R.File=/home/edward/hpcas/log/system2.log
Edit ~/hpcas/apache-cassandra-0.7.2-2/conf/cassandra-env.sh:
JMX_PORT=8002
Start this instance:
$ ~/hpcas/apache-cassandra-0.7.2-2/bin/cassandra
How it works...
The Thrift port has to be the same for all instances in a cluster, so it is impossible to run multiple nodes of the same cluster on one IP address. However, computers have multiple loopback addresses: 127.0.0.1, 127.0.0.2, and so on, and these addresses do not usually need to be configured explicitly. Each instance also needs its own storage directories. Following this recipe, you can run as many instances on your computer as you wish, or even multiple distinct clusters. You are only limited by resources such as memory, CPU time, and hard disk space.
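The per-instance edits in this recipe follow a mechanical pattern: suffix every directory with the instance number, bump the loopback address, and bump the JMX port. A small Python sketch of that convention, using the hpcas layout from the recipe (the helper itself is illustrative, not part of Cassandra):

```python
def instance_config(home: str, n: int) -> dict:
    """Return the cassandra.yaml / env values this recipe assigns to the
    n-th local instance: numbered directories, one loopback IP per node,
    and JMX port 8000 + n."""
    return {
        "data_file_directories": [f"{home}/hpcas/data/{n}"],
        "commitlog_directory": f"{home}/hpcas/commitlog/{n}",
        "saved_caches_directory": f"{home}/hpcas/saved_caches/{n}",
        "listen_address": f"127.0.0.{n}",
        "rpc_address": f"127.0.0.{n}",
        "log4j.appender.R.File": f"{home}/hpcas/log/system{n}.log",
        "JMX_PORT": 8000 + n,
    }

for n in (1, 2):
    cfg = instance_config("/home/edward", n)
    print(cfg["listen_address"], cfg["JMX_PORT"])
```

Generating the values this way makes it easy to see (and script) why instance 2 listens on 127.0.0.2 with JMX port 8002.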

Integrating BizTalk Server and Microsoft Dynamics CRM
Packt
20 Jul 2011
7 min read
What is Microsoft Dynamics CRM? Customer relationship management is a critical part of virtually every business. Dynamics CRM 2011 offers a solution for the three traditional areas of CRM: sales, marketing, and customer service. For customers interested in managing a sales team, Dynamics CRM 2011 has a strong set of features. These include organizing teams into territories, defining price lists, managing opportunities, maintaining organization structures, tracking sales pipelines, enabling mobile access, and much more. If you are using Dynamics CRM 2011 for marketing efforts, then you have the ability to import data from multiple sources, plan campaigns and set up target lists, create mass communications, track responses to campaigns, share leads with the sales team, and analyze the success of a marketing program. Dynamics CRM 2011 also serves as a powerful hub for customer service scenarios. Features include rich account management, case routing and management, a built-in knowledge base, scheduling of call center resources, scripted Q&A workflows called Dialogs, contract management, and more. Besides these three areas, Microsoft pitches Dynamics CRM as a general-purpose application platform called xRM, where the "x" stands for any sort of relationship management. Dynamics CRM has a robust underlying framework for screen design, security roles, data auditing, entity definition, workflow, and mobility, among others. Instead of building these foundational aspects into every application, we can build our data-driven applications within Dynamics CRM. Microsoft has made a big move into the cloud with this release of Dynamics CRM 2011. For the first time in company history, a product was released online (Dynamics CRM Online) prior to the on-premises software. The hosted version of the application runs an identical codebase to the on-premises version, meaning that code built to support a local instance will work just fine in the cloud. 
In addition to the big play in CRM hosting, Microsoft has also baked Windows Azure integration into Dynamics CRM 2011. Specifically, we now have the ability to configure a call-out to an Azure AppFabric Service Bus endpoint. To do this, the downstream service must implement a specific WCF interface, and within CRM, the Azure AppFabric plugin is configured to call that downstream service through the Azure AppFabric Service Bus relay service. For BizTalk Server to accommodate this pattern, we would want to build a proxy service that implements the required Dynamics CRM 2011 interface and forwards requests into a BizTalk Server endpoint. This article will not demonstrate this scenario, however, as the focus will be on integrating with an on-premises instance only. Why Integrate Dynamics CRM and BizTalk Server? There are numerous reasons to tie these two technologies together. Recall that BizTalk Server is an enterprise integration bus that connects disparate applications. There can be a natural inclination to hoard data within a particular application, but if we embrace real-time message exchange, we can actually have a more agile enterprise. Consider a scenario in which a customer's full "contact history" resides in multiple systems. The Dynamics CRM 2011 contact center may only serve a specific audience, and other systems within the company hold additional details about the company's customers. One design choice could be to bulk load that information into Dynamics CRM 2011 on a scheduled interval. However, it may be more effective to call out to a BizTalk Server service that aggregates data across systems and returns a composite view of a customer's history with the company. In a similar manner, think about how information is shared between systems. A public website for a company may include a registration page where visitors sign up for more information and deeper access to content. That registration event is relevant to multiple systems within the company. 
We could send that initial registration message to BizTalk Server and then broadcast that message to the multiple systems that want to know about that customer. A marketing application may want to respond with a personalized email welcoming that person to the website. The sales team may decide to follow up with that person if they expressed interest in purchasing products. Our Dynamics CRM 2011 customer service center could choose to automatically add the registration event so that it is ready whenever that customer calls in. In this case, BizTalk Server acts as a central router of data and invokes the exposed Dynamics CRM services to create customers and transactions. Communicating from BizTalk Server to Dynamics CRM The way that you send requests from BizTalk Server to Dynamics CRM 2011 has changed significantly in this release. In the previous versions of Dynamics CRM, a BizTalk "send" adapter was available for communicating with the platform. Dynamics CRM 2011 no longer ships with an adapter and developers are encouraged to use the WCF endpoints exposed by the product. Dynamics CRM has both a WCF REST and SOAP endpoint. The REST endpoint can only be used within the CRM application itself. For instance, you can build what is called a web resource that is embedded in a Dynamics CRM page. This resource could be a Microsoft Silverlight or HTML page that looks up data from three different Dynamics CRM entities and aggregates them on the page. This web resource can communicate with the Dynamics CRM REST API, which is friendly to JavaScript clients. Unfortunately, you cannot use the REST endpoint from outside of the Dynamics CRM environment, but because BizTalk cannot communicate with REST services, this has little impact on the BizTalk integration story. The Dynamics CRM SOAP API, unlike its ASMX web service predecessor, is static and operates with a generic Entity data structure. 
Instead of having a dynamic WSDL that exposes typed definitions for all of the standard and custom entities in the system, the Dynamics CRM 2011 SOAP API has a set of operations (for example, Create, Retrieve) that work with a single object type. The Entity object has a property identifying which concrete object it represents (for example, Account or Contract), and a name/value pair collection that represents the columns and values in the object it represents. For instance, an Entity may have a LogicalName set to "Account" and columns for "telephone1", "emailaddress", and "websiteurl". In essence, this means that we have two choices when interacting with Dynamics CRM 2011 from BizTalk Server. Our first option is to directly consume and invoke the untyped SOAP API. Doing this involves creating maps from a canonical schema to the type-less Entity schema. In the case of a Retrieve operation, we may also have to map the type-less Entity message back to a structured message for further processing. Below, we will walk through an example of this. The second option involves creating a typed proxy service for BizTalk Server to invoke. Dynamics CRM has a feature-rich Software Development Kit (SDK) that allows us to create typed objects and send them to the Dynamics CRM SOAP endpoint. This proxy service will then expose a typed interface to BizTalk that operates as desired with a strongly typed schema. An upcoming exercise demonstrates this scenario. Which choice is best? For simple solutions, it may be fine to interact directly with the Dynamics CRM 2011 SOAP API. If you are updating a couple of fields on an entity, or retrieving a pair of data values, the straightforward solution is worth the messiness of the untyped schema. However, if you are making large-scale changes to entities, or getting back an entire entity and publishing it to the BizTalk bus for multiple subscribers to receive, then working strictly with a typed proxy service is the best route. 
However, we will look at both scenarios below, and you can make that choice for yourself. Integrating Directly with the Dynamics CRM 2011 SOAP API In the following series of steps, we will look at how to consume the native Dynamics CRM SOAP interface in BizTalk Server. We will first look at how to query Dynamics CRM to return an Entity. After that, we will see the steps for creating a new Entity in Dynamics CRM. Querying Dynamics CRM from BizTalk Server In this scenario, BizTalk Server will request details about a specific Dynamics CRM "contact" record and send the result of that inquiry to another system.
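The untyped Entity shape described above can be illustrated with a short sketch. This is a hypothetical Python model of the concept only (the real SDK types are .NET classes); it mirrors just the LogicalName-plus-attribute-collection structure:

```python
class Entity:
    """Minimal model of the generic CRM 2011 SOAP Entity: which concrete
    object it represents plus a name/value collection of column values."""

    def __init__(self, logical_name: str):
        self.logical_name = logical_name
        self.attributes: dict = {}

# An "Account" expressed through the untyped interface, using the
# column names mentioned in the text.
account = Entity("Account")
account.attributes["telephone1"] = "425-555-0100"
account.attributes["emailaddress"] = "info@example.com"
account.attributes["websiteurl"] = "http://www.example.com"

print(account.logical_name, sorted(account.attributes))
```

Every operation (Create, Retrieve, and so on) works against this one shape, which is why mapping to and from it is the central chore when consuming the SOAP API directly.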

Communicating from Dynamics CRM to BizTalk Server
Packt
20 Jul 2011
6 min read
Microsoft BizTalk 2010: Line of Business Systems Integration A practical guide to integrating Line of Business systems with BizTalk Server 2010 There are three viable places where Dynamics CRM can communicate with BizTalk Server. First, a Dynamics CRM form is capable of executing client-side JavaScript at various points in the form lifecycle. One can definitely use JavaScript to invoke web services, including web services exposed by BizTalk Server. However, note that JavaScript invocation of web services is typically synchronous and could have a negative impact on the user experience if a form must constantly wait for responses from distributed services. Also, JavaScript that runs within Dynamics CRM is client-side and tied directly to the page on which it resides. If we programmatically interact with a Dynamics CRM entity, then any code existing in the client-side script will not get invoked. For instance, if we send a message to BizTalk via JavaScript after an "account" record is created, this logic would not fire if we created the "account" record programmatically. The second place where Dynamics CRM can communicate with BizTalk Server is through workflows. A workflow in Dynamics CRM is an automated process where a set of steps is executed according to rules that we define. For example, when a sales opportunity is closed, we run a workflow that adds a message to the customer record, notifies all parties tied to the opportunity, and sends a polite email to the lost prospect. Workflows are based on Windows Workflow 4.0 technology and can be built either in the Dynamics CRM application itself or within Visual Studio 2010. The Dynamics CRM web application allows us to piece together workflows using previously registered workflow steps. If we need new workflow steps or need to construct something complex, we can jump into Visual Studio 2010 and define the workflow there. Why would we choose to use a workflow to send a message to BizTalk Server? 
If you have a long-running process that can either be scheduled or executed on demand, and you want the option for users to modify the process, a workflow may be the right choice. The final strategy for communicating between Dynamics CRM and BizTalk Server is to use plugins. Plugins are server-based application extensions that execute business logic and are tied directly to an entity. This means that they are invoked whether we work in the Dynamics CRM web interface or through the API. A plugin can run either synchronously or asynchronously, depending on the situation. For instance, if we need to validate the data on a record prior to saving it, we can set a plugin to run before the "save" operation is committed and provide some user feedback on the invalid information. Or, we could choose to asynchronously call a plugin after a record is saved and transmit data to our service bus, BizTalk Server. In the following exercise, we will leverage plugins to send data from Dynamics CRM to BizTalk Server. Integration with BizTalk Server In this first walkthrough, we will build a plugin that communicates from Dynamics CRM to BizTalk Server. An event message will be sent to BizTalk whenever a change occurs on an Account record in Dynamics CRM. Setup This exercise leverages a BizTalk Server project already present in your Visual Studio 2010 solution. We are going to publish a web service from BizTalk Server that takes in a message and routes it to a BizTalk send port that writes the message to the file system. If you have not already done so, go to the code package, navigate to C:\LOBIntegration\Chapter03\Chapter3-DynamicsCRM, and open the Visual Studio 2010 solution file named Chapter3-DynamicsCRM.sln. Find the BizTalk Server project named Chapter3-DynamicsCRM.AcctRouting and open it. The code package includes a custom schema named AccountEventChange_XML.xsd; open it and notice which elements we want from Dynamics CRM 2011 when an account changes. 
The first element, EventSource, is used to designate the source of the change event, as there may be multiple systems that share changes in an organization's accounts. This BizTalk project should be set to deploy to a BizTalk application named Chapter3. Build and deploy the project to the designated BizTalk Server. After confirming a successful deployment, launch the BizTalk WCF Service Publishing Wizard. We are going to use this schema to expose a web service entry point into BizTalk Server that Dynamics CRM 2011 can invoke. On the WCF Service Type wizard page, select a WCF-BasicHttp adapter and set the service to expose metadata and have the wizard generate a receive location for us in the Chapter3 application: On the Create WCF Service wizard page, choose to Publish schemas as WCF service. This option gives us fine-grained control over the naming associated with our service. On the next page, delete the two-way operation already present in the service definition. Rename the topmost service definition to AccountChangeService and assign the service the same name. Right-click the service and create a new one-way operation named PublishAccountChange. Right-click the Request message of the operation and choose the AccountChangeEvent message from our BizTalk Project's DLL: On the following wizard page, set the namespace of the service to http://Chapter3/AccountServices. Next, set the location of our service to http://localhost/AccountChangeService and select the option to allow anonymous access to the generated service. Finally, complete the wizard by clicking the Create button on the final wizard page. Confirm that the wizard successfully created both an IIS-hosted web service, and a BizTalk receive port/location. Ensure that the IIS web service is running under an Application Pool that has permission to access the BizTalk databases. In order to test this service, first, go to the BizTalk Server Administration Console and locate the Chapter3 application. 
Right-click the Send Ports folder and create a new static one-way send port named Chapter3.SendAccountChange.FILE. Set the send port to use the FILE adapter and select the FileDrop\DropCustomerChangeEvent folder that is present in the code package. This send port should listen for all account change event messages, regardless of which receive location (and system) they came from. Go to the Filters tab of this send port. Set the filter Property to BTS.MessageType and the filter Value to http://Chapter3-DynamicsCRM.AcctRouting.AccountChangeEvent_XML#AccountChangeEvent.

All that remains is to test our service. Open the WCF Test Client application and add a new service reference to http://localhost/AccountChangeService/AccountChangeService.svc. Invoke the PublishAccountChange method and, if everything is configured correctly, we will see a message emitted by BizTalk Server that matches our service input parameters.

We are now ready to author the Dynamics CRM plugin, which calls this BizTalk service.
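The BTS.MessageType filter value used above follows BizTalk's convention of combining a schema's target namespace and its root element name with a `#` separator. A quick sketch of that string convention (Python used only for illustration):

```python
def bts_message_type(target_namespace, root_element):
    """BizTalk derives BTS.MessageType as '<targetNamespace>#<rootElement>'."""
    return f"{target_namespace}#{root_element}"


# The filter value configured on the send port in this walkthrough:
filter_value = bts_message_type(
    "http://Chapter3-DynamicsCRM.AcctRouting.AccountChangeEvent_XML",
    "AccountChangeEvent",
)
```

Knowing this convention makes it easy to predict the filter value for any schema you deploy.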
Packt
15 Jul 2011
5 min read

How to Integrate vtiger CRM with your Website

vtiger CRM Beginner's Guide: Record and consolidate all your customer information with vtiger CRM

To go through the exercises in this article, you'll need basic knowledge of HTML. If you already understand basic web development concepts, then you'll also be well prepared to delve into vtiger CRM's API.

The vtiger CRM API

For you developers out there, all of the ins and outs of vtiger CRM's API are fully documented at http://api.vtiger.com. For those of you not familiar with APIs, API stands for Application Programming Interface. It's an interface for computers rather than humans.

What does the API do?

To illustrate: you can access the human interface of vtiger CRM by logging in with your username and password. The screens that are shown to you, with all of the buttons and links, make up the human interface. An API, on the other hand, is an interface for other computers. Computers don't need the fancy stuff that we humans do in the interface; it's all text.

What is the benefit of the API?

With an API, vtiger allows other computer systems to inform it and to ask it questions. This makes everyone's life easier, especially if it means you don't have to type the same data twice into two systems. Here's an example: you have a website where people make sales inquiries, and you capture that information as a sales lead. You might receive that information as an email. At that point, you could just leave the data in your email and refer to it as needed (which many people still do), or you could enter it into a CRM tool like vtiger so you can keep track of your leads. You can take it one step further by using vtiger's API. You can tell your website how to talk to vtiger's API, and now your website can send leads directly into vtiger, and... voila! When you log in, the person who just made an inquiry on your website is now a lead in vtiger.

Sending a lead into vtiger CRM from your website

Well, what are we waiting for? Let's give it a try.
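Before wiring up the Webforms plugin, it helps to picture the website side of the submission: an HTTP POST of form fields to your vtiger installation. The sketch below only builds such a payload; the field names are illustrative, not vtiger's actual webform contract, so check the form generated for your installation for the real parameter names:

```python
from urllib.parse import urlencode


def build_lead_payload(first, last, email, access_key):
    # Field names here are hypothetical examples of what a lead-capture
    # form might carry; vtiger's Webforms plugin defines the real ones.
    return {
        "firstname": first,
        "lastname": last,
        "email": email,
        "publicid": access_key,  # the access key from Webforms.config.php
    }


payload = build_lead_payload("Ada", "Lovelace", "ada@example.com",
                             "iFOdqrI8lS5UhNTa")
encoded = urlencode(payload)  # what the browser sends in the POST body
```

On a live site, this encoded body would be POSTed to the webform handler URL of your Internet-accessible vtiger installation.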
There is a plugin/extension in vtiger called Webforms, and it uses the vtiger API to get data into vtiger. In the following exercises, we're going to:

- Configure the Webforms plugin
- Create a webform on your company website

IMPORTANT NOTE: If you want to be able to send leads into vtiger from your website, your vtiger installation must be accessible on the Internet. If you have installed vtiger on a computer or server on your internal network, then you won't be able to send leads into vtiger from your website, because your website won't be able to connect with the computer/server that vtiger is running on.

Time for action – configuring the Webforms plugin

OK, let's roll up our sleeves and get ready to do a little code editing:

1. Navigate to the Webforms configuration file in vtigercrm/modules/Webforms/Webforms.config.php.
2. Open it up with a text editor like Notepad. Here's what it might look like by default:

<?php
/*+********************************************************************************
 * The contents of this file are subject to the vtiger CRM Public License Version 1.0
 * ("License"); You may not use this file except in compliance with the License
 * The Original Code is: vtiger CRM Open Source
 * The Initial Developer of the Original Code is vtiger.
 * Portions created by vtiger are Copyright (C) vtiger.
 * All Rights Reserved.
 ********************************************************************************/
$enableAppKeyValidation = true;
$defaultUserName = 'admin';
$defaultUserAccessKey = 'iFOdqrI8lS5UhNTa';
$defaultOwner = 'admin';
$successURL = '';
$failureURL = '';
/**
 * JSON or HTML. if incase success and failure URL is NOT specified.
 */
$defaultSuccessAction = 'HTML';
$defaultSuccessMessage = 'LBL_SUCCESS';
?>

We have to be concerned with several lines here.
Specifically, they're the ones that contain the following:

$defaultUserName: This will most likely be the admin user, although it can be any user that you create in your vtiger CRM system.
$defaultUserAccessKey: This key is used for authentication when your website accesses vtiger's API. You can find this key by logging in to vtiger and clicking on the My Preferences link at the top right. It needs to be the key for the username assigned to the $defaultUserName variable.
$defaultOwner: This user will be assigned all of the new leads created by this form by default.
$successURL: If the lead submission is successful, this is the URL to which you want to send the user after they have entered their information. This would typically be a web page that thanks the user for their submission and provides any additional sales information.
$failureURL: This is the URL to which you want to send the user if the submission fails. This would typically be a web page that says something like, "We apologize, but something has gone wrong. Please try again."

Now fill in the values with the information from your own installation of vtiger CRM. Save Webforms.config.php and close it. We've finished configuring the Webforms module.

What just happened?

We configured the Webforms module in vtiger CRM by modifying the plugin's configuration file, Webforms.config.php. Now the Webforms module will:

- Be able to authenticate lead submissions that come from your website
- Assign all new leads to the admin user by default (you'll be able to change this)
- Send the user to a thank-you page, should the lead submission into vtiger succeed
- Send the user to an "Oops" page, should the lead submission into vtiger fail
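Since the settings above are simple quoted PHP assignments, they are easy to read from another tool. A small sketch that pulls them out of the configuration text with a regular expression (the CONFIG string here is just an excerpt of the file shown earlier):

```python
import re

CONFIG = """
$defaultUserName = 'admin';
$defaultUserAccessKey = 'iFOdqrI8lS5UhNTa';
$defaultOwner = 'admin';
"""


def read_php_settings(text):
    # Collect simple $name = 'value'; assignments such as those in
    # Webforms.config.php into a dictionary.
    return dict(re.findall(r"\$(\w+)\s*=\s*'([^']*)'\s*;", text))


settings = read_php_settings(CONFIG)
```

A script like this could, for example, verify that the access key in the config matches the key shown under My Preferences.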
Packt
11 Jul 2011
6 min read

Introduction to Vtiger CRM

Customer. Relationship. Management. These three words by themselves are enough to give us all nightmares, never mind all three of them together in the same sentence. But vtiger CRM takes the pain away and provides us with a solution that will leave us whistling the 1812 Overture (yeah, you can make the cannon noises if you want).

vtiger CRM is an open source customer relationship management tool written in the PHP scripting language, and it uses MySQL as its database. This article introduces you to vtiger CRM, and you'll learn a bit about its origin and history. The major technologies that make up vtiger CRM are outlined, and the core feature set is explained.

The history of vtiger CRM

As you may already know, vtiger CRM is a fork of another CRM package called SugarCRM. SugarCRM was originally released under the SPL, or "SugarCRM Public License", a modified version of the Mozilla Public License 1.1. In 2004, Sridhar Vembu, CEO of AdventNet, created vtiger. SugarCRM was starting to "close" some of its source code for commercial gain, so Vembu and the vtiger team created vtiger under the "honest open source" label, based on SugarCRM's own SPL. SugarCRM was openly upset by this move. They called vtiger CRM "a lie" and claimed the project was not living up to "the spirit" of open source. However, the vtiger team claimed full compliance with the SPL and openly admitted that it was a fork. They also sent a letter to Eric Raymond, a well-renowned advocate of open source. You can read the whole thread from 2004 here; it's a very interesting read. vtiger CRM states that it will protect the CRM to stay free with no dual versioning. To date, vtiger has remained 100 percent open source and free. With the current version of vtiger, version 5, vtiger has lost almost all SugarCRM code.

The technical components of vtiger CRM

vtiger CRM is built on Apache, PHP, and MySQL. We'll briefly review each of these components below.

Apache

Apache is another open source software project.
Apache is a web server. A web server allows you to "host" a website. When you browse the Internet, Apache is what sends the content you're viewing to your screen. Of the roughly 255 million websites that existed in 2010, Apache hosted about 152 million of them. vtiger CRM uses the Apache web server by default, although it can be configured to work with other web servers. Make sure you have a working installation of Apache 2.0.40 or above. Little experience with Apache is necessary for the use of vtiger.

PHP

PHP is another open source software project. It is a sophisticated scripting language that benefits from the contributions of developers all over the world. PHP allows you to process data, among many other things. vtiger CRM is built on PHP. Over the last decade, PHP has become the scripting language of choice for many open source and commercially hosted software packages. With vtiger CRM 5.2, PHP 5.3.x is recommended. If you have significant experience with PHP, you will have more potential for automation and more power in customizing vtiger CRM for your organization's needs.

MySQL

MySQL is a free, open source database management engine. It allows you to store, process, and retrieve relational data. vtiger CRM uses MySQL to store all of its CRM data. To run vtiger well, MySQL 5.1.x is recommended. Like PHP, significant MySQL skill will unlock the true potential for customization that vtiger has.

Smarty

Smarty is a template engine designed for PHP. The result is that it separates the application logic (PHP) from the presentation, or what you see on the screen. vtiger CRM uses Smarty to display its data, such as leads, accounts, and contacts. You don't have to worry about installing Smarty or its version, as it installs along with vtiger. Smarty is a PHP-based templating system that allows vtiger to create its various views and layouts and merge them with vtiger's data layer, MySQL.
CSS

CSS (Cascading Style Sheets) is the standard by which colors and background images are applied in vtiger. If you are proficient in CSS, you can significantly change the look and feel of vtiger CRM and even make important usability improvements specific to your organization. So, if you have experience with Apache, PHP, MySQL, and CSS, vtiger is a perfect fit for you and your organization. I'm sure you're eager to dig right in and start installing, but first let's take a look at vtiger's CRM feature set. Then you'll get a good picture of what you have to start with.

vtiger's core feature set

Lead Management, Sales Force Automation, Activity Management, and Customer Service are at the core of vtiger. However, there are plenty of other features that extend this core. There are also billing, inventory, email integration, and calendaring features that really start to build out the full-featured CRM that vtiger is. There won't be an extensive consideration of all of vtiger's features in this article, but we'll get an overview of the core features that have to do with sales force automation.

Sales and marketing features

First, here are some brief definitions of the terms that vtiger CRM uses:

A Lead represents a company, or a representative of a company, that may have an interest in your products or services.
A Potential is a lead that does have an interest in your products or services.
An Account is either a customer or a prospect that has an attached Potential.
A Contact is a person that is connected to an Account.

Multi-channel lead and account management is an integral part of CRM and is firmly supported by vtiger. You can capture leads from your website, enter them after a conference, and so on, and vtiger will help you work those leads and track them until they become business opportunities and then paying accounts. Notice the default view of leads in the following screenshot.
You can filter the lead view with custom filters, giving you visibility into specific segments of your lead pool, such as location, number of employees, revenue, sales stage, and so on. vtiger CRM also features an easy and secure web lead form that you can place on any website; it will insert new leads directly into your vtiger CRM system. The lead details screen tells you everything you need to know about the lead on one page. You can also incorporate a product-based selling process at the lead stage with integrated products, indicating any products that the lead is interested in. Once you have identified a lead as having potential for business, you can click on the Convert Lead link while viewing a lead to convert it into a Potential and/or Account. We'll get into more detailed information about how to manage your sales process in vtiger CRM, but for now, you can see the power of vtiger CRM beginning to unfold and what it means for your sales process.
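The Lead-to-Potential/Account conversion described above is essentially a data transformation. A toy Python model of that flow, with field names invented purely for illustration:

```python
def convert_lead(lead):
    """Toy sketch of vtiger's Convert Lead action: a qualified lead becomes
    an Account with an attached Potential (field names are our own)."""
    account = {"name": lead["company"], "contacts": [lead["contact"]]}
    potential = {
        "account": account["name"],
        # carry over any products the lead showed interest in
        "products": lead.get("products", []),
    }
    return account, potential


account, potential = convert_lead(
    {"company": "Acme Corp", "contact": "Jane Doe",
     "products": ["CRM hosting"]})
```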
Packt
07 Jul 2011
5 min read

Apache Cassandra: Working in Multiple Datacenter Environments

Cassandra High Performance Cookbook: Over 150 recipes to design and optimize large scale Apache Cassandra deployments

Changing debugging to determine where read operations are being routed

Cassandra replicates data to multiple nodes; because of this, a read operation can be served by multiple nodes. If a read at QUORUM or higher is submitted, a Read Repair is executed, and the read operation will involve more than a single server. In a simple flat network, which nodes are chosen for digest reads is not of much consequence. However, in multiple datacenter or multiple switch environments, having a read cross a switch or a slower WAN link between datacenters can add milliseconds of latency. This recipe shows how to debug the read path to see if reads are being routed as expected.

How to do it...

1. Edit <cassandra_home>/conf/log4j-server.properties, set the logger to DEBUG, then restart the Cassandra process:

log4j.rootLogger=DEBUG,stdout,R

2. In one display, use tail -f <cassandra_log_dir>/system.log to follow the Cassandra log:

DEBUG 06:07:35,060 insert writing local RowMutation(keyspace='ks1', key='65', modifications=[cf1])
DEBUG 06:07:35,062 applying mutation of row 65

3. In another display, open an instance of the Cassandra CLI and use it to insert data. Remember, when using RandomPartitioner, try different keys until log events display on the node you are monitoring:

[default@ks1] set cf1['e']['mycolumn']='value';
Value inserted.

4. Fetch the column using the CLI:

[default@ks1] get cf1['e']['mycolumn'];

Debugging messages should be displayed in the log:

DEBUG 06:08:35,917 weakread reading SliceByNamesReadCommand(table='ks1', key=65, columnParent='QueryPath(columnFamilyName='cf1', superColumnName='null', columnName='null')', columns=[6d79636f6c756d6e,]) locally
...
DEBUG 06:08:35,919 weakreadlocal reading SliceByNamesReadCommand(table='ks1', key=65, columnParent='QueryPath(columnFamilyName='cf1', superColumnName='null', columnName='null')', columns=[6d79636f6c756d6e,])

How it works...

Changing the logging level to DEBUG causes Cassandra to print information as it handles reads internally. This is helpful when troubleshooting a snitch or when using consistency levels such as LOCAL_QUORUM or EACH_QUORUM, which route requests based on network topologies.

Using IPTables to simulate complex network scenarios in a local environment

While it is possible to simulate network failures by shutting down Cassandra instances, another failure you may wish to simulate is one that partitions your network. A failure in which multiple systems are UP but cannot communicate with each other is commonly referred to as a split brain scenario. This state could happen if the uplink between switches fails or the connectivity between two datacenters is lost.

Getting ready

When editing any firewall, it is important to have a backup copy. Testing on a remote machine is risky, as an incorrect configuration could render your system unreachable.

How to do it...

1. Review your IPTables configuration, found in /etc/sysconfig/iptables. Typically, an IPTables configuration accepts loopback traffic:

:RH-Firewall-1-INPUT - [0:0]
-A INPUT -j RH-Firewall-1-INPUT
-A FORWARD -j RH-Firewall-1-INPUT
-A RH-Firewall-1-INPUT -i lo -j ACCEPT

2. Remove the highlighted rule and restart IPTables. This should prevent instances of Cassandra on your machine from communicating with each other:

# /etc/init.d/iptables restart

3. Add a rule to allow a Cassandra instance running on 10.0.1.1 to communicate with 10.0.1.2:

-A RH-Firewall-1-INPUT -m state --state NEW -s 10.0.1.1 -d 10.0.1.2 -j ACCEPT

How it works...

IPTables is a complete firewall that is a standard part of the current Linux kernel.
It has extensible rules that can permit or deny traffic based on many attributes, including, but not limited to, source IP, destination IP, source port, and destination port. This recipe uses the traffic-blocking features to simulate network failures, which can be used to test how Cassandra will operate with network failures.

Choosing IP addresses to work with RackInferringSnitch

A snitch is Cassandra's way of mapping a node to a physical location in the network. It helps determine the location of a node relative to another node in order to ensure efficient request routing. The RackInferringSnitch can only be used if your network IP allocation is divided along the octets of your IP addresses.

Getting ready

The following network diagram demonstrates a network layout that would be ideal for RackInferringSnitch.

How to do it...

In the <cassandra_home>/conf/cassandra.yaml file, set:

endpoint_snitch: org.apache.cassandra.locator.RackInferringSnitch

Restart the Cassandra instance for this change to take effect.

How it works...

The RackInferringSnitch requires no extra configuration as long as your network adheres to a specific subnetting scheme. In this scheme, the first octet, Y.X.X.X, is the private network number, 10. The second octet, X.Y.X.X, represents the datacenter. The third octet, X.X.Y.X, represents the rack. The final octet, X.X.X.Y, represents the host. Cassandra uses this information to determine which hosts are 'closest'. It is assumed that 'closer' nodes will have more bandwidth and less latency between them. Cassandra uses this information to send digest reads to the closest nodes and to route requests efficiently.

There's more...

While it is ideal if the network conforms to what is required for RackInferringSnitch, it is not always practical or possible. It is also rigid: if a single machine does not adhere to the convention, the snitch will fail to work properly.
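The octet scheme described above maps directly to code. A small sketch of how a RackInferringSnitch-style lookup reads the datacenter and rack out of an address:

```python
def rack_inferring_location(ip):
    """Infer (datacenter, rack) the way RackInferringSnitch does:
    the second octet is the datacenter and the third is the rack."""
    octets = ip.split(".")
    return int(octets[1]), int(octets[2])


# e.g. a node at 10.20.114.10 lives in datacenter 20, rack 114
dc, rack = rack_inferring_location("10.20.114.10")
```

Two nodes are considered "close" when their datacenter (and ideally rack) values match, which is why a single non-conforming address breaks the scheme.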
Packt
29 Jun 2011
6 min read

Apache Cassandra: Libraries and Applications

Introduction

Cassandra's popularity has led to several pieces of software developing around it. Some of these are libraries and utilities that make working with Cassandra easier. Other software applications have been built completely around Cassandra to take advantage of its scalability. This article describes some of these utilities.

Building Cassandra from source

The Cassandra code base is active and typically has multiple branches. It is a good practice to run official releases, but at times it may be necessary to use a feature or a bug fix that has not yet been released. Building and running Cassandra from source allows for a greater level of control over the environment. Having the source code, it is also possible to trace down and understand the context of warning or error messages you may encounter. This recipe shows how to check out Cassandra code from Subversion (SVN) and build it.

How to do it...

1. Visit http://svn.apache.org/repos/asf/cassandra/branches with a web browser. Multiple subfolders will be listed:

/cassandra-0.5/
/cassandra-0.6/

2. Each folder represents a branch. To check out the 0.6 branch:

$ svn co http://svn.apache.org/repos/asf/cassandra/branches/cassandra-0.6/

3. Trunk is where most new development happens. To check out trunk:

$ svn co http://svn.apache.org/repos/asf/cassandra/trunk/

4. To build the release tar, move into the folder created and run:

$ ant release

This creates a release tar in build/apache-cassandra-0.6.5-bin.tar.gz, a release jar, and an unzipped version in build/dist.

How it works...

Subversion (SVN) is a revision control system commonly used to manage software projects. Subversion repositories are commonly accessed via the HTTP protocol, which allows for simple browsing. This recipe uses the command-line client to check out code from the repository.
Building the contrib stress tool for benchmarking

stress is an easy-to-use command-line tool for stress testing and benchmarking Cassandra. It can be used to generate a large number of requests in short periods of time, and it can also be used to generate a large amount of data to test performance with. This recipe shows how to build it from the Cassandra source.

Getting ready

Before running this recipe, complete the Building Cassandra from source recipe discussed above.

How to do it...

From the source directory, run ant. Then, change to the contrib/stress directory and run ant again:

$ cd <cassandra_src>
$ ant jar
$ cd contrib/stress
$ ant jar
...
BUILD SUCCESSFUL
Total time: 0 seconds

How it works...

The build process compiles code into the stress.jar file.

Inserting and reading data with the stress tool

The stress tool is a multithreaded load tester specifically for Cassandra. It is a command-line program with a variety of knobs that control its operation. This recipe shows how to run the stress tool.

Before you begin...

See the previous recipe, Building the contrib stress tool for benchmarking, before doing this recipe.

How to do it...

Run the <cassandra_src>/bin/stress command to execute 10,000 insert operations:

$ bin/stress -d 127.0.0.1,127.0.0.2,127.0.0.3 -n 10000 --operation INSERT
Keyspace already exists.
total,interval_op_rate,interval_key_rate,avg_latency,elapsed_time
10000,1000,1000,0.0201764,3

How it works...

The stress tool is an easy way to do load testing against a cluster. It can insert or read data and report on the performance of those operations. This is also useful in staging environments where significant volumes of disk data are needed to test at scale. Generating data is also useful for practicing administration techniques such as joining new nodes to a cluster.

There's more...

It is best to run the load testing tool on a different node than the system being tested, and to remove anything else that causes unnecessary contention.
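The summary line printed by the stress tool is plain CSV, so pairing it with its header makes the figures easy to consume programmatically. A small sketch:

```python
def parse_stress_summary(header, row):
    """Pair the stress tool's CSV header with a data row, e.g.
    total,interval_op_rate,interval_key_rate,avg_latency,elapsed_time."""
    return dict(zip(header.split(","), (float(v) for v in row.split(","))))


# The header and row printed by the run shown above:
summary = parse_stress_summary(
    "total,interval_op_rate,interval_key_rate,avg_latency,elapsed_time",
    "10000,1000,1000,0.0201764,3",
)
```

A wrapper like this is handy for collecting results across repeated runs into one report.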
Running the Yahoo! Cloud Serving Benchmark

The Yahoo! Cloud Serving Benchmark (YCSB) provides a basis of comparison between NoSQL systems. It works by generating random workloads with varying portions of insert, get, delete, and other operations. It then uses multiple threads to execute these operations. This recipe shows how to build and run the YCSB.

More information on the YCSB can be found here:
http://research.yahoo.com/Web_Information_Management/YCSB
https://github.com/brianfrankcooper/YCSB/wiki/
https://github.com/joaquincasares/YCSB

How to do it...

1. Use the git tool to obtain the source code:

$ git clone git://github.com/brianfrankcooper/YCSB.git

2. Build the code using ant:

$ cd YCSB/
$ ant

3. Copy the JAR files from your <cassandra_home>/lib directory to the YCSB classpath:

$ cp $HOME/apache-cassandra-0.7.0-rc3-1/lib/*.jar db/cassandra-0.7/lib/
$ ant dbcompile-cassandra-0.7

4. Use the Cassandra CLI to create the required keyspace and column family:

[default@unknown] create keyspace usertable with replication_factor=3;
[default@unknown] use usertable;
[default@unknown] create column family data;

5. Create a small shell script, run.sh, to launch the test with different parameters:

CP=build/ycsb.jar
for i in db/cassandra-0.7/lib/*.jar ; do
  CP=$CP:${i}
done
java -cp $CP com.yahoo.ycsb.Client -t -db com.yahoo.ycsb.db.CassandraClient7 -P workloads/workloadb -p recordcount=10 -p hosts=127.0.0.1,127.0.0.2 -p operationcount=10 -s

6. Run the script and pipe the output to the more command to control pagination:

$ sh run.sh | more
YCSB Client 0.1
Command line: -t -db com.yahoo.ycsb.db.CassandraClient7 -P workloads/workloadb -p recordcount=10 -p hosts=127.0.0.1,127.0.0.2 -p operationcount=10 -s
Loading workload...
Starting test.
data
0 sec: 0 operations;
0 sec: 10 operations; 64.52 current ops/sec; [UPDATE AverageLatency(ms)=30] [READ AverageLatency(ms)=3]
[OVERALL], RunTime(ms), 152.0
[OVERALL], Throughput(ops/sec), 65.78947368421052
[UPDATE], Operations, 1
[UPDATE], AverageLatency(ms), 30.0
[UPDATE], MinLatency(ms), 30
[UPDATE], MaxLatency(ms), 30
[UPDATE], 95thPercentileLatency(ms), 30
[UPDATE], 99thPercentileLatency(ms), 30
[UPDATE], Return=0, 1

How it works...

YCSB has many configuration knobs. An important configuration option is -P, which chooses the workload. The workload describes the portions of read, write, and update percentages. The -p option overrides options from the workload file. YCSB is designed to test performance as the number of nodes grows and shrinks, or scales out.

There's more...

Cassandra has historically been one of the strongest performers in the YCSB.
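The [OVERALL] throughput reported above is just the operation count divided by the run time. A quick sanity check in Python:

```python
def throughput_ops_per_sec(operations, runtime_ms):
    # YCSB's [OVERALL] Throughput(ops/sec) is operations / runtime,
    # with the runtime converted from milliseconds to seconds.
    return operations / (runtime_ms / 1000.0)


# 10 operations over the 152.0 ms run shown above:
tp = throughput_ops_per_sec(10, 152.0)
```

This matches the 65.789... ops/sec figure in the sample output, which is a useful cross-check when aggregating results from many runs.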
Packt
20 Jun 2011
7 min read

Facelets Templating in JSF 2.0

One advantage that Facelets has over JSP is its templating mechanism. Templates allow us to specify page layout in one place; we can then have template clients that use the layout defined in the template. Since most web applications have a consistent layout across pages, using templates makes our applications much more maintainable, since changes to the layout need to be made in a single place. If at one point we need to change the layout for our pages (add a footer, or move a column from the left side of the page to the right side, for example), we only need to change the template, and the change is reflected in all template clients.

NetBeans provides very good support for Facelets templating. It provides several templates "out of the box", using common web page layouts. We can then select one of several predefined templates to use as a base for our template, or simply use it "out of the box". NetBeans gives us the option of using HTML tables or CSS for layout. For most modern web applications, CSS is the preferred approach. For our example, we will pick a layout containing a header area, a single left column, and a main area. After clicking on Finish, NetBeans automatically generates our template, along with the necessary CSS files.
The automatically generated template looks like this (the xmlns declarations on the <html> tag are restored here; they were lost in the original listing):

<?xml version='1.0' encoding='UTF-8' ?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:ui="http://java.sun.com/jsf/facelets"
      xmlns:h="http://java.sun.com/jsf/html">
  <h:head>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
    <link href="./resources/css/default.css" rel="stylesheet" type="text/css" />
    <link href="./resources/css/cssLayout.css" rel="stylesheet" type="text/css" />
    <title>Facelets Template</title>
  </h:head>
  <h:body>
    <div id="top" class="top">
      <ui:insert name="top">Top</ui:insert>
    </div>
    <div>
      <div id="left">
        <ui:insert name="left">Left</ui:insert>
      </div>
      <div id="content" class="left_content">
        <ui:insert name="content">Content</ui:insert>
      </div>
    </div>
  </h:body>
</html>

As we can see, the template doesn't look much different from a regular Facelets file.

Adding a Facelets template to our project

We can add a Facelets template to our project simply by clicking on File | New File, then selecting the JavaServer Faces category and the Facelets Template file type. Notice that the template uses the http://java.sun.com/jsf/facelets namespace. This namespace allows us to use the <ui:insert> tag; the contents of this tag will be replaced by the content in a corresponding <ui:define> tag in template clients.

Using the template

To use our template, we simply need to create a Facelets template client, which can be done by clicking on File | New File, selecting the JavaServer Faces category and the Facelets Template Client file type. After clicking on Next >, we need to enter a file name (or accept the default) and select the template that we will use for our template client. After clicking on Finish, our template client is created.
<?xml version='1.0' encoding='UTF-8' ?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:ui="http://java.sun.com/jsf/facelets">
  <body>
    <ui:composition template="./template.xhtml">
      <ui:define name="top">
        top
      </ui:define>
      <ui:define name="left">
        left
      </ui:define>
      <ui:define name="content">
        content
      </ui:define>
    </ui:composition>
  </body>
</html>

As we can see, the template client also uses the http://java.sun.com/jsf/facelets namespace. In a template client, the <ui:composition> tag must be the parent tag of any other tag belonging to this namespace. Any markup outside this tag will not be rendered; the template markup will be rendered instead. The <ui:define> tag is used to insert markup into a corresponding <ui:insert> tag in the template. The value of the name attribute in <ui:define> must match that of the corresponding <ui:insert> tag in the template.

After deploying our application, we can see templating in action by pointing the browser to our template client URL. Notice that NetBeans generated a template that allows us to create a fairly elegant page with very little effort on our part. Of course, we should replace the markup in the <ui:define> tags to suit our needs.
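The <ui:insert>/<ui:define> mechanics can be modeled in a few lines: the template supplies named insertion points with defaults, and the client's definitions override them by name, with untouched points keeping their defaults. A Python sketch of that merge (illustrative only):

```python
def render(template_slots, client_defines):
    """Merge a template's named <ui:insert> points (with default content)
    with a client's <ui:define> overrides, matching on the name attribute."""
    return {name: client_defines.get(name, default)
            for name, default in template_slots.items()}


# The three insertion points from the generated template:
template = {"top": "Top", "left": "Left", "content": "Content"}

# A client that overrides two of them and leaves "left" to the default:
page = render(template, {"top": "Welcome to our Site",
                         "content": "Main text"})
```

This is why markup outside <ui:composition> is ignored: only the named definitions participate in the merge.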
Here is a modified version of our template client, adding markup to be rendered in the corresponding places in the template:

<?xml version='1.0' encoding='UTF-8' ?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:ui="http://java.sun.com/jsf/facelets"
      xmlns:h="http://java.sun.com/jsf/html">
  <body>
    <ui:composition template="./template.xhtml">
      <ui:define name="top">
        <h2>Welcome to our Site</h2>
      </ui:define>
      <ui:define name="left">
        <h3>Links</h3>
        <ul>
          <li>
            <h:outputLink value="http://www.packtpub.com">
              <h:outputText value="Packt Publishing"/>
            </h:outputLink>
          </li>
          <li>
            <h:outputLink value="http://www.ensode.net">
              <h:outputText value="Ensode.net"/>
            </h:outputLink>
          </li>
          <li>
            <h:outputLink value="http://www.ensode.com">
              <h:outputText value="Ensode Technology, LLC"/>
            </h:outputLink>
          </li>
          <li>
            <h:outputLink value="http://www.netbeans.org">
              <h:outputText value="NetBeans.org"/>
            </h:outputLink>
          </li>
          <li>
            <h:outputLink value="http://www.glassfish.org">
              <h:outputText value="GlassFish.org"/>
            </h:outputLink>
          </li>
          <li>
            <h:outputLink value="http://www.oracle.com/technetwork/java/javaee/overview/index.html">
              <h:outputText value="Java EE 6"/>
            </h:outputLink>
          </li>
          <li>
            <h:outputLink value="http://www.oracle.com/technetwork/java/index.html">
              <h:outputText value="Java"/>
            </h:outputLink>
          </li>
        </ul>
      </ui:define>
      <ui:define name="content">
        <p>
          In this main area we would put our main text, images, forms, etc. In this
          example we will simply use the typical filler text that web designers love to use.
        </p>
        <p>
          Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nunc venenatis, diam nec
          tempor dapibus, lacus erat vehicula mauris, id lacinia nisi arcu vitae purus. Nam
          vestibulum nisi non lacus luctus vel ornare nibh pharetra. Aenean non lorem lectus,
          eu tempus lectus. Cras mattis nibh a mi pharetra ultricies. In consectetur, tellus
          sit amet pretium facilisis, enim ipsum consectetur magna, a mattis ligula massa vel
          mi. Maecenas id arcu a erat pellentesque vestibulum at vitae nulla. Nullam eleifend
          sodales tincidunt. Donec viverra libero non erat porta sit amet convallis enim
          commodo. Cras eu libero elit, ac aliquam ligula. Quisque a elit nec ligula dapibus
          porta sit amet a nulla. Nulla vitae molestie ligula. Aliquam interdum, velit at
          tincidunt ultrices, sapien mauris sodales mi, vel rutrum turpis neque id ligula.
          Donec dictum condimentum arcu ut convallis. Maecenas blandit, ante eget tempor
          sollicitudin, ligula eros venenatis justo, sed ullamcorper dui leo id nunc.
          Suspendisse potenti. Ut vel mauris sem. Duis lacinia eros laoreet diam cursus nec
          hendrerit tellus pellentesque.
        </p>
      </ui:define>
    </ui:composition>
  </body>
</html>

After making the above changes, our template client now renders as follows. As we can see, creating Facelets templates and template clients with NetBeans is a breeze.
Python Testing: Installing the Robot Framework

Packt
20 Jun 2011
2 min read
How to do it...

1. Be sure to activate your virtualenv sandbox.
2. Install Robot Framework by typing: easy_install robotframework.
3. Using any type of window navigator, go to <virtualenv root>/build/robotframework/doc/quickstart and open quickstart.html with your favorite browser. This is not only a guide but also a runnable test suite.
4. Switch to your virtualenv's build directory for Robot Framework: cd <virtualenv root>/build/robotframework/doc/quickstart.
5. Run the Quick Start manual through pybot to verify the installation: pybot quickstart.html.
6. Inspect the report.html, log.html, and output.xml files generated by the test run.
7. Install the Robot Framework Selenium library, to allow integration with Selenium, by first downloading it from http://robotframework.org/SeleniumLibrary/.
8. Unpack the tarball.
9. Switch to the directory: cd robotframework-seleniumlibrary-2.5.
10. Install the package: python setup.py install.
11. Switch to the demo directory: cd demo.
12. Start up the demo web app: python rundemo.py demoapp start.
13. Start up the Selenium server: python rundemo.py selenium start.
14. Run the demo tests: pybot login_tests.
15. Shut down the demo web app: python rundemo.py demoapp stop.
16. Shut down the Selenium server: python rundemo.py selenium stop.
17. Inspect the report.html, log.html, output.xml, and selenium_log.txt files generated by the test run.

Summary

With this recipe, we have installed the Robot Framework and one third-party library that integrates Robot with Selenium.

Further resources on this subject:

- Inheritance in Python
- Python Testing: Mock Objects
- Python: Unit Testing with Doctest
- Tips & Tricks on MySQL for Python
- Testing Tools and Techniques in Python
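The recipe runs the demo's prebuilt login_tests suite. To give a sense of what such a suite contains, here is an illustrative Robot Framework test case in the plain-text test data format, driving a browser through SeleniumLibrary keywords. The file name, element locators, URL, and credentials below are assumptions for illustration only, not the actual contents of the demo suite:

```
*** Settings ***
Library         SeleniumLibrary

*** Test Cases ***
Valid Login
    # Open the demo app in a browser (URL and locators are illustrative)
    Open Browser        http://localhost:7272/    firefox
    Input Text          username_field    demo
    Input Text          password_field    mode
    Click Button        login_button
    # Verify we landed on the post-login page
    Title Should Be     Welcome Page
    [Teardown]    Close Browser
```

Each test case is a sequence of keywords: built-in ones, keywords contributed by libraries such as SeleniumLibrary, or user-defined keywords composed from either. The report.html and log.html files generated by pybot show a pass/fail breakdown at the suite, test, and keyword level.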