
How-To Tutorials - Programming

Configuring the ChildBrowser plugin

Packt
21 Feb 2013
4 min read

Getting ready

Download the entire community PhoneGap plugin repository from https://github.com/phonegap/phonegap-plugins. This provides nearly all the content necessary to use the plugins.

How to do it...

We're going to split this up by platform, as the steps and environments are quite different.

Plugin configuration for iOS

Let's look first at the steps necessary for installing the ChildBrowser plugin:

1. Open the collection of plugins you downloaded and navigate to iOS/ChildBrowser.
2. Drag ChildBrowser.bundle, ChildBrowserCommand.h, ChildBrowserCommand.m, ChildBrowserViewController.h, ChildBrowserViewController.m, and ChildBrowserViewController.xib into Xcode, into the Socializer/Plugins folder.
3. At the prompt, make sure to copy the files (instead of linking to them).
4. Copy the ChildBrowser.js file to your www/plugins/iOS directory. You can do this in Xcode or in Finder.
5. Add the plugin to Cordova.plist in Socializer/Supporting Files in Xcode: find the Plugins row and add a new entry with the key ChildBrowserCommand, the type String, and the value ChildBrowserCommand.

There, that wasn't too bad, right? The final step is to update our www/index.html file to include this plugin for our app. Add the following line after the line that loads the cordova-2.2.0-ios.js script:

<script type="application/javascript" charset="utf-8" src="./plugins/iOS/ChildBrowser.js"></script>

Plugin configuration for Android

For Android, we'll be using the same plugin, located in the repository you should have already downloaded from GitHub (although it sits under another directory). Let's start by installing and configuring ChildBrowser using the following steps:

1. Create a new package (File | New | Package) under your project's src folder. Name it com.phonegap.plugins.childBrowser.
2. Navigate to Android/ChildBrowser/src/com/phonegap/plugins/childBrowser and drag the ChildBrowser.java file to the newly created package in Eclipse.
3. Go to the res/xml folder in your project and open the config.xml file with the text editor (usually by right-clicking on the file and selecting Open With | Text Editor).
4. Add the following line at the bottom of the file, just above the </plugins> closing tag:

<plugin name="ChildBrowser" value="com.phonegap.plugins.childBrowser.ChildBrowser"/>

5. Navigate to the Android/ChildBrowser/www folder in the repository.
6. Copy childbrowser.js to assets/www/plugins/Android.
7. Copy the childbrowser folder to assets/www. (Copy the folder, not its contents; you should end up with assets/www/childbrowser when done.)
8. Update our www/index_Android.html file by adding the following line just below the portion that loads the cordova-2.0.0-android.js file:

<script type="application/javascript" charset="utf-8" src="./plugins/Android/childbrowser.js"></script>

That's it. Our plugin is correctly installed and configured for Android.

There's more...

There is one important detail to pay attention to: the plugin's readme file, if available. This file will often indicate the necessary installation steps and any quirks that you might need to watch out for. The proper use of the plugin is also usually detailed there.
Unfortunately, some plugins don't come with instructions; in that case, the best thing to do is to try installing the plugin in the normal fashion (as we've done earlier for the ChildBrowser plugin) and see if it works.

The other thing to remember is that PhoneGap is an ongoing project. This means that some plugins are out of date (indeed, some had to be updated by the author for this book) and won't work correctly with the most recent versions of PhoneGap. You'll need to check each plugin so that you know which PhoneGap version it supports and whether it needs to be modified to work with a newer version. Modifications usually aren't terribly difficult, but they do involve getting into the native code, so you may wish to ask the community (at http://groups.google.com/group/phonegap) for help with the modification.

Summary

In this article we saw the detailed installation and configuration of the ChildBrowser plugin, covering the process separately for the iOS and Android platforms. Finally, we saw how to apply this knowledge to install other PhoneGap plugins and where to go in case we need any help.

Getting started with Kinect for Windows SDK Programming

Packt
20 Feb 2013
12 min read

System requirements for the Kinect for Windows SDK

While developing applications for any device using an SDK, compatibility plays a pivotal role. It is really important that your development environment fulfills the following set of requirements before you start working with the Kinect for Windows SDK.

Downloading the example code

You can download the example code files for all Packt books you have purchased from your account at http://www.PacktPub.com. If you purchased this book elsewhere, you can visit http://www.PacktPub.com/support and register to have the files e-mailed directly to you.

Supported operating systems

The Kinect for Windows SDK, as its name suggests, runs only on the Windows operating system. The following are the supported operating systems for development:

Windows 7
Windows Embedded 7
Windows 8

The Kinect for Windows sensor will also work on Windows operating systems running in a virtual machine such as Microsoft Hyper-V, VMware, and Parallels.

System configuration

The hardware requirements are not as stringent as the software requirements; the SDK runs on most of the hardware available in the market. The following is the minimum configuration required for development with Kinect for Windows:

A 32-bit (x86) or 64-bit (x64) processor
Dual-core 2.66 GHz or faster processor
Dedicated USB 2.0 bus
2 GB RAM

The Kinect sensor

It goes without saying that you need a Kinect sensor for your development. You can use either the Kinect for Windows or the Kinect for Xbox sensor. Before choosing a sensor, make sure you are clear about the limitations of the Kinect for Xbox sensor compared to the Kinect for Windows sensor, in terms of features, API support, and licensing mechanisms.

The Kinect for Windows sensor

By now, you are already familiar with the Kinect for Windows sensor and its different components. The Kinect for Windows sensor comes with an external power supply, which supplies the additional power, and a USB adapter to connect with the system. For the latest updates and availability of the Kinect for Windows sensor, refer to http://www.microsoft.com/en-us/kinectforwindows/site.

The Kinect for Xbox sensor

If you already have a Kinect sensor with your Xbox gaming console, you may use it for development. Similar to the Kinect for Windows sensor, you will require a separate power supply for the device so that it can power up the motor, camera, IR sensor, and so on. If you bought a Kinect sensor bundled with an Xbox, you will need to buy the adapter/power supply separately; you can find the external power supply adapter at http://www.microsoftstore.com. If you bought the standalone Kinect for Xbox sensor, it ships with everything required for a connection with a PC, including the external power cable.

Development tools and software

The following software is required for development with the Kinect SDK:

Microsoft Visual Studio 2010 Express or a higher edition of Visual Studio
Microsoft .NET Framework 4.0 or higher
Kinect for Windows SDK

The Kinect for Windows SDK uses the underlying speech capability of the Windows operating system to interact with the Kinect audio system. This requires the Microsoft Speech Platform Server Runtime, the Microsoft Speech Platform SDK, and a language pack to be installed on the system; these are installed along with the Kinect for Windows SDK. The system requirements for the SDK may change with upcoming releases.
Refer to http://www.microsoft.com/en-us/kinectforwindows/ for the latest system requirements.

Evaluation of the Kinect for Windows SDK

Though the Kinect for Xbox sensor has been in the market for quite some time, the Kinect for Windows SDK is still fairly new to the developer paradigm, and it's evolving. This book is written against Kinect for Windows SDK v1.6. The Kinect for Windows SDK was first launched as a Beta 1 version in June 2011, and after a thunderous response from the developer community, the updated Beta 2 version was launched in November 2011. Both Beta versions were non-commercial releases meant only for hobbyists. The first commercial version of the Kinect for Windows SDK (v1.0) was launched in February 2012, along with a separate commercial hardware device. SDK v1.5 was released in May 2012 with a bunch of new features, and the current version of the Kinect for Windows SDK (v1.6) was launched in October 2012. The hardware hasn't changed since its first release. The sensor was initially limited to only 12 countries across the globe; the new Kinect for Windows sensor is now available in more than 40 countries. The current version of the SDK also supports speech recognition for multiple languages.

Downloading the SDK and the Developer Toolkit

The Kinect SDK and the Developer Toolkit are available for free and can be downloaded from http://www.microsoft.com/en-us/kinectforwindows/. The installer will automatically install the 64- or 32-bit version of the SDK depending on your operating system. The Kinect for Windows Developer Toolkit is an additional installer that includes samples, tools, and other development extensions.

The main reason for keeping the SDK and the Developer Toolkit in two different installers is so that the Developer Toolkit can be updated independently of the SDK. This helps keep the toolkit and samples updated and distributed to the community without changing or updating the actual SDK version. The version of the Kinect for Windows SDK and that of the Kinect for Windows Developer Toolkit might not be the same.

Installing Kinect for Windows SDK

Before running the installation, make sure of the following:

You have uninstalled all previous versions of the Kinect for Windows SDK
The Kinect sensor is not plugged into a USB port on the computer
There are no Visual Studio instances currently running

Start the installer, which will display the End User License Agreement. You need to read and accept this agreement to proceed with the installation. Accept the agreement by selecting the checkbox and clicking on the Install option, which will do the rest of the job automatically. Before the installation, your computer may show the User Account Control (UAC) dialog to confirm that you are authorizing the installer to make changes to your computer. Once the installation is over, you will be notified, along with an option for installing the Developer Toolkit.

Is it mandatory to uninstall the previous version of the SDK before installing the new one? The upgrade will happen without any hassle if your current version is a non-Beta version. As a standard procedure, if your current version is a Beta version, it is always recommended to uninstall the older SDK prior to installing the newer one.
Installing the Developer Toolkit

If you didn't download the Developer Toolkit installer earlier, you can click on the Download the Developer Toolkit option in the SDK setup wizard; this will first download and then install the Developer Toolkit setup. If you have already downloaded the setup, you can close the current window and execute the standalone Toolkit installer. The installation process for the Developer Toolkit is similar to that of the SDK installer.

Components installed by the SDK and the Developer Toolkit

The Kinect for Windows SDK and the Kinect for Windows Developer Toolkit install the drivers, assemblies, samples, and documentation. To check which components are installed, you can navigate to the Programs and Features section of Control Panel and search for Kinect. The default location for the SDK and Toolkit installation is %ProgramFiles%\Microsoft SDKs\Kinect.

Kinect management service

The Kinect for Windows SDK also installs the Kinect Management service, a Windows service that runs in the background while your PC communicates with the device. This service is responsible for the following tasks:

Listening to the Kinect device for any status changes
Interacting with the COM Server for any native support
Managing the Kinect audio components by interacting with the Windows audio drivers

You can view this service by launching Services from Control Panel | Administrative Tools, or by typing Services.msc in the Run command.

Is it necessary to install the Kinect SDK on end users' systems? The answer is no. When you install the Kinect for Windows SDK, it creates a Redist directory containing an installer that is designed to be deployed with Kinect applications and that installs the runtime and drivers. After the SDK is installed, you can find the setup file at the following path:

%ProgramFiles%\Microsoft SDKs\Kinect\v1.6\Redist\KinectRuntime-v1.6-Setup.exe

This can be included with your application deployment package, and it will install only the runtime and the necessary drivers.

Connecting the sensor with the system

Now that we have installed the SDK, we can plug the Kinect device into the PC. The very first time you plug the device into your system, you will notice the LED indicator of the Kinect sensor turning solid red, and the system will start installing the drivers automatically. The default location of the drivers is %ProgramFiles%\Microsoft Kinect Drivers\Drivers. The drivers are loaded only after the installation of the SDK is complete, and this is a one-time job. This process also checks for the latest Windows updates to the USB drivers, so it is good to be connected to the Internet if you don't have the latest Windows updates. When the drivers have finished loading properly, the LED light on your Kinect sensor will turn solid green. This indicates that the device is functioning properly and can communicate with the PC.

Verifying the installed drivers

This is typically a troubleshooting procedure in case you encounter any problems; the verification procedure will also help you understand how the device drivers are installed within your system. In order to verify that the drivers are installed correctly, open Control Panel, select Device Manager, and look for the Kinect for Windows node.
You will find the Kinect for Windows Device option listed under that node.

Not able to view all the device components

At some point it may happen that you can see only the Kinect for Windows Device node. At first glance it looks as if the device is ready, but a careful examination reveals a small hitch. The Kinect device LED is on and Device Manager has detected the device, which is absolutely fine, but we are still missing something. The device is connected to the PC using the USB port, and the system prompt shows the device installed successfully, so where is the problem? The default USB port alone doesn't have the power capabilities required by the camera, sensor, and motor. If you plug in the external power supply and turn the power on, you will find all the driver nodes in Device Manager loaded automatically. This is one of the most common mistakes made by developers. While working with the Kinect SDK, make sure your Kinect device is connected to the computer using the USB port and that the external power adapter is plugged in and turned on.

With the aid of the external power supply, the system will start searching for Windows updates for the USB components. Once everything is installed properly, the corresponding components are ready to be used and are also reflected in Device Manager. The messages prompting for the loading of drivers, and the installation prompts displayed while the drivers load, may vary depending on the operating system you are using. You might also not receive any of them if the drivers are being loaded in the background.

Detecting the loaded drivers in Device Manager

Navigate to Control Panel | Device Manager and look for the Kinect for Windows node; you will find the list of components detected. The Kinect for Windows Audio Array Control option indicates the driver for the Kinect audio system, whereas the Kinect for Windows Camera option controls the camera sensor. The Kinect for Windows Security Control option is used to check whether the device being used is a genuine Microsoft Kinect for Windows device or not. In addition to appearing under the Kinect for Windows node, the Kinect for Windows USB Audio option should also appear under the Sound, Video and Game Controllers node. Once the Kinect sensor is connected, you can identify the Kinect microphone, like any other microphone connected to your PC, in the Audio Device Manager section.
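
A quick way to confirm that the runtime can actually see the sensor is to enumerate it from code. The following is a minimal sketch, not taken from the book, assuming a C# console project that references the Microsoft.Kinect assembly installed with SDK v1.6; KinectSensor, KinectStatus, and the KinectSensors collection are the standard types from that assembly.

using System;
using Microsoft.Kinect;

class SensorCheck
{
    static void Main()
    {
        // Enumerate every Kinect device the runtime can see.
        if (KinectSensor.KinectSensors.Count == 0)
        {
            Console.WriteLine("No Kinect sensor detected. Check the USB cable and the external power supply.");
            return;
        }

        foreach (KinectSensor sensor in KinectSensor.KinectSensors)
        {
            // Status is Connected only when both USB and external power are available;
            // NotPowered usually means the external adapter is missing or switched off.
            Console.WriteLine("Sensor {0}: {1}", sensor.UniqueKinectId, sensor.Status);

            if (sensor.Status == KinectStatus.Connected)
            {
                sensor.Start();    // LED stays green; the device is ready to serve data streams
                Console.WriteLine("Sensor started successfully.");
                sensor.Stop();
            }
        }
    }
}

If this reports a status other than Connected, revisit the power and driver checks described above before moving on to application code.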

Knowing the prebuilt marketing, sales, and service organizations

Packt
07 Feb 2013
20 min read

Customizations incur new costs (of development, training, maintenance, and change management) and are typically sponsored to support the company's unique capabilities in both people and processes—capabilities that sustain its differentiation from competitors in the market. When a company is starting or transitioning an information system for its CRM, it gets enormous value from simply adopting the information system that is already available in the CRM On Demand product, built on industry-standard business process models of CRM. Going live out of the box, that is, without any customization, is effective for new companies and new organizations. When an enterprise has established its place in the market with custom-tailored CRM processes that may not map exactly to industry standards but work well for it as an organization, a customized CRM On Demand should be the order of the day. Standard enterprise technology management, such as listing the business drivers, defining the business objectives, mapping the business processes, capturing master data, identifying the transactional data to be captured, and the overarching change management towards user adoption of the new system, is independent of whether you go live with a customized CRM On Demand or go live straight out of the box.

The objective of this article is to provide you with the complete list of activities to be performed to go live with CRM On Demand without any customization of the product. For example, assume your company is a global logistics business with sales, marketing, and support teams operating in many countries; it has bought as many CRM On Demand user licenses as there are staff in the sales, marketing, and service teams, and it intends to standardize its customer relationship management system across the board. The company management has opted to go live with CRM On Demand without any customization. With an additional user license for you to administer the new system, you have the responsibility of deploying the system to the users. Here, we will explore in detail the activities that a CRM On Demand administrator would perform to deploy CRM On Demand out of the box to the intended users across the countries.

We have grouped the activities into three sequential steps, each of which represents a reliable status of the deployment of the system:

The first step is to familiarize yourself with the prebuilt content in CRM On Demand for the marketing, sales, and service organizations.
The second step is setting your company-level parameters, which includes creating the login IDs, territories, and company content, such as the product catalog and the sales forecast reports.
The third and last step is issuing the login IDs to the users and sustaining their adoption of the new system.

By the end of this article, you will be able to do the following:

Understand the business functionality of the vanilla CRM On Demand application
Establish the primary settings in the CRM On Demand application to implement it in your company
Create login IDs for the users of the application in your company

The preceding information and skills will help you deploy CRM On Demand out of the box in a structured way.
Lead

A lead is any addressable person who is in a potentially opportunistic position with a prospective or existing customer (account or contact) of yours, and with whom you can interact to develop an opportunity for that prospective or existing customer. The sales process might originate with lead generation. Leads move progressively through qualification to conversion. You can convert leads to contacts, accounts, deal registrations, and opportunities. After a lead has been converted to an opportunity, it enters the sales process. Certain fields in the opportunity obtain their values from the lead record, based on the mapping of leads converted during the sales process. The list of preconfigured fields in the lead object can be found in the CRM On Demand help text reference at http://docs.oracle.com/cd/E27437_01/books/OnDemOLH/index.htm?toc.htm?LeadEditHelp.html.

Function — Sales

The various sales functions are explored in the upcoming sections.

Account

Use the Account Edit page to create, update, and track accounts. Accounts are generally organizations that you do business with, but you can also track partners, competitors, affiliates, and so on as accounts. If account records are central to how your company manages its business, as is the case in many companies, enter as much information about accounts as you can. Some of that information, such as the Region or Industry field, can be used in reports as a way to categorize information. Similarly, if you link a record, such as an opportunity, to an account record with the Region or Industry field filled in, those opportunities can be grouped by region. A list of preconfigured fields in the account object can be found in the CRM On Demand help text reference at http://docs.oracle.com/cd/E27437_01/books/OnDemOLH/index.htm?toc.htm?AccountEditHelp.html. The Account Name and Location fields together uniquely identify an account record in the system, meaning there cannot be two accounts in the system with the same Account Name and Location values.

Opportunity

An opportunity at a sales stage that carries a probability of, say, 60 percent implies that the opportunity with that customer has a 60 percent probability of reaching Closed/Won by the expected close date for the given revenue. Different sales processes may be defined for different types of opportunities. Multiple sales processes can be normalized using sales categories, to enable forecasting at a global level. An opportunity can be associated with only a single sales process.

Revenues

You can link products or services (drawn from your product catalog) to opportunities in order to do the following tasks:

Track which products belong to the opportunity
Calculate opportunity revenue based on product revenue
Base your company's forecasts on product revenue or product quantities

If the product represents recurring revenue, you can input the Frequency and # of Periods information. For usability, you can link a product to an opportunity when you create the opportunity, in an unbroken sequential step, or alternatively at a later time. To calculate the opportunity revenue based on the linked product revenue, follow these steps:

On the Opportunity Detail page, click the Update Opportunity Totals button available in the Opportunity Product Revenue section. This totals the product revenue for each linked product and displays it in the Revenue and Expected Revenue fields for the opportunity.
The calculation behind this functionality differs depending on whether the Product Probability Averaging Enabled option is enabled on the company profile. The company forecasting method determines which fields you must select when linking products to your opportunities. If your company forecasts revenue based on opportunities rather than products, do not select the Forecast checkbox on the Opportunity Product Revenue record. If your company forecasts revenue based on product revenue, and you want to include this product revenue record as part of your forecasted revenue totals, your forecasted quantities, or both, select the Forecast checkbox. Make sure that the date in the Start/Close Date field falls within the forecast period, and that the record is owned by a forecast participant. If a product is not sold, you can update the associated Start/Close Date and clear the Forecast checkbox on the Product Revenue page for that product to prevent the revenue for the product from being added to your company's forecasts. Alternatively, if one of several products linked to the opportunity is on hold, you can remove the product from the opportunity and create another opportunity for that product to prevent its revenue from being included in the forecast. A list of preconfigured fields in the opportunity revenue line object can be found in the CRM On Demand help text reference at http://docs.oracle.com/cd/E27437_01/books/OnDemOLH/index.htm?toc.htm?opptyproducthelp.html.

Assets

When you want to track a product that you have sold to a customer or company, link the product record to the account as an asset. If you enter a value in the Notify Date field on the asset record, a task is created when you save the asset record. The task appears as "Asset Name requires follow-up" on My Homepage, the Account Homepage, and the Calendar. A list of preconfigured fields in the asset object can be found in the CRM On Demand help text reference at http://docs.oracle.com/cd/E27437_01/books/OnDemOLH/index.htm?toc.htm?acctassethelp.html.

Sales forecasts

Use the Forecast page to review, adjust, and submit forecasts. A forecast is a saved snapshot of expected revenues over time. CRM On Demand calculates forecasts for each quarter and breaks down that information by fiscal month. Forecasts in CRM On Demand automate a process that is often manual and sometimes inaccurate. Forecasts help companies to develop sales strategies. They also help companies to identify future business needs by giving managers accurate and up-to-date information about expected sales and quarterly progress toward sales targets. Individual sales representatives do not have to compile statistics; instead, they decide when to include a record in their forecasts, and the remainder of the process is automatic.

Function — Service

The various service functions are explored in the upcoming sections.

Service requests

Use the Service Request Edit page to record, track, and address customer requests for information or assistance. A service request holds all the relevant and detailed information about a particular service activity. You can also use the service request to capture additional information, such as solutions or activities required to resolve the service request. Service representatives can access all the relevant information about service requests in one location. To ensure that a service request record captures all the service activity, changes to records can be tracked through an audit trail.
A list of preconfigured fields in the service request object can be found in the CRM On Demand help text reference at http://docs.oracle.com/cd/E27437_01/books/OnDemOLH/index.htm?toc.htm?SerReqEditHelp.html.

Solutions

Use the Solution Edit page to create, update, and track solutions. Solutions contain information about how to resolve a customer query. By maintaining a knowledge base of solutions, your service representatives have access to a centralized knowledge base to help them resolve customer problems. In addition, the knowledge base expands as users interact with customers and create new solutions. CRM On Demand tracks the usage of solutions and enables users to rate them. This information helps organizations improve the solutions that they provide to customers and identify problems in products or services. Frequently used solutions give the organization indicators of areas where product quality or supporting documents have to be improved. Poor solution ratings might indicate the need to improve solutions. A list of preconfigured fields in the solution object can be found in the CRM On Demand help text reference at http://docs.oracle.com/cd/E27437_01/books/OnDemOLH/index.htm?toc.htm?SolutionEditHelp.html.

Activity

Use the Calendar page to review, create, and update your activities. An activity consists of tasks that you need to accomplish before a certain date and appointments that you want to schedule for a specific time. Tasks and appointments can be meetings, calls, demonstrations, or events. The difference between tasks and appointments is that tasks appear in a task list and have a due date and status, whereas appointments are scheduled on your calendar with a specific date and time. Activities can be associated with most of the standard and custom objects in the CRM On Demand application. A list of preconfigured fields can be found in the CRM On Demand help text reference at http://docs.oracle.com/cd/E27437_01/books/OnDemOLH/index.htm?toc.htm?AppointEditHelp.html.

CRM staff

A user of your company's CRM On Demand gets access to the CRM data based on the accesses assigned to his or her user ID. Every user ID is associated with a user role, which defines all the access rights. A user ID can be associated with only one user role, and there is no limit on the number of user roles that you can define in the system. The user role access levels are broadly captured in the following two types:

Feature access: Features (more commonly known as privileges) refer to the types of managerial/administrative workflows and actions that a user can perform in the CRM system. These actions include accessing all the data in the analytics tables, accessing prebuilt dashboards, accessing prebuilt reports, creating personal reports, creating custom reports, publishing list templates, creating campaigns, lead qualification and conversion, publishing solutions, sharing calendars with others, recovering deleted data, creating assignment rules for automatic assignment of records, accessing the CRM On Demand offline version, integrating CRM On Demand with an e-mail client and PIM, exporting data, importing personal data, and personalizing homepages and detail pages.

Record access: These are the permissions to create/read-all/edit/delete the records in the system, in reference to the user's ownership or absence of ownership of the record.
For example, can the user create campaign records, can the user read all the leads available in the system, can the user delete his or her own activity records, and so on.

The following list describes the prebuilt user roles, with each role's privileges and record access. You will need to map these to the staff roles in your company's CRM organization:

Executive: Has access to all features other than the administration and customization features. Has access to, and the Read All Records privilege for, the most common horizontal record types, such as accounts, contacts, activities, assets, campaigns, leads, opportunities, service requests, solutions, and sales forecasts. Executives can create records of these record types, except solutions.

Advanced User: Has access to create custom reports, create assignment rules, publish list templates, and perform lead evaluation (qualification, archiving, rejection, and conversion). Has access to all non-admin features, such as Access All Data in Analytics. Has access to, and can create records of, the most common horizontal record types, such as accounts, contacts, activities, assets, campaigns, leads, opportunities, service requests, solutions, and sales forecasts, but can only read his or her own records.

Sales and Marketing Manager: Has access to sales- and marketing-related privileges, such as creating assignment rules and publishing list templates. Has access to the most common horizontal record types and can read all the records in the system. The sales and marketing manager has no access to generate sales forecasts (this means that the manager cannot force an automatic submission of the sales forecasts of his or her sales representatives, but has to wait for the sales representatives to submit their forecasts).

Field Sales Rep: Has access to lead evaluation (qualification, archiving, rejection, and conversion). Has access to the most common horizontal record types and can read only those records owned by him or her.

Inside Sales Rep: Has access to lead qualification and archiving. Has access to the most common horizontal record types and can read all the records in the system.

Regional Manager: Has access to lead evaluation, campaign management, and creating assignment rules. Has access to the most common horizontal record types and can read only those records owned by him or her.

Service Manager: Has access to publish solutions, publish list templates, and recover deleted data. Has access to the most common horizontal record types (except sales forecasts) and can read only those records owned by him or her. The service manager can, however, read all the account and contact records in the system.

Service Rep: Has no additional privileges. Has access to the most common horizontal record types and can read only those records owned by himself or herself. The service representative can, however, read all the account and contact records in the system.

Administrator: Has access to all features in the system, and access to modify the accesses of other user roles. Has access to create/read/delete all types of records.

The preceding list shows, for each prebuilt role, the permission to access a record type, to create a record of a specific record type, and whether the user has access to view all the records created in the system.
The permissions on each record (read-only/edit/delete) are defined by the Owner Access Profile and Default Access Profile settings for each role; these profiles are explained as follows:

Owner Access Profile: Defines the permission on a record when the user is the direct owner and/or a derived owner through the manager hierarchy.

Default Access Profile: Defines the permission on a record for a user who is not directly or indirectly the owner of that record, but to whom the record is visible because the Can Read All Records option is selected for the relevant record type in the record-type access settings of the user's role.

To understand the details of the access profiles of a particular user role, you need to know two things. First, the name of the access profile follows the conventions [user role name] Default Access Profile and [user role name] Owner Access Profile. Second, the path to the access profiles is Admin | User Management and Access Controls | Access Profiles.

This completes the first step. If you are a novice to the CRM On Demand service, we hope the preceding pages have given you the confidence and the "view" for step 2.

Step 2 — Setting your company profile

The second step of deploying out of the box involves giving the system the details about your company. The activities that comprise this step are as follows:

Entering the Company Administration data
Creating the login IDs for users
Creating the product catalog
Enabling the sales forecasts

Each of these activities is explored in detail in the following sections.

The Company Administration data

The Company Administration page is the place where you define the company profile and some other global settings. You can access this section by going to Admin | Company Administration.

The Company Profile

The Company Profile page carries the key parameters. Under Company Key Information, ensure that you set the CRM On Demand administrator user as Primary Contact, along with his or her phone number. Don't make the mistake of setting the CEO of the company as Primary Contact; if you do so, Oracle support may end up calling your CEO on support-related matters.

Under Company Settings, most of the default options, such as Language, Currency, and Time Zone, are set by Oracle based on the inputs you provided at the time of the trial run and/or the purchase order. As these are long-term and fairly static settings, you need to select them thoughtfully at the start; you can change them later with a service request to customer support. As a global company with staff distributed across various time zones, you would do well to set up the parameters in a manner that is applicable to most of the users of the system at the company level. Note that CRM On Demand is designed for global deployments, and therefore these company-level defaults can be overridden at the user level using user-level settings.

The In-Line Edit, Message Center, and Heads-Up Display options are meant to enhance user productivity when using the system. In-Line Edit provides a facility to edit the details of a record in the list view or detail view without going into edit mode, and it reduces the amount of data sent from the client browser to the CRM On Demand server.
For example, the Location field under Company Profile can be edited without entering edit mode by clicking the Edit button. Similarly, in the list view, In-Line Edit facilitates a quick edit of the listed records without going to the detailed record page.

Message Center is a mini collaboration tool available to users to share general, system-specific, or record-specific information with other users of the system. On clicking the notes icon in the right-hand corner of the Opportunity Detail page, a notes pop-up appears displaying the notes written by users on that opportunity record. If you opt to subscribe to the notes, any message posted by any user of the system on that opportunity record will be displayed in the Message Center section in the left-hand navigation bar, giving you easy access to all the messages posted by the users.

Heads-Up Display provides quick links to go to a specific related information section of a record without scrolling the browser. On clicking the Contacts link, the user is taken directly to the account's Contacts list applet, which appears at the bottom of the Account Detail page. Clicking on the Top link will take you to the top of the detail page.

The Record Preview mode opens a preview window when a user points to a hyperlink or clicks the preview icon, depending on the setting selected. For example, if an opportunity is associated with an account, the account name is displayed as a hyperlink for easy navigation to the related Account Detail page. Enabling preview lets you view the details of the account from the Opportunity Detail page without navigating to the Account Detail page: on clicking the preview icon on the Opportunity Detail page, the details of the account are displayed in an overlay pop-up.

Global Search Method provides a facility to specify the search option you would like to enable in the system. If you choose the Targeted Search option, the system provides a facility to search the records stored in an object by one or more of the object's configured search fields. For example, with the Targeted Search option set at the company level, the search applet in the left-hand navigation provides a facility to search Contacts by Last Name, First Name, and Email. If you key in more than one field, it performs an AND search. If you would like to search by a different set of fields, you can either use the Advanced Search facility or customize the Search panel for all users. If, on the other hand, you have selected Keyword Search, the search applet in the left-hand navigation provides a single blank field where you can key in any text to do a wildcard search against a set of preconfigured fields. Unlike Targeted Search, the system here uses an OR condition when there is more than one preconfigured field. For example, when you key in armexplc as input to the search field, it returns all contacts with an e-mail domain ending in armexplc-od.com. The prebuilt Search panel has a relevant set of fields for each object.
A complete list of the preconfigured keyword search fields, sorted by object, is available in the online help file of CRM On Demand at http://docs.oracle.com/cd/E27437_01/books/OnDemOLH/index.htm?toc.htm?defaultsearchfieldshelp.html.

Summary

In this way, we can better understand the Company Administration page and work with its various settings as described in this article.

Using Processes in Microsoft Dynamics CRM 2011

Packt
07 Feb 2013
13 min read

Employee Recruitment Management System basics

Hiring the right candidate is a challenge for the recruitment team of any company, and the process of hiring candidates can differ from company to company. Different sources, such as job sites, networking, and consulting firms, can be used to find the right candidate, but most companies prefer to hire a candidate from their own employee network. Before starting the hiring process, a recruiter should have a proper understanding of the candidate profile that fits the company's requirements. Normally, this process starts by screening candidate resumes fetched from different sources. Once they have the resumes of appropriate candidates, the recruitment team starts working on the resumes one by one. Recruiters talk to potential candidates, enquire about their skills, and test their interpersonal skills. Recruiters play an important role in the hiring process; they prepare candidates for interviews and provide interview feedback.

Employee Recruitment Management System design

In the employee recruitment application, we will use the following key objects to capture the required information:

Company: This block stores the company details
Candidate: This block stores information about the candidate profile
Employee: This block stores employee data
CRM User: This block stores Microsoft CRM user information

As we are going to use Microsoft CRM 2011 as the platform to build our application, let's map these key blocks to Microsoft CRM 2011 entities:

Company: The term "account" in Microsoft CRM represents an organization, so we can map the company object to the account entity and store company information in the account entity.

Candidate: The Candidate object will store information about suitable candidates for our company. We will use the candidate entity to store all interview-related feedback, position details, and address information. We are going to map the candidate entity to the lead entity, because out of the box (OOB) it has most of the fields we need for our candidate entity.

Employee: In the Microsoft CRM 2011 sales process, when a lead is qualified, it is converted to an account, a contact, and an opportunity, so we utilize this process for our application. When a candidate is selected, we will convert the candidate to an employee using the OOB process, which will map all the candidate information to the Employee entity automatically. When a lead is converted to an account, contact, or opportunity, the lead record is deactivated by Microsoft CRM 2011.

Let's talk about the process flow that we are going to use in our employee recruitment application. Recruiters will start the process of hiring a candidate by importing candidate resumes into Microsoft CRM under the Candidate entity; we will customize our OOB entities to include the required information. Once the data is imported into Microsoft CRM, the recruiter will start screening candidates one by one and will schedule the Technical, Project Manager, and finally HR rounds. Once the candidate is selected, the recruiter will create an offer letter for that candidate, send it to the respective candidate, and convert the Candidate record to an Employee.

Data model

We have identified the data model for the required entities. We need to customize the OOB entities based on the data model tables.
Customizing entities for the Employee Recruitment Management System

Once we have the data model ready, we need to customize the Microsoft CRM UI and the OOB entities. Let's first create our solution, called HR Module, and add the required entities to that solution.

Customizing the Microsoft CRM UI

We need to customize the Microsoft CRM site map. We have the option to modify the site map manually or using a site map editor tool. We need to customize the site map as follows:

1. Remove the Marketing, Service, and Resource Center areas from the left navigation.
2. Rename the Sales area to HR Module and the Settings area to Configuration.
3. Remove the Queues, Articles, and Announcements items under My Work.
4. Remove all left navigation items under the HR Module area except Leads, Accounts, and Contacts.

It is recommended that you comment unwanted navigation areas out of the site map instead of removing them.

Customizing OOB entities

After customizing the Microsoft CRM UI, we need to rename the entities and entity views, and perform the following actions:

Renaming OOB entities: We need to rename the lead, account, and contact entities to candidate, company, and employee. Open the entities in edit mode and rename them.

Changing translation labels: After renaming the OOB entities, we need to change the translation labels in Microsoft CRM, converting Lead to Candidate and Contact to Employee.

Creating/customizing entity fields: We need to create and customize entity fields based on the data model we just saw. Use the following steps to create the candidate entity fields:

1. Open our HRModule solution.
2. Navigate to Entities | Candidate | Fields.
3. Click on New to create a new field.
4. Enter the following field properties:

Display Name: The text that you want to show to the user on the form.
Name: This is populated automatically as you tab out of the Display Name field. The Display Name field is used as a label on Microsoft CRM 2011 entity forms and views, whereas the Name field is used to refer to the field in code.
Requirement Level: Used to enforce data validation on the form.
Searchable: If this is true, the field will be available in the Advanced Find field list.
Field Security: Used to enable field-level security, a new feature in Microsoft CRM 2011. Refer to the Setting field-level security in Microsoft CRM 2011 section for more details.
Auditing: Used to enable auditing for entity fields, also a new feature in Microsoft CRM 2011. Using auditing, we can track entity and attribute data changes for an organization. You can refer to http://msdn.microsoft.com/en-us/library/gg309664.aspx for more details on the auditing feature.
Description: Used to provide additional information about the field.
Type: Represents what type of data we are going to store in this field; based on the type selected, we need to set other properties. You can't change the data type of a created field, but you can change its properties.

We need to create the fields for all entities, one by one, by following the preceding steps.

Setting relationship mapping

In Microsoft CRM 2011, we can relate two entities by creating a relationship between them.
We can create three types of relationships:

One-to-many relationship: A one-to-many relationship is created between one primary entity and many related entities. Microsoft CRM 2011 automatically creates a relationship (lookup) field on the related entity when a one-to-many relationship is created. We can create a self-relationship by selecting the same entity on both sides.
Many-to-one relationship: A many-to-one relationship is created between many related entities and one primary entity.
Many-to-many relationship: A many-to-many relationship can be created between many related entities. To create a many-to-many relationship, the user must have the Append and Append To privileges on the entities on both sides.

We can define different relationship behaviors while creating a relationship; you can refer to http://msdn.microsoft.com/en-us/library/gg309412.aspx for more details. After creating a relationship, we can define a mapping to transfer values from the parent entity to the child entity, but this only takes effect when a child entity record is created from the parent entity using the Add New button on the Associated view.

We need to set up relationship mapping so that we can carry the candidate field values over to the employee entity when the recruiter converts a candidate into an employee. Use the following steps to set the mapping:

1. Navigate to 1:N Relationships under the Candidate entity.
2. Open the contact_originating_lead relationship to edit it.
3. Navigate to Mappings and click on New to add a mapping.
4. Select new_variablecompensation in the Source and Target entities and click on OK.
5. Follow step 4 to add mappings for the remaining fields.

Form design

Now we need to design forms for our entities and remove unnecessary fields from the entity forms. Use the following steps to customize an entity form:

1. Open the solution that we created.
2. Navigate to Entities | Account | Forms.
3. Open the main form to modify it.

We can remove unwanted fields easily by selecting them one by one and using the Remove ribbon button on the entity form. To place a field, we just need to drag and drop it from the field explorer on the right-hand side. Customize the account, candidate, and employee main forms in the same way, removing the unwanted fields and adding the required fields for each.

Setting a security model for the ERMS

Microsoft CRM provides an OOB security model that helps us prevent unauthorized access to our data. We can enforce security in Microsoft CRM using security roles. A security role is a combination of different privileges and access levels.

Privileges: These are actions, such as Create, Write, Delete, Read, Append, Append To, Assign, Share, and Reparent, that a Microsoft CRM user can perform on entities.
The actions are as follows:

Create: Used to create an entity record
Read: Used to read an entity record
Write: Used to modify an entity record
Delete: Used to delete an entity record
Append: Used to relate one entity record to another entity record
Append To: Used to relate other entity records to the current entity record
Share: Used to share an entity record with another user
Reparent: Used to assign a different owner to an entity record

Access level: This defines the entity records on which a Microsoft CRM user can perform the actions defined by privileges. The access levels are as follows:

Organization: Provides access to all records in the organization
Parent-Child Business Unit: Provides access to all records in the user's business unit as well as in all child business units of the user's business unit
Business Unit: Provides access to all records in the user's business unit
User: Allows the user to access records created by him/her, shared with him/her, or shared with his/her team

A user must be assigned at least one security role to access the Microsoft CRM application. Microsoft CRM provides 14 OOB security roles that can be customized based on our requirements. We have identified the following security-role hierarchy for the Employee Recruitment Management System:

HR Manager: This role has access to all information for an employee in the ERMS
Recruiter: This role does not have access to information about the packages offered to an employee
System Administrator: This role has administrative privileges and is responsible for customizing and maintaining the ERMS

We will customize the existing security roles for our ERMS, mapping the Microsoft CRM security roles to the ERMS security roles as follows: Sales Manager becomes Manager, Salesperson becomes Recruiter, and System Administrator remains System Administrator.

Customizing the existing security roles

Use the following steps to customize the existing security roles:

1. Navigate to Settings | Administration | Security Roles.
2. Double-click on the Sales Manager role to open it in edit mode.
3. Change Role Name to Manager.
4. Click on Save and then on Close.
5. Follow the same steps to change the name of the Salesperson role to Recruiter.

You can also create a new Manager security role by copying the Sales Manager role. Once we have changed the security role names, we need to configure the Manager and Recruiter roles to remove unnecessary privileges. Follow these instructions to configure the Manager security role:

1. Navigate to the Core Records tab in the Manager security role.
2. Clear all privileges for the Opportunity and Document Location entities.
3. Navigate to the Marketing tab and clear all privileges for the Campaign and Marketing List entities.
4. Navigate to the Sales tab and clear all privileges for all sales module entities.
5. Navigate to the Service tab and clear all privileges for all service module entities.
6. Click on Save and Close.

Follow the preceding steps to remove the same privileges from the Recruiter role as well.

Setting field-level security in Microsoft CRM 2011

Microsoft CRM 2011 contains an OOB feature for field-level security.
Using field-level security, we can protect Microsoft CRM form fields from unauthorized access. This feature is only available for custom attributes; you can apply field-level security only to the custom fields of system entities. While creating or modifying a field, you can enable field-level security. The following screenshot shows how we can enable/disable the Field Security option:

Once field-level security is enabled, we can set up a field-level security profile. Let's apply field-level security to the offered package section in the Candidate entity. We have already enabled field-level security for the three fields under the offered package section in the Candidate entity. Use the following steps to set up the field-level security profile:

Navigate to Settings | Administration | Field Security Profiles.
Click on New to create a new security profile.
Fill in the following information:
Name: Recruitment Team Profile
Description: Security profile for recruitment team
Click on Save.
Navigate to Users, under the Members section, in the left-hand navigation.
Click on Add to add the users from whom you want to secure these fields.
Navigate to Field Permissions under the Common section in the left-hand navigation.
Select all records and click on the Edit button.
Select No from all the drop-down fields. The result should look like the following screenshot:

Now all the Microsoft CRM users added to the Recruitment Team Profile won't be able to see the values in these fields; they won't even be able to set values for these fields.
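If you prefer to script this configuration, the same profile can also be created through the CRM 2011 SDK. The following is a minimal sketch, not part of the preceding walkthrough; it assumes an authenticated IOrganizationService instance named service, and the entity and field names used here (contact for the Candidate entity, new_variablecompensation for a secured field) are placeholder assumptions for your actual secured fields:

using System;
using Microsoft.Xrm.Sdk;

public static class FieldSecuritySetup
{
    // Creates the "Recruitment Team Profile" and denies read/create/update on one
    // secured field; repeat the field permission block for each secured field.
    public static void CreateRecruitmentProfile(IOrganizationService service)
    {
        var profile = new Entity("fieldsecurityprofile");
        profile["name"] = "Recruitment Team Profile";
        profile["description"] = "Security profile for recruitment team";
        Guid profileId = service.Create(profile);

        var permission = new Entity("fieldpermission");
        permission["fieldsecurityprofileid"] =
            new EntityReference("fieldsecurityprofile", profileId);
        permission["entityname"] = "contact";                             // assumed owning entity
        permission["attributelogicalname"] = "new_variablecompensation";  // assumed secured field
        permission["canread"] = new OptionSetValue(0);                    // 0 = Not Allowed, 4 = Allowed
        permission["cancreate"] = new OptionSetValue(0);
        permission["canupdate"] = new OptionSetValue(0);
        service.Create(permission);
    }
}

The 0 and 4 values come from the field permission picklist (Not Allowed/Allowed), so switching a value to 4 would grant that permission to the profile's members instead of denying it.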

article-image-applying-linq-entities-wcf-service
Packt
05 Feb 2013
15 min read

Applying LINQ to Entities to a WCF Service

(For more resources related to this topic, see here.)

Creating the LINQNorthwind solution
The first thing we need to do is create a test solution. In this article, we will start from the data access layer. Perform the following steps:

Start Visual Studio.
Create a new class library project LINQNorthwindDAL with solution name LINQNorthwind (make sure Create directory for solution is checked so that you can specify the solution name).
Delete the Class1.cs file.
Add a new class ProductDAO to the project.
Change the new class ProductDAO to be public.

Now you should have a new solution with the empty data access layer class. Next, we will add a model to this layer and create the business logic layer and the service interface layer.

Modeling the Northwind database
In the previous section, we created the LINQNorthwind solution. Next, we will apply LINQ to Entities to this new solution. For the data access layer, we will use LINQ to Entities instead of the raw ADO.NET data adapters. As you will see in the next section, we will use one LINQ statement to retrieve product information from the database, and the update LINQ statements will handle the concurrency control for us easily and reliably.

As you may recall, to use LINQ to Entities in the data access layer of our WCF service, we first need to add an entity data model to the project. In the Solution Explorer, right-click on the project item LINQNorthwindDAL, select menu options Add | New Item..., and then choose Visual C# Items | ADO.NET Entity Data Model as the template and enter Northwind.edmx as the name. Select Generate from database, choose the existing Northwind connection, and add the Products table to the model. Click on the Finish button to add the model to the project.

The new column RowVersion should be in the Product entity. If it is not there, add it to the database table with a type of Timestamp and refresh the entity data model from the database. In the EDM designer, select the RowVersion property of the Product entity and change its Concurrency Mode from None to Fixed. Note that its StoreGeneratedPattern should remain as Computed.

This will generate a file called Northwind.Context.cs, which contains the DbContext for the Northwind database. Another file called Product.cs is also generated, which contains the Product entity class. You need to save the data model in order to see these two files in the Solution Explorer. In the Visual Studio Solution Explorer, the Northwind.Context.cs file is under the template file Northwind.Context.tt and Product.cs is under Northwind.tt. However, in Windows Explorer, they are two separate files from the template files.

Creating the business domain object project
In Implementing a WCF Service in the Real World, we created a business domain object (BDO) project to hold the intermediate data between the data access objects and the service interface objects. In this section, we will also add such a project to the solution for the same purpose.

In the Solution Explorer, right-click on the LINQNorthwind solution.
Select Add | New Project... to add a new class library project named LINQNorthwindBDO.
Delete the Class1.cs file.
Add a new class file ProductBDO.cs.
Change the new class ProductBDO to be public.
Add the following properties to this class: ProductID ProductName QuantityPerUnit UnitPrice Discontinued UnitsInStock UnitsOnOrder ReorderLevel RowVersion The following is the code list of the ProductBDO class: using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Threading.Tasks; namespace LINQNorthwindBDO { public class ProductBDO { public int ProductID { get; set; } public string ProductName { get; set; } public string QuantityPerUnit { get; set; } public decimal UnitPrice { get; set; } public int UnitsInStock { get; set; } public int ReorderLevel { get; set; } public int UnitsOnOrder { get; set; } public bool Discontinued { get; set; } public byte[] RowVersion { get; set; } } } As noted earlier, in this article we will use BDO to hold the intermediate data between the data access objects and the data contract objects. Besides this approach, there are some other ways to pass data back and forth between the data access layer and the service interface layer, and two of them are listed as follows: The first one is to expose the Entity Framework context objects from the data access layer up to the service interface layer. In this way, both the service interface layer and the business logic layer—we will implement them soon in following sections—can interact directly with the Entity Framework. This approach is not recommended as it goes against the best practice of service layering. Another approach is to use self-tracking entities. Self-tracking entities are entities that know how to do their own change tracking regardless of which tier those changes are made on. You can expose self-tracking entities from the data access layer to the business logic layer, then to the service interface layer, and even share the entities with the clients. Because self-tracking entities are independent of entity context, you don't need to expose the entity context objects. The problem of this approach is, you have to share the binary files with all the clients, thus it is the least interoperable approach for a WCF service. Now this approach is not recommended by Microsoft, so in this book we will not discuss it. Using LINQ to Entities in the data access layer Next we will modify the data access layer to use LINQ to Entities to retrieve and update products. We will first create GetProduct to retrieve a product from the database and then create UpdateProduct to update a product in the database. Adding a reference to the BDO project Now we have the BDO project in the solution, we need to modify the data access layer project to reference it. In the Solution Explorer, right-click on the LINQNorthwindDAL project. Select Add Reference.... Select the LINQNorthwindBDO project from the Projects tab under Solution. Click on the OK button to add the reference to the project. Creating GetProduct in the data access layer We can now create the GetProduct method in the data access layer class ProductDAO, to use LINQ to Entities to retrieve a product from the database. We will first create an entity DbContext object and then use LINQ to Entities to get the product from the DbContext object. The product we get from DbContext will be a conceptual entity model object. However, we don't want to pass this product object back to the upper-level layer because we don't want to tightly couple the business logic layer with the data access layer. 
Therefore, we will convert this entity model product object to a ProductBDO object and then pass this ProductBDO object back to the upper-level layers. To create the new method, first add the following using statement to the ProductBDO class: using LINQNorthwindBDO; Then add the following method to the ProductBDO class: public ProductBDO GetProduct(int id) { ProductBDO productBDO = null; using (var NWEntities = new NorthwindEntities()) { Product product = (from p in NWEntities.Products where p.ProductID == id select p).FirstOrDefault(); if (product != null) productBDO = new ProductBDO() { ProductID = product.ProductID, ProductName = product.ProductName, QuantityPerUnit = product.QuantityPerUnit, UnitPrice = (decimal)product.UnitPrice, UnitsInStock = (int)product.UnitsInStock, ReorderLevel = (int)product.ReorderLevel, UnitsOnOrder = (int)product.UnitsOnOrder, Discontinued = product.Discontinued, RowVersion = product.RowVersion }; } return productBDO; } Within the GetProduct method, we had to create an ADO.NET connection, create an ADO. NET command object with that connection, specify the command text, connect to the Northwind database, and send the SQL statement to the database for execution. After the result was returned from the database, we had to loop through the DataReader and cast the columns to our entity object one by one. With LINQ to Entities, we only construct one LINQ to Entities statement and everything else is handled by LINQ to Entities. Not only do we need to write less code, but now the statement is also strongly typed. We won't have a runtime error such as invalid query syntax or invalid column name. Also, a SQL Injection attack is no longer an issue, as LINQ to Entities will also take care of this when translating LINQ expressions to the underlying SQL statements. Creating UpdateProduct in the data access layer In the previous section, we created the GetProduct method in the data access layer, using LINQ to Entities instead of ADO.NET. Now in this section, we will create the UpdateProduct method, using LINQ to Entities instead of ADO.NET. Let's create the UpdateProduct method in the data access layer class ProductBDO, as follows: public bool UpdateProduct( ref ProductBDO productBDO, ref string message) { message = "product updated successfully"; bool ret = true; using (var NWEntities = new NorthwindEntities()) { var productID = productBDO.ProductID; Product productInDB = (from p in NWEntities.Products where p.ProductID == productID select p).FirstOrDefault(); // check product if (productInDB == null) { throw new Exception("No product with ID " + productBDO.ProductID); } NWEntities.Products.Remove(productInDB); // update product productInDB.ProductName = productBDO.ProductName; productInDB.QuantityPerUnit = productBDO.QuantityPerUnit; productInDB.UnitPrice = productBDO.UnitPrice; productInDB.Discontinued = productBDO.Discontinued; productInDB.RowVersion = productBDO.RowVersion; NWEntities.Products.Attach(productInDB); NWEntities.Entry(productInDB).State = System.Data.EntityState.Modified; int num = NWEntities.SaveChanges(); productBDO.RowVersion = productInDB.RowVersion; if (num != 1) { ret = false; message = "no product is updated"; } } return ret; } Within this method, we first get the product from database, making sure the product ID is a valid value in the database. Then, we apply the changes from the passed-in object to the object we have just retrieved from the database, and submit the changes back to the database. 
Let's go through a few notes about this method: You have to save productID in a new variable and then use it in the LINQ query. Otherwise, you will get an error saying Cannot use ref or out parameter 'productBDO' inside an anonymous method, lambda expression, or query expression. If Remove and Attach are not called, RowVersion from database (not from the client) will be used when submitting to database, even though you have updated its value before submitting to the database. An update will always succeed, but without concurrency control. If Remove is not called and you call the Attach method, you will get an error saying The object cannot be attached because it is already in the object context. If the object state is not set to be Modified, Entity Framework will not honor your changes to the entity object and you will not be able to save any change to the database. Creating the business logic layer Now let's create the business logic layer. Right click on the solution item and select Add | New Project.... Add a class library project with the name LINQNorthwindLogic. Add a project reference to LINQNorthwindDAL and LINQNorthwindBDO to this new project. Delete the Class1.cs file. Add a new class file ProductLogic.cs. Change the new class ProductLogic to be public. Add the following two using statements to the ProductLogic.cs class file: using LINQNorthwindDAL; using LINQNorthwindBDO; Add the following class member variable to the ProductLogic class: ProductDAO productDAO = new ProductDAO(); Add the following new method GetProduct to the ProductLogic class: public ProductBDO GetProduct(int id) { return productDAO.GetProduct(id); } Add the following new method UpdateProduct to the ProductLogic class: public bool UpdateProduct( ref ProductBDO productBDO, ref string message) { var productInDB = GetProduct(productBDO.ProductID); // invalid product to update if (productInDB == null) { message = "cannot get product for this ID"; return false; } // a product cannot be discontinued // if there are non-fulfilled orders if (productBDO.Discontinued == true && productInDB.UnitsOnOrder > 0) { message = "cannot discontinue this product"; return false; } else { return productDAO.UpdateProduct(ref productBDO, ref message); } } Build the solution. We now have only one more step to go, that is, adding the service interface layer. Creating the service interface layer The last step is to create the service interface layer. Right-click on the solution item and select Add | New Project.... Add a WCF service library project with the name of LINQNorthwindService. Add a project reference to LINQNorthwindLogic and LINQNorthwindBDO to this new service interface project. Change the service interface file IService1.cs, as follows: Change its filename from IService1.cs to IProductService.cs. Change the interface name from IService1 to IProductService, if it is not done for you. 
Remove the original two service operations and add the following two new operations: [OperationContract] [FaultContract(typeof(ProductFault))] Product GetProduct(int id); [OperationContract] [FaultContract(typeof(ProductFault))] bool UpdateProduct(ref Product product, ref string message); Remove the original CompositeType and add the following data contract classes: [DataContract] public class Product { [DataMember] public int ProductID { get; set; } [DataMember] public string ProductName { get; set; } [DataMember] public string QuantityPerUnit { get; set; } [DataMember] public decimal UnitPrice { get; set; } [DataMember] public bool Discontinued { get; set; } [DataMember] public byte[] RowVersion { get; set; } } [DataContract] public class ProductFault { public ProductFault(string msg) { FaultMessage = msg; } [DataMember] public string FaultMessage; } The following is the content of the IProductService.cs file: using System; using System.Collections.Generic; using System.Linq; using System.Runtime.Serialization; using System.ServiceModel; using System.Text; namespace LINQNorthwindService { [ServiceContract] public interface IProductService { [OperationContract] [FaultContract(typeof(ProductFault))] Product GetProduct(int id); [OperationContract] [FaultContract(typeof(ProductFault))] bool UpdateProduct(ref Product product, ref string message); } [DataContract] public class Product { [DataMember] public int ProductID { get; set; } [DataMember] public string ProductName { get; set; } [DataMember] public string QuantityPerUnit { get; set; } [DataMember] public decimal UnitPrice { get; set; } [DataMember] public bool Discontinued { get; set; } [DataMember] public byte[] RowVersion { get; set; } } [DataContract] public class ProductFault { public ProductFault(string msg) { FaultMessage = msg; } [DataMember] public string FaultMessage; } } Change the service implementation file Service1.cs, as follows: Change its filename from Service1.cs to ProductService.cs. Change its class name from Service1 to ProductService, if it is not done for you. 
Add the following two using statements to the ProductService.cs file: using LINQNorthwindLogic; using LINQNorthwindBDO; Add the following class member variable: ProductLogic productLogic = new ProductLogic(); Remove the original two methods and add following two methods: public Product GetProduct(int id) { ProductBDO productBDO = null; try { productBDO = productLogic.GetProduct(id); } catch (Exception e) { string msg = e.Message; string reason = "GetProduct Exception"; throw new FaultException<ProductFault> (new ProductFault(msg), reason); } if (productBDO == null) { string msg = string.Format("No product found for id {0}", id); string reason = "GetProduct Empty Product"; throw new FaultException<ProductFault> (new ProductFault(msg), reason); } Product product = new Product(); TranslateProductBDOToProductDTO(productBDO, product); return product; } public bool UpdateProduct(ref Product product, ref string message) { bool result = true; // first check to see if it is a valid price if (product.UnitPrice <= 0) { message = "Price cannot be <= 0"; result = false; } // ProductName can't be empty else if (string.IsNullOrEmpty(product.ProductName)) { message = "Product name cannot be empty"; result = false; } // QuantityPerUnit can't be empty else if (string.IsNullOrEmpty(product.QuantityPerUnit)) { message = "Quantity cannot be empty"; result = false; } else { try { var productBDO = new ProductBDO(); TranslateProductDTOToProductBDO(product, productBDO); result = productLogic.UpdateProduct( ref productBDO, ref message); product.RowVersion = productBDO.RowVersion; } catch (Exception e) { string msg = e.Message; throw new FaultException<ProductFault> (new ProductFault(msg), msg); } } return result; } Because we have to convert between the data contract objects and the business domain objects, we need to add the following two methods: private void TranslateProductBDOToProductDTO( ProductBDO productBDO, Product product) { product.ProductID = productBDO.ProductID; product.ProductName = productBDO.ProductName; product.QuantityPerUnit = productBDO.QuantityPerUnit; product.UnitPrice = productBDO.UnitPrice; product.Discontinued = productBDO.Discontinued; product.RowVersion = productBDO.RowVersion; } private void TranslateProductDTOToProductBDO( Product product, ProductBDO productBDO) { productBDO.ProductID = product.ProductID; productBDO.ProductName = product.ProductName; productBDO.QuantityPerUnit = product.QuantityPerUnit; productBDO.UnitPrice = product.UnitPrice; productBDO.Discontinued = product.Discontinued; productBDO.RowVersion = product.RowVersion; } Change the config file App.config, as follows: Change Service1 to ProductService. Remove the word Design_Time_Addresses. Change the port to 8080. Now, BaseAddress should be as follows: http://localhost:8080/LINQNorthwindService/ProductService/ Copy the connection string from the App.config file in the LINQNorthwindDAL project to the following App.config file: <connectionStrings> <add name="NorthwindEntities" connectionString="metadata=res://*/Northwind. csdl|res://*/Northwind.ssdl|res://*/Northwind. msl;provider=System.Data.SqlClient;provider connection string="data source=localhost;initial catalog=Northwind;integrated security=True;Multipl eActiveResultSets=True;App=EntityFramework"" providerName="System.Data.EntityClient" /> </connectionStrings> You should leave the original connection string untouched in the App.config file in the data access layer project. This connection string is used by the Entity Model Designer at design time. 
It is not used at all during runtime, but if you remove it, whenever you open the entity model designer in Visual Studio, you will be prompted to specify a connection to your database. Now build the solution and there should be no errors. Testing the service with the WCF Test Client Now we can run the program to test the GetProduct and UpdateProduct operations with the WCF Test Client. You may need to run Visual Studio as administrator to start the WCF Test Client. First set LINQNorthwindService as the startup project and then press Ctrl + F5 to start the WCF Test Client. Double-click on the GetProduct operation, enter a valid product ID, and click on the Invoke button. The detailed product information should be retrieved and displayed on the screen, as shown in the following screenshot: Now double-click on the UpdateProduct operation, enter a valid product ID, and specify a name, price, quantity per unit, and then click on Invoke. This time you will get an exception as shown in the following screenshot: From this image we can see that the update failed. The error details, which are in HTML View in the preceding screenshot, actually tell us it is a concurrency error. This is because, from WCF Test Client, we can't enter a row version as it is not a simple datatype parameter, thus we didn't pass in the original RowVersion for the object to be updated, and when updating the object in the database, the Entity Framework thinks this product has been updated by some other users.
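The concurrency check passes when a client sends back the RowVersion it originally read. The following console client is a hedged sketch, not part of the recipe; it assumes a service reference has been added to the LINQNorthwindService endpoint, generating a proxy class named ProductServiceClient with the Product data contract shown earlier:

using System;

class Program
{
    static void Main()
    {
        var client = new ProductServiceClient();

        // Read the product first so we hold its current RowVersion.
        Product product = client.GetProduct(23);

        product.UnitPrice += 1.0m;          // make a change
        string message = string.Empty;

        // Because product.RowVersion still carries the value read from the
        // database, the Fixed concurrency mode can detect conflicting updates.
        bool ok = client.UpdateProduct(ref product, ref message);

        Console.WriteLine(ok
            ? "Updated; new RowVersion received"
            : "Failed: " + message);
        client.Close();
    }
}

Because GetProduct returns the current RowVersion and UpdateProduct sends it back, Entity Framework can compare it with the value stored in the database and reject the update only if another user changed the row in between, which is exactly the check the WCF Test Client cannot exercise.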

article-image-behavior-driven-development-selenium-webdriver
Packt
31 Jan 2013
15 min read

Behavior-driven Development with Selenium WebDriver

Behavior-driven Development (BDD) is an agile software development practice that enhances the paradigm of Test Driven Development (TDD) and acceptance tests, and encourages the collaboration between developers, quality assurance, domain experts, and stakeholders. Behavior-driven Development was introduced by Dan North in the year 2003 in his seminal article available at http://dannorth.net/introducing-bdd/. In this article by Unmesh Gundecha, author of Selenium Testing Tools Cookbook, we will cover: Using Cucumber-JVM and Selenium WebDriver in Java for BDD Using SpecFlow.NET and Selenium WebDriver in .NET for BDD Using JBehave and Selenium WebDriver in Java Using Capybara, Cucumber, and Selenium WebDriver in Ruby (For more resources related to this topic, see here.) Using Cucumber-JVM and Selenium WebDriver in Java for BDD BDD/ATDD is becoming widely accepted practice in agile software development, and Cucumber-JVM is a mainstream tool used to implement this practice in Java. Cucumber-JVM is based on Cucumber framework, widely used in Ruby on Rails world. Cucumber-JVM allows developers, QA, and non-technical or business participants to write features and scenarios in a plain text file using Gherkin language with minimal restrictions about grammar in a typical Given, When, and Then structure. This feature file is then supported by a step definition file, which implements automated steps to execute the scenarios written in a feature file. Apart from testing APIs with Cucumber-JVM, we can also test UI level tests by combining Selenium WebDriver. In this recipe, we will use Cucumber-JVM, Maven, and Selenium WebDriver for implementing tests for the fund transfer feature from an online banking application. Getting ready Create a new Maven project named FundTransfer in Eclipse. Add the following dependencies to POM.XML: <project xsi_schemaLocation="http://maven.apache.org/POM/4.0.0 http:// maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>FundTransfer</groupId> <artifactId>FundTransfer</artifactId> <version>0.0.1-SNAPSHOT</version> <dependencies> <dependency> <groupId>info.cukes</groupId> <artifactId>cucumber-java</artifactId> <version>1.0.14</version> <scope>test</scope> </dependency> <dependency> <groupId>info.cukes</groupId> <artifactId>cucumber-junit</artifactId> <version>1.0.14</version> <scope>test</scope> </dependency> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>4.10</version> <scope>test</scope> </dependency> <dependency> <groupId>org.seleniumhq.selenium</groupId> <artifactId>selenium-java</artifactId> <version>2.25.0</version> </dependency> </dependencies> </project> How to do it... Perform the following steps for creating BDD/ATDD tests with Cucumber-JVM: Select the FundTransfer project in Package Explorer in Eclipse. Select and right-click on src/test/resources in Package Explorer. Select New | Package from the menu to add a new package as shown in the following screenshot: Enter fundtransfer.test in the Name: textbox and click on the Finish button. Add a new file to this package. 
Name this file as fundtransfer.feature as shown in the following screenshot: Add the Fund Transfer feature and scenarios to this file: Feature: Customer Transfer's Fund As a customer, I want to transfer funds so that I can send money to my friends and family Scenario: Valid Payee Given the user is on Fund Transfer Page When he enters "Jim" as payee name And he enters "100" as amount And he Submits request for Fund Transfer Then ensure the fund transfer is complete with "$100 transferred successfully to Jim!!" message Scenario: Invalid Payee Given the user is on Fund Transfer Page When he enters "Jack" as payee name And he enters "100" as amount And he Submits request for Fund Transfer Then ensure a transaction failure message "Transfer failed!! 'Jack' is not registered in your List of Payees" is displayed Scenario: Account is overdrawn past the overdraft limit Given the user is on Fund Transfer Page When he enters "Tim" as payee name And he enters "1000000" as amount And he Submits request for Fund Transfer Then ensure a transaction failure message "Transfer failed!! account cannot be overdrawn" is displayed Select and right-click on src/test/java in Package Explorer. Select New | Package from menu to add a new Package as shown in the following screenshot: Create a class named FundTransferStepDefs in the newly-created package. Add the following code to this class: package fundtransfer.test; import org.openqa.selenium.WebDriver; import org.openqa.selenium.chrome.ChromeDriver; import org.openqa.selenium.WebElement; import org.openqa.selenium.By; import cucumber.annotation.*; import cucumber.annotation.en.*; import static org.junit.Assert.assertEquals; public class FundTransferStepDefs { protected WebDriver driver; @Before public void setUp() { driver = new ChromeDriver(); } @Given("the user is on Fund Transfer Page") public void The_user_is_on_fund_transfer_page() { driver.get("http://dl.dropbox.com/u/55228056/fundTransfer. html"); } @When("he enters "([^"]*)" as payee name") public void He_enters_payee_name(String payeeName) { driver.findElement(By.id("payee")).sendKeys(payeeName); } @And("he enters "([^"]*)" as amount") public void He_enters_amount(String amount) { driver.findElement(By.id("amount")).sendKeys(amount); } @And("he Submits request for Fund Transfer") public void He_submits_request_for_fund_transfer() { driver.findElement(By.id("transfer")).click(); Behavior-driven Development 276 } @Then("ensure the fund transfer is complete with "([^"]*)" message") public void Ensure_the_fund_transfer_is_complete(String msg) { WebElement message = driver.findElement(By.id("message")); assertEquals(message.getText(),msg); } @Then("ensure a transaction failure message "([^"]*)" is displayed") public void Ensure_a_transaction_failure_message(String msg) { WebElement message = driver.findElement(By.id("message")); assertEquals(message.getText(),msg); } @After public void tearDown() { driver.close(); } } Create a support class RunCukesTest which will define the Cucumber-JVM configurations: package fundtransfer.test; import cucumber.junit.Cucumber; import org.junit.runner.RunWith; @RunWith(Cucumber.class) @Cucumber.Options(format = {"pretty", "html:target/cucumber-htmlreport", "json-pretty:target/cucumber-report.json"}) public class RunCukesTest { } To run the tests in Maven life cycle select the FundTransfer project in Package Explorer. Right-click on the project name and select Run As | Maven test. Maven will execute all the tests from the project. 
At the end of the test, an HTML report will be generated as shown in the following screenshot. To view this report open index.html in the targetcucumber-htmlreport folder: How it works... Creating tests in Cucumber-JVM involves three major steps: writing a feature file, implementing automated steps using the step definition file, and creating support code as needed. For writing features, Cucumber-JVM uses 100 percent Gherkin syntax. The feature file describes the feature and then the scenarios to test the feature: Feature: Customer Transfer's Fund As a customer, I want to transfer funds so that I can send money to my friends and family You can write as many scenarios as needed to test the feature in the feature file. The scenario section contains the name and steps to execute the defined scenario along with test data required to execute that scenario with the application: Scenario: Valid Payee Given the user is on Fund Transfer Page When he enters "Jim" as payee name And he enters "100" as amount And he Submits request for Fund Transfer Then ensure the fund transfer is complete with "$100 transferred successfully to Jim!!" message Team members use these feature files and scenarios to build and validate the system. Frameworks like Cucumber or JBehave provide an ability to automatically validate the features by allowing us to implement automated steps. For this we need to create the step definition file that maps the steps from the feature file to automation code. Step definition files implement a method for steps using special annotations. For example, in the following code, the @When annotation is used to map the step "When he enters "Jim" as payee name" from the feature file in the step definition file. When this step is to be executed by the framework, the He_enters_payee_name() method will be called by passing the data extracted using regular expressions from the step: @When("he enters "([^"]*)" as payee name") public void He_enters_payee_name(String payeeName) { driver.findElement(By.id("payee")).sendKeys(payeeName); } In this method, the WebDriver code is written to locate the payee name textbox and enter the name value using the sendKeys() method. The step definition file acts like a template for all the steps from the feature file while scenarios can use a mix and match of the steps based on the test conditions. A helper class RunCukesTest is defined to provide Cucumber-JVM configurations such as how to run the features and steps with JUnit, report format, and location, shown as follows: @RunWith(Cucumber.class) @Cucumber.Options(format = {"pretty", "html:target/cucumber-htmlreport", "json-pretty:target/cucumber-report.json"}) public class RunCukesTest { } There's more… In this example, step definition methods are calling Selenium WebDriver methods directly. 
However, a layer of abstraction can be created using the Page object where a separate class is defined with the definition of all the elements from FundTransferPage: import org.openqa.selenium.WebDriver; import org.openqa.selenium.WebElement; import org.openqa.selenium.support.CacheLookup; import org.openqa.selenium.support.FindBy; import org.openqa.selenium.support.PageFactory; public class FundTransferPage { @FindBy(id = "payee") @CacheLookup Chapter 11 279 public WebElement payeeField; @FindBy(id = "amount") public WebElement amountField; @FindBy(id = "transfer") public WebElement transferButton; @FindBy(id = "message") public WebElement messageLabel; public FundTransferPage(WebDriver driver) { if(!"Online Fund Transfers".equals(driver.getTitle())) throw new IllegalStateException("This is not Fund Transfer Page"); PageFactory.initElements(driver, this); } } Using SpecFlow.NET and Selenium WebDriver in .NET for BDD We saw how to use Selenium WebDriver with Cucumber-JVM for BDD/ATDD. Now let's try using a similar combination in .NET using SpecFlow.NET. We can implement BDD in .NET using the SpecFlow.NET and Selenium WebDriver .NET bindings. SpecFlow.NET is inspired by Cucumber and uses the same Gherkin language for writing specs. In this recipe, we will implement tests for the Fund Transfer feature using SpecFlow.NET. We will also use the Page objects for FundTransferPage in this recipe. Getting ready This recipe is created with SpecFlow.NET Version 1.9.0 and Microsoft Visual Studio Professional 2012. Download and install SpecFlow from Visual Studio Gallery http://visualstudiogallery.msdn.microsoft.com/9915524d-7fb0-43c3-bb3c-a8a14fbd40ee. Download and install NUnit Test Adapter from http://visualstudiogallery.msdn.microsoft.com/9915524d-7fb0-43c3-bb3c-a8a14fbd40ee. This will install the project template and other support files for SpecFlow.NET in Visual Studio 2012. How to do it... You will find the Fund Transfer feature in any online banking application where users can transfer funds to a registered payee who could be a family member or a friend. Let's test this feature using SpecFlow.NET by performing the following steps: Launch Microsoft Visual Studio. In Visual Studio create a new project by going to File | New | Project. Select Visual C# Class Library Project. Name the project FundTransfer.specs as shown in the following screenshot: Next, add SpecFlow.NET, WebDriver, and NUnit using NuGet. Right-click on the FundTransfer.specs solution in Solution Explorer and select Manage NuGet Packages... as shown in the following screenshot: On the FundTransfer.specs - Manage NuGet Packages dialog box, select Online, and search for SpecFlow packages. The search will result with the following suggestions: Select SpecFlow.NUnit from the list and click on Install button. NuGet will download and install SpecFlow.NUnit and any other package dependencies to the solution. This will take a while. Next, search for the WebDriver package on the FundTransfer.specs - Manage NuGet Packages dialog box. Select Selenium WebDriver and Selenium WebDriver Support Classes from the list and click on the Install button. Close the FundTransfer.specs - Manage NuGet Packages dialog box. Creating a spec file The steps for creating a spec file are as follows: Right-click on the FundTransfer.specs solution in Solution Explorer. Select Add | New Item. On the Add New Item – FundTransfer.specs dialog box, select SpecFlow Feature File and enter FundTransfer.feature in the Name: textbox. 
Click Add button as shown in the following screenshot: In the Editor window, your will see the FundTransfer.feature tab. By default, SpecFlow will add a dummy feature in the feature file. Replace the content of this file with the following feature and scenarios: Feature: Customer Transfer's Fund As a customer, I want to transfer funds so that I can send money to my friends and family Scenario: Valid Payee Given the user is on Fund Transfer Page When he enters "Jim" as payee name And he enters "100" as amount And he Submits request for Fund Transfer Then ensure the fund transfer is complete with "$100 transferred successfully to Jim!!" message Scenario: Invalid Payee Given the user is on Fund Transfer Page When he enters "Jack" as payee name And he enters "100" as amount And he Submits request for Fund Transfer Then ensure a transaction failure message "Transfer failed!! 'Jack' is not registered in your List of Payees" is displayed Scenario: Account is overdrawn past the overdraft limit Given the user is on Fund Transfer Page When he enters "Tim" as payee name And he enters "1000000" as amount And he Submits request for Fund Transfer Then ensure a transaction failure message "Transfer failed!! account cannot be overdrawn" is displayed Creating a step definition file The steps for creating a step definition file are as follows: To add a step definition file, right-click on the FundTransfer.sepcs solution in Solution Explorer. Select Add | New Item. On the Add New Item - FundTransfer.specs dialog box, select SpecFlow Step Definition File and enter FundTransferStepDefs.cs in the Name: textbox. Click on Add button. A new C# class will be added with dummy steps. Replace the content of this file with the following code: using System; using System.Collections.Generic; using System.Linq; using System.Text; using TechTalk.SpecFlow; using NUnit.Framework; using OpenQA.Selenium; namespace FundTransfer.specs { [Binding] public class FundTransferStepDefs { FundsTransferPage _ftPage = new FundsTransferPage(Environment.Driver); [Given(@"the user is on Fund Transfer Page")] public void GivenUserIsOnFundTransferPage() { Environment.Driver.Navigate().GoToUrl("http:// localhost:64895/Default.aspx"); } [When(@"he enters ""(.*)"" as payee name")] public void WhenUserEneteredIntoThePayeeNameField(string payeeName) { _ftPage.payeeNameField.SendKeys(payeeName); } [When(@"he enters ""(.*)"" as amount")] public void WhenUserEneteredIntoTheAmountField(string amount) { _ftPage.amountField.SendKeys(amount); } [When(@"he enters ""(.*)"" as amount above his limit")] public void WhenUserEneteredIntoTheAmountFieldAboveLimit (string amount) { _ftPage.amountField.SendKeys(amount); } [When(@"he Submits request for Fund Transfer")] public void WhenUserPressTransferButton() { _ftPage.transferButton.Click(); } [Then(@"ensure the fund transfer is complete with ""(.*)"" message")] public void ThenFundTransferIsComplete(string message) { Assert.AreEqual(message, _ftPage.messageLabel.Text); } [Then(@"ensure a transaction failure message ""(.*)"" is displayed")] public void ThenFundTransferIsFailed(string message) { Assert.AreEqual(message, _ftPage.messageLabel.Text); } } } Defining a Page object and a helper class The steps for defining a Page object and a helper class are as follows: Define a Page object for the Fund Transfer Page by adding a new C# class file. Name this class FundTransferPage. 
Copy the following code to this class: using System; using System.Collections.Generic; using System.Linq; using System.Text; using OpenQA.Selenium; using OpenQA.Selenium.Support.PageObjects; namespace FundTransfer.specs { class FundTransferPage { public FundTransferPage(IWebDriver driver) { PageFactory.InitElements(driver, this); } [FindsBy(How = How.Id, Using = "payee")] public IWebElement payeeNameField { get; set; } [FindsBy(How = How.Id, Using = "amount")] public IWebElement amountField { get; set; } [FindsBy(How = How.Id, Using = "transfer")] public IWebElement transferButton { get; set; } [FindsBy(How = How.Id, Using = "message")] public IWebElement messageLabel { get; set; } } } We need a helper class that will provide an instance of WebDriver and perform clean up activity at the end. Name this class Environment and copy the following code to this class: using System; using System.Collections.Generic; using System.Linq; using System.Text; using OpenQA.Selenium; using OpenQA.Selenium.Chrome; using TechTalk.SpecFlow; namespace FundTransfer.specs { [Binding] public class Environment { private static ChromeDriver driver; public static IWebDriver Driver { get { return driver ?? (driver = new ChromeDriver(@"C:ChromeDriver")); } } [AfterTestRun] public static void AfterTestRun() { Driver.Close(); Driver.Quit(); driver = null; } } } Build the solution. Running tests The steps for running tests are as follows: Open the Test Explorer window by clicking the Test Explorer option on Test | Windows on Main Menu. It will display the three scenarios listed in the feature file as shown in the following screenshot: Click on Run All to test the feature as shown in the following screenshot: How it works... SpecFlow.NET first needs the feature files for the features we will be testing. SpecFlow.NET supports the Gherkin language for writing features. In the step definition file, we create a method for each step written in a feature file using the Given, When, and Then attributes. These methods can also take the parameter values specified in the steps using the arguments. Following is an example where we are entering the name of the payee: [When(@"he enters ""(.*)"" as payee name")] public void WhenUserEneteredIntoThePayeeNameField(string payeeName) { _ftPage.payeeNameField.SendKeys(payeeName); } In this example, we are automating the "When he enters "Jim" as payee name" step. We used the When attribute and created a method: WhenUserEneteredIntoThePayeeNameField. This method will need the value of the payee name embedded in the step which is extracted using the regular expression by the SpecFlow.NET. Inside the method, we are using an instance of the FundTransferPage class and calling its payeeNameField member's SendKeysl() method, passing the name of the payee extracted from the step. Using the Page object helps in abstracting locator and page details from the step definition files, making it more manageable and easy to maintain. SpecFlow.NET automatically generates the NUnit test code when the project is built. Using the Visual Studio Test Explorer and NUnit Test Adaptor for Visual Studio, these tests are executed and the features are validated.
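As a possible refinement, not part of this recipe, SpecFlow also supports Gherkin scenario outlines, so the two failure scenarios could be expressed as one parameterized scenario; the placeholders below are intended to reuse the step definitions shown earlier without modification:

# Sketch of a scenario outline covering the failure cases
Scenario Outline: Rejected fund transfers
  Given the user is on Fund Transfer Page
  When he enters "<payee>" as payee name
  And he enters "<amount>" as amount
  And he Submits request for Fund Transfer
  Then ensure a transaction failure message "<message>" is displayed

  Examples:
    | payee | amount  | message                                                            |
    | Jack  | 100     | Transfer failed!! 'Jack' is not registered in your List of Payees |
    | Tim   | 1000000 | Transfer failed!! account cannot be overdrawn                      |

The successful transfer keeps its own scenario because it asserts on a different Then step.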
article-image-smart-processes-using-rules
Packt
25 Jan 2013
16 min read

Smart Processes Using Rules

(For more resources related to this topic, see here.) Good old integration patterns What we've learnt from experience about jBPM3 is that a rule engine can become very handy when evaluating different situations and making automatic decisions based on the information available. Based on my experience in consulting, I've noticed that people who understand how a process engine works and feel comfortable with it, start looking at rule engines such as Drools. The most intuitive first step is to delegate the business decisions in your processes and the data validations to a rule engine. In the past, adopting one of these two different technologies at the same time was difficult, mostly because of the learning curve as well as the maturity and investment required by a company to learn and use both technologies at once. At the end of the day, companies spend time and money to create in-house integrations and solutions so that they can merge these two worlds. The following example shows what people have done with jBPM3 and the Drools Rule Engine: The first and most typical use case is to use a rule engine to choose between different paths in a process. Usually, the information that is sent to the rule engine is the same information that is flowing through the process tasks or just some pieces of it; we expect a return value from the rule engine that will be used to select which path to take. Most of the time, we send small pieces of information (for example, the age or salary of a person, and so on) and we expect to get a Boolean (true/false) value, in the case that we want to decide between just two paths, or a value (integers such as 1, 2, 3, and so on) that will be used to match each outgoing sequence flow. In this kind of integration, the rule engine is considered just an external component. We expect a very stateless behavior and an immediate response from the rule engine. The previous figure shows a similar situation, when we want to validate some data and then define a task inside our process to achieve this information's validation or decoration. Usually we send a set of objects that we want to validate or decorate, and we expect an immediate answer from the rule engine. The type of answer that we receive depends on the type of validation or the decoration rules that we write. Usually, these interactions interchange complex data structures, such as a full graph of objects. And that's it! Those two examples show the classic interaction from a process engine to a rule engine. You may have noticed the stateless nature of both the examples, where the most interesting features of the rule engine are not being used at all. In order to understand a little bit better why a rule engine is an important tool and the advantages of using it (in contrast to any other service), we need to understand some of the basic concepts behind it. The following section briefly introduces the Drools Rule Engine and its features, as well as an explanation of the basic topics that we need to know in order to use it. The Drools Rule Engine The reason why rule engines are extremely useful is that they allow us to express declaratively what to do in specific scenarios. In contrast to imperative languages such as Java, the Rule Engine provides a declarative language that is used to evaluate the available information. In this article we will analyze some Drools rules, but this article will not explain in detail the Drools Rule syntax. 
For more information on this take a look at the official documentation at http://docs.jboss.org/drools/release/5.5.0.Final/drools-expert-docs/html_single/. Let's analyze the following simple example to understand the differences between the declarative approaches and the imperative approaches: rule "over 18 enabled to drive" when $p: Person( age > 18, enabledToDrive == false) then $p.setEnabledToDrive(true); update($p); end rule "over 21 enabled to vote" when $p: Person( age > 21, enabledToVote == false) Then $p.setEnabledToVote(true); update($p); end Before explaining what these two rules are doing (which is kind of obvious as they are self-descriptive!), let's analyze the following java code snippet: if(person.getAge() > 18 && person.isEnabledToDrive() == false){ person.setEnabledToDrive(true); } if(person.getAge() > 21 && person.isEnabledToVote() == false){ person.setEnabledToVote(true); } Usually most people, who are not familiar with rule engines but have heard of them, think that rule engines are used to extract if/else statements from the application's code. This definition is far from reality, and doesn't explain the power of rule engines. First of all, rule engines provide us a declarative language to express our rules, in contrast to the imperative nature of languages such as Java. In Java code, we know that the first line is evaluated first, so if the expression inside the if statement evaluates to true, the next line will be executed; if not, the execution will jump to the next if statement. There are no doubts about how Java will analyze and execute these statements: one after the other until there are no more instructions. We commonly say that Java is an imperative language in which we specify the actions that need to be executed and the sequence of these actions. Java, C, PHP, Python, and Cobol are imperative languages, meaning that they follow the instructions that we give them, one after the other. Now if we analyze the DRL snippet (DRL means Drools Rule Language), we are not specifying a sequence of imperative actions. We are specifying situations that are evaluated by the rule engine, so when those situations are detected, the rule consequence (the then section of the rule) is eligible to be executed. Each rule defines a situation that the engine will evaluate. Rules are defined using two sections: the conditional section, which starts with the when keyword that defines the filter that will be applied to the information available inside the rule engine. This example rule contains the following condition: when $p: Person( age > 18 ) This DRL conditional statement filters all the objects inside the rule engine instance that match this condition. This conditional statement means "match for each person whose age is over 18". If we have at least one Person instance that matches this condition, this rule will be activated for that Person instance. A rule that is activated is said to be eligible to be fired. When a rule is fired, the consequence side of the rule is executed. For this example rule, the consequence section looks like this: then $p.setEnabledToDrive(true); update($p); In the rule consequence, you can write any Java code you want. This code will be executed as regular Java code. In this case, we are getting the object that matches the filter—Person ( age > 18 )—that is bonded to the variable called $p and changing one of its attributes. The second line inside the consequence notifies the rule engine of this change so it can be used by other rules. 
A rule is composed of a conditional side, also called Left-Hand Side (LHS for short) and a consequence side, also called Right-Hand Side (RHS for short). rule "Person over 60 – apply discount" when // LHS $p: Person(age > 60) then // RHS $p.setDiscount(40); end We will be in charge of writing these rules and making them available to a rule engine that is prepared to host a large number of rules. To understand the differences and advantages between the following lines, we need to understand how a rule engine works. The first big difference is behavior: we cannot force the rule engine to execute a given rule. The rule engine will pick up only the rules that match with the expressed conditions. if(person.getAge() > 18) And $p: Person( age > 18 ) If we try to compare rules with imperative code, we usually analyze how the declarative nature of rule languages can help us to create more maintainable code. The following example shows how application codes usually get so complicated, that maintaining them is not a simple task: If(…){ If(){ If(){ } }else(){ if(…){ } } } All of the evaluations must be done in a sequence. When the application grows, maintaining this spaghetti code becomes complex—even more so when the logic that it represents needs to be changed frequently to reflect business changes. In our simple example, if the person that we are analyzing is 19 years old, the only rule that will be evaluated and activated is the rule called "over 18 enabled to drive". Imagine that we had mixed and nested if statements evaluating different domain entities in our application. There would be no simple way to do the evaluations in the right order for every possible combination. Business rules offer us a simple and atomic way to describe the situations that we are interested in, which will be analyzed based on the data available. When the number of these situations grows and we need to frequently apply changes to reflect the business reality, a rule engine is a very good alternative to improve readability and maintenance. Rules represent what to do for a specific situation. That's why business rules must be atomic. When we read a business rule, we need to clearly identify what's the condition and exactly what will happen when the condition is true. To finish this quick introduction to the Drools Rule Engine, let's look at the following example: rule "enabled to drive must have a car" When $p: Person( enabledToDrive == true ) not(Car(person == $p)) then insert(new Car($p)); end rule "person with new car must be happy" when $p: Person() $c: Car(person == $p) then $p.setHappy(true); end rule "over 18 enabled to drive" when $p: Person( age > 18, enabledToDrive == false) then $p.setEnabledToDrive(true); update($p); end When you get used to the Drools Rule Language, you can easily see how the rules will work for a given situation. The rule called "over 18 enabled to drive" checks the person's age in order to see if he/she is enabled to drive or not. By default, persons are not enabled to drive. When this rule finds one instance of the Person object that matches with this filter, it will activate the rule; and when the rule's consequence gets executed, the enabledToDrive attribute will be set to true and we will notify the engine of this change. Because the Person instance has been updated, the rule called "enabled to drive must have a car" is now eligible to be fired. Because there is no other active rule, the rule's consequence will be executed, causing the insertion of a new car instance. 
As soon as we insert a new car instance, the last rule's conditions will be true. Notice that the last rule is evaluating two different types of objects as well as joining them. The rule called "person with new car must be happy" is checking that the car belongs to the person with $c: Car(person == $p). As you may imagine, the $p: creates a binding to the object instances that match the conditions for that pattern. In all the examples in this book, I've used the $ sign to denote variables that are being bound inside rules. This is not a requirement, but it is a good practice that allows you to quickly identify variables versus object field filters. Please notice that the rule engine doesn't care about the order of the rules that we provide; it will analyze them by their conditional sections, not by the order in which we provide the rules. This article provides a very simple project implementing this scenario, so feel free to open it from inside the chapter_09 directory and experiment with it. It's called drools5-SimpleExample. This project contains a test class called MyFirstDrools5RulesTest, which tests the previously introduced rules. Feel free to change the order of the rules provided in the /src/test/resources/simpleRules.drl file. Please take a look at the official documentation at www.drools.org to find more about the advantages of using a rule engine. What Drools needs to work If you remember the jBPM5 API introduction section, you will recall the StatefulKnowledgeSession interface that hosts our business processes. This stateful knowledge session is all that we need in order to host and interact with our rules as well. We can run our processes and business rules in the same instance of a knowledge session without any trouble. In order to make our rules available in our knowledge session, we will need to use the knowledge builder to parse and compile our business rules and to create the proper knowledge packages. Now we will use the ResourceType.DRL file instead of the ResourceType.BPMN2 file that we were using for our business processes. So the knowledge session will represent our world. The business rules that we put in it will evaluate all the information available in the context. From our application side, we will need to notify the rule engine which pieces of information will be available to be analyzed by it. In order to inform and interact with the engine, there are four basic methods provided by the StatefulKnowledgeSession object that we need to know. We will be sharing a StatefulKnowledgeSession instance between our processes and our rules. From the rule engine perspective, we will need to insert information to be analyzed. These pieces of information (which are Java objects) are called facts according to the rule engine's terminology. Our rules are in charge of evaluating these facts against our defined conditions. The following four methods become a fundamental piece of our toolbox: FactHandle insert(Object object) void update(FactHandle handle, Object object) void retract(FactHandle handle) int fireAllRules() The insert() method notifies the engine of an object instance that we want to analyze using our rules. When we use the insert() method, our object instance becomes a fact. A fact is just a piece of information that is considered to be true inside the rule engine. Based on this assumption, a wrapper to the object instance will be created and returned from the insert() method. 
This wrapper is called FactHandle and it will allow us to make references to an inserted fact. Notice that the update() and retract() methods use this FactHandle wrapper to modify or remove an object that we have previously inserted. Another important thing to understand at this point is that only top-level objects will be handled as facts, which implies the following: FactHandle personHandle = ksession.insert(new Person()); This sentence will notify the engine about the presence of a new fact, the Person instance. Having the instances of Person as facts will enable us to write rules using the pattern Person() to filter the available objects. What if we have a more complex structure? Here, for example, the Person class defines a list of addresses as: class Person{ private String name; private List<Address> addresses; } In such cases we will need to define if we are interested in making inferences about addresses. If we just insert the Person object instance, none of the addresses instances will be treated as facts by the engine. Only the Person object will be filtered. In other words, a condition such as the following would never be true: when $p: Person() $a: Address() This rule condition would never match, because we don't have any Address facts. In order to make the Address instances available to the engine, we can iterate the person's addresses and insert them as facts. ksession.insert(person); for(Address addr : person.getAddresses()){ ksession.insert(addr); } If our object changes, we need to notify the engine about the changes. For that purpose, the update() method allows us to modify a fact using its fact handler. Using the update() method will ensure that only the rules that were filtering this fact type gets re-evaluated. When a fact is no longer true or when we don't need it anymore, we can use the retract() method to remove that piece of information from the rule engine. Up until now, the rule engine has generated activations for all the rules and facts that match with those rules. No rule's consequence will be executed if we don't call the fireAllRules() method. The fireAllRules() method will first look for activations inside our ksession object and select one. Then it will execute that activation, which can cause new activations to be created or current ones canceled. At this point, the loop begins again; the method picks one activation from the Agenda (where all the activations go) and executes it. This loop goes on until there are no more activations to execute. At that point the fireAllRules() method returns control to our application. The following figure shows this execution cycle: This cycle represents the inference process, since our rules can generate new information (based on the information that is available), and new conclusions can be derived by the end of this cycle. Understanding this cycle is vital in working with the rule engine. As soon as we understand the power of making data inferences as opposed to just plain data validation, the power of the rule engine is unleashed. It usually takes some time to digest the full range of possibilities that can be modeled using rules, but it's definitely worth it. Another characteristic of rule engines that you need to understand is the difference between stateless and stateful sessions. In this book, all the examples use the StatefulKnowledgeSession instance to interact with processes and rules. 
Another characteristic of rule engines that you need to understand is the difference between stateless and stateful sessions. In this book, all the examples use the StatefulKnowledgeSession instance to interact with processes and rules. A stateless session can be considered a very simple StatefulKnowledgeSession that executes the previously described execution cycle just once. Stateless sessions can be used when we only need to evaluate our data once and then dispose of that session, because we are not planning to use it anymore. Most of the time, because the processes are long running and multiple interactions will be required, we need to use a StatefulKnowledgeSession instance. In a StatefulKnowledgeSession, we will be able to go through the previous cycle multiple times, which allows us to introduce more information over time instead of all at the beginning. Just so you know, the StatelessKnowledgeSession instance in Drools exposes an execute() method that internally inserts all the facts provided as parameters, calls the fireAllRules() method, and finally disposes of the session. There have been several discussions about the performance of these two approaches, but there is no performance difference between stateless and stateful sessions in Drools: both perform the same, because StatelessKnowledgeSession uses StatefulKnowledgeSession under the hood.

The last method that we need to know about is the dispose() method provided by the StatefulKnowledgeSession interface. Disposing of the session will release all the references that the session keeps to our domain objects, allowing those objects to be collected by the JVM's garbage collector. As soon as we know that we are not going to use a session anymore, we should dispose of it by using dispose().
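To make the difference concrete, here is a minimal sketch assuming the Drools 5.x API, a rule file named simpleRules.drl on the classpath, and the Person and Car domain classes used in the earlier rules. It builds a knowledge base with the knowledge builder and then uses it through both kinds of session:

import java.util.Arrays;

import org.drools.KnowledgeBase;
import org.drools.KnowledgeBaseFactory;
import org.drools.builder.KnowledgeBuilder;
import org.drools.builder.KnowledgeBuilderFactory;
import org.drools.builder.ResourceType;
import org.drools.io.ResourceFactory;
import org.drools.runtime.StatefulKnowledgeSession;
import org.drools.runtime.StatelessKnowledgeSession;

public class SessionComparison {

    private KnowledgeBase buildKnowledgeBase() {
        // Parse and compile the DRL resource into knowledge packages
        KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
        kbuilder.add(ResourceFactory.newClassPathResource("simpleRules.drl"),
                     ResourceType.DRL);
        if (kbuilder.hasErrors()) {
            throw new IllegalStateException(kbuilder.getErrors().toString());
        }
        KnowledgeBase kbase = KnowledgeBaseFactory.newKnowledgeBase();
        kbase.addKnowledgePackages(kbuilder.getKnowledgePackages());
        return kbase;
    }

    public void statelessExample(Person person, Car car) {
        // One-shot evaluation: execute() inserts the facts, fires all rules
        // and disposes of the session internally.
        StatelessKnowledgeSession session = buildKnowledgeBase().newStatelessKnowledgeSession();
        session.execute(Arrays.asList(person, car));
    }

    public void statefulExample(Person person, Car car) {
        // Long-running evaluation: facts can be added over time and the
        // cycle can be fired as many times as needed.
        StatefulKnowledgeSession ksession = buildKnowledgeBase().newStatefulKnowledgeSession();
        try {
            ksession.insert(person);
            ksession.fireAllRules();
            ksession.insert(car);   // later on, new information arrives
            ksession.fireAllRules();
        } finally {
            ksession.dispose();     // release references to our domain objects
        }
    }
}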

Mobile Devices

Packt
21 Jan 2013
11 min read
(For more resources related to this topic, see here.) So let's get on with it...

Important preliminary points

While you can use the Android emulator for the Android parts of the article, it is highly recommended that you have a real device that you can use. The reason is that the emulator tries to emulate the hardware that phones run on, which means it has to translate instructions into the low-level commands that ARM-based devices understand, and that makes it slow. A real iOS device is not needed, because the iOS simulator simulates a device rather than emulating its hardware, and is therefore significantly faster. The device will also need to have Android 4.0+, better known as Ice Cream Sandwich.

You will need to download the Android app from http://code.google.com/p/selenium/downloads/list. It will be named android-server-<version>.apk where <version> is the latest version.

You will however need to have a machine with OS X on it to start the iOS simulator, since it is part of XCode. If you do not have XCode installed, you can download it via the App Store. You will also need to install all of the command-line tools that come with XCode. You will also need to check out the Selenium code from its source repository. You need to build the WebDriver code for iOS yourself, since it can't be added to the Apple App Store to be downloaded on to devices.

Working with Android

Android devices are becoming commonplace with owners of smartphones and tablets. This is because there are a number of handset providers in the market. This has meant that in some parts of the world, it is the only way that some people can access the Internet. With this in mind, we need to make sure that we can test the functionality.

Emulator

While it is not recommended to use the emulator due to its speed, it can be really useful. Since it will act like a real device, in that it will run all the bits of code that we want on the virtual device, we can see how a web application will react.

Time for action — creating an emulator

If you do not have an Android device that you can use for testing, then you can set up an Android emulator. The emulator will then get the Selenium WebDriver APK installed, and that will control the browser on the device. Before we start, you will need to download the Android SDK from http://developer.android.com/sdk/index.html.

Open up a command prompt or a terminal. Enter cd <path>/android-sdk/tools where <path> is the path to the android-sdk directory. Now enter:

./android create avd -n my_android -t 14

where:
-n my_android gives the emulator the name my_android.
-t 14 tells it which version of Android to use. 14 and higher is Android 4 and higher support.

When prompted Do you wish to create a custom hardware profile [no], enter no. Run the emulator with:

./emulator -avd my_android &

It will take some time to come up, but once it has been started, you will not have to restart it unless it crashes or you purposefully close it. Once loaded you should see something like the following:

What just happened?

We have just seen what is involved in setting up the Android emulator that we can use for testing of mobile versions of our applications. As was mentioned, we need to make sure that we set up the emulator to work with Android 4.0 or later. For the emulator we need to have a target platform of 14 or later. Now that we have this done, we can have a look at installing the WebDriver Server on the device.

Installing the Selenium WebDriver Android Server

We have seen that we can access different machines and control the browsers on those machines with Selenium WebDriver RemoteDriver.
We need to do the same with Android. The APK file that you downloaded earlier is the Selenium Server that is specifically designed for Android devices. It has a smaller memory footprint, since mobile devices do not have the same amount of memory as your desktop machine. We need to install this on the emulator or the physical device that you have.

Time for action — installing the Android Server

In this section, we will learn the steps required to install the Android server on the device or emulator that you are going to be using. To do this, you will need to have downloaded the APK file from http://code.google.com/p/selenium/downloads/list. If you are installing this onto a real device, make sure that you allow installs from Unknown Sources.

Open a command prompt or a terminal. Start the emulator or device if you haven't already. We need to list the available devices:

<path to>/android_sdk/platform-tools/adb devices

Take the serial number of the device. Now we need to install. We do that with the following command:

adb -s <serialId> -e install -r android-server.apk

Once that is done, you will see a confirmation in the command prompt or terminal, and the Selenium server app will appear on the device.

What just happened?

We have just seen how we can install the Android Server on the device. This process is useful for installing any Android app from the command line. Now that this is done, we are ready to start looking at running some Selenium WebDriver code against the device.

Creating a test for Android

Now that we have looked at getting the device or emulator ready, we are ready to start creating a test that will work against a site. The good thing about Selenium WebDriver, like Selenium RC, is that we can easily move from browser to browser with only a small change. In this section, we are going to be introduced to the AndroidDriver.

Time for action — using the Android driver

In this section we are going to be looking at running some tests against an Android device or emulator. This should be a fairly simple change to our test, but there are a couple of things that we need to do right before the test runs.

Open a command prompt or terminal. We need to start the server. We can do this by touching the app, or we can do it from the command line with the following command:

adb -s <serialId> shell am start -a android.intent.action.MAIN -n org.openqa.selenium.android.app/.MainActivity

We now need to forward all the HTTP traffic to the device or emulator. This means that all the JSON Wire Protocol calls, which we learnt about earlier, go to the device. We do it with:

adb -s <serialId> forward tcp:8080 tcp:8080

Now we are ready to update our test. I will show an example from the previous test:

import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.android.AndroidDriver;

public class TestChapter7 {

    WebDriver driver;

    @Before
    public void setUp(){
        driver = new AndroidDriver();
        driver.get("http://book.theautomatedtester.co.uk/chapter4");
    }

    @After
    public void tearDown(){
        driver.quit();
    }

    @Test
    public void testExamples(){
        WebElement element = driver.findElement(By.id("nextBid"));
        element.sendKeys("100");
    }
}

Run the test. You will see that it runs the same test against the Android device.

What just happened?

We have just run our first test against an Android device. We saw that we had to forward the HTTP traffic to port 8080 to the device. This means that the normal calls, which use the JSON Wire Protocol, will then be run on the device.
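As a side note, AndroidDriver is essentially a remote driver talking to the android-server app over the forwarded port. If you prefer, the same test can point a RemoteWebDriver at that endpoint; the following is a minimal sketch assuming a Selenium 2.x client where DesiredCapabilities.android() is available:

import java.net.URL;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

public class AndroidViaRemote {

    public WebDriver createDriver() throws Exception {
        // The android-server app exposes the JSON Wire Protocol on port 8080,
        // which we forwarded to the device with "adb forward tcp:8080 tcp:8080".
        return new RemoteWebDriver(new URL("http://localhost:8080/wd/hub"),
                                   DesiredCapabilities.android());
    }
}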
Currently Opera Software is working on getting OperaDriver to work on mobile devices. There are a few technical details that are being worked on, and hopefully in the future we will be able to use it. Mozilla is also working on their solution for mobile with Selenium. Currently a project called Marionette is being worked on that allows Selenium to work on Firefox OS, Firefox Mobile for Android, as well as Firefox for Desktop. You can read up on it at https://wiki.mozilla.org/Auto-tools/Projects/Marionette.

Have a go hero — updating tests for Android

Have a look at updating all of the tests that you would have written so far in the book to run on Android. It should not take you long to update them.

Running with OperaDriver on a mobile device

In this section we are going to have a look at using the OperaDriver, the Selenium WebDriver object to control Opera, in order to drive Opera Mobile. Opera has a large market share on mobile devices, especially on lower-end Android devices. Before we start, we are going to need to download a special emulator for Opera Mobile. As of writing this, it has just come out of Opera's Labs, so the download links may have been updated.

Windows: http://www.opera.com/download/get.pl?id=34969&sub=true&nothanks=yes&location=360
Mac: http://www.opera.com/download/get.pl?id=34970&sub=true&nothanks=yes&location=360
Linux 64 Bit:
Deb: http://www.opera.com/download/get.pl?id=34967&sub=true&nothanks=yes&location=360
Tarball: http://www.opera.com/download/get.pl?id=34968&sub=true&nothanks=yes&location=360
Linux 32 Bit:
Deb: http://www.opera.com/download/get.pl?id=34965&sub=true&nothanks=yes&location=360
Tarball: http://www.opera.com/download/get.pl?id=34966&sub=true&nothanks=yes&location=360

Let's now see this in action.

Time for action — using OperaDriver on Opera Mobile

To make sure that we have the right amount of coverage over the browsers that users may be using, there is a good chance that you will need to add Opera Mobile. Before starting, make sure that you have downloaded the version of the emulator for your operating system with one of the links mentioned previously.

Create a new test file. Add the following code to it:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

public class TestChapter7OperaMobile {
    WebDriver driver;
}

What we now need to do is add a setup method. We will have to add a couple of items to our DesiredCapabilities object. This will tell OperaDriver that we want to work against a mobile version.
@Before
public void setUp(){
    DesiredCapabilities c = DesiredCapabilities.opera();
    c.setCapability("opera.product", OperaProduct.MOBILE);
    c.setCapability("opera.binary", "/path/to/my/custom/opera-mobile-build");
    driver = new OperaDriver(c);
}

Now we can add a test to make sure that we have a working test again:

@Test
public void testShouldLoadGoogle() {
    driver.get("http://www.google.com");
    //Let's find an element to see if it works
    driver.findElement(By.name("q"));
}

Let's now add a teardown:

@After
public void teardown(){
    driver.quit();
}

Your class altogether should look like the following:

import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.remote.DesiredCapabilities;
// OperaDriver and OperaProduct ship with the OperaDriver library
import com.opera.core.systems.OperaDriver;
import com.opera.core.systems.OperaProduct;

public class TestChapter7OperaMobile {

    WebDriver driver;

    @Before
    public void setUp(){
        DesiredCapabilities c = DesiredCapabilities.opera();
        c.setCapability("opera.product", OperaProduct.MOBILE);
        c.setCapability("opera.binary", "/path/to/my/custom/opera-mobile-build");
        driver = new OperaDriver(c);
    }

    @After
    public void teardown(){
        driver.quit();
    }

    @Test
    public void testShouldLoadGoogle() {
        driver.get("http://book.theautomatedtester.co.uk");
    }
}

And the following should appear in your emulator:

What just happened?

We have just seen what is required to run a test against Opera Mobile using OperaDriver. This uses the same communication layer, called Scope, that is used when communicating with the Opera desktop browser. We will see the mobile versions of web applications, if they are available, and be able to interact with them.

If you would like OperaDriver to load up a tablet-size UI, you can add the following to use the tablet UI with a display of 1280x800 pixels. This is a common size for tablets that are currently on the market.

c.setCapability("opera.arguments", "-tabletui -displaysize 1280x800");

If you want to see the current orientation of the device and to access the touch screen elements, you can swap the OperaDriver object for OperaDriverMobile. For the most part, you should be able to do nearly all of your work against the normal driver.
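If you want the tablet UI from the start, the capability shown above can simply be set in the same setUp() method. The following small sketch pulls the snippets from this section together; the com.opera.core.systems package names for OperaDriver and OperaProduct are assumptions and may differ between OperaDriver versions:

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;

// Assumed package names; adjust to match your OperaDriver version
import com.opera.core.systems.OperaDriver;
import com.opera.core.systems.OperaProduct;

public class OperaTabletSetup {

    public WebDriver createTabletDriver() {
        DesiredCapabilities c = DesiredCapabilities.opera();
        c.setCapability("opera.product", OperaProduct.MOBILE);
        c.setCapability("opera.binary", "/path/to/my/custom/opera-mobile-build");
        // Ask the emulator for the tablet UI at a typical tablet resolution
        c.setCapability("opera.arguments", "-tabletui -displaysize 1280x800");
        return new OperaDriver(c);
    }
}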

Creating Bar Charts

Packt
14 Jan 2013
10 min read
(For more resources related to this topic, see here.) Drawing a bar chart with Flex The Flex framework offers some charting components that are fairly easy to use. It is not ActionScript per say, but it still compiles to the SWF format. Because the resulting charts look good and are pretty customizable, we decided to cover it in one recipe. There is a downside though to using this: the Flex framework will be included in your SWF, which will increase its size. Future recipes will explain how to do the same thing using just ActionScript. Getting ready Open FlashDevelop and create a new Flex Project. How to do it... The following are the steps required to build a bar chart using the Flex framework. Copy and paste the following code in the Main.mxml file. When you run it, it will show you a bar chart. <?xml version="1.0" encoding="utf-8"?> <s:Application minWidth="955" minHeight="600"> <fx:Script> <![CDATA[ import mx.collections.ArrayCollection; [Bindable] private var monthsAmount:ArrayCollection = new ArrayCollection( [ { Month: "January", Amount: 35}, { Month: "February", Amount: 32 }, { Month: "March", Amount: 27 } ]); ]]> </fx:Script> <mx:BarChart id="barchart" x="30" y="30" dataProvider="{monthsAmount}"> <mx:verticalAxis> <mx:CategoryAxis categoryField="Month"/> </mx:verticalAxis> <mx:horizontalAxis> <mx:LinearAxis minimum="10"/> </mx:horizontalAxis> <mx:series> <mx:BarSeries yField="Month" xField="Amount" /> </mx:series> </mx:BarChart> </s:Application> How it works... When you create a new Flex project, Flash Builder will generate for you the XML file and the Application tag. After that, in the script tag we created the data we will need to show in the chart. We do so by creating an ArrayCollection data structure, which is an array encapsulated to be used as DataProvider for multiple components of the Flex framework, in this case mx:BarChart. Once we have the data part done, we can start creating the chart. Everything is done in the BarChart tag. Inside that tag you can see we linked it with ArrayCollection, which we previously created using this code: dataProvider = "{monthsAmount}". Inside the BarChart tag we added the verticalAxis tag. This tag is used to associate values in the ArrayCollection to an axis. In this case we say that the values of the month will be displayed on the vertical axis. Next comes the horizontalAxis tag, we added it to tell the chart to use 10 as a minimum value for the horizontal axis. It's optional, but if you were to remove the tag it would use the smallest value in ArrayCollection as the minimum for the axis, so one month, in this case, March, would have no bar and the bar chart wouldn't look as good. Finally, the series tag will tell for a column, what data to use in ArrayCollection. You can basically think of the series as representing the bars in the chart. There's more... As we mentioned earlier, this component of the Flex framework is pretty customizable and you can use it to display multiple kinds of bar charts. Showing data tips Multiple options are available using this component; if you want to display the numbers that the bar represents in the chart while the user moves the mouse over the bar, simply add showDataTips = "true" inside the BarChart tag and it is done. Displaying vertical bars If you would like to use vertical bars instead of horizontal bars in the graph, Flex provides the ColumnChart charts to do so. In the previous code, change the BarChart tag to ColumnChart, and change BarSeries to ColumnSeries. 
Also, since the vertical axis and horizontal axis will be inverted, you will need verticalAxis by horizontalAxis and horizontalAxis by verticalAxis (switch them, but keep their internal tags) and in the ColumnSeries tag, xField should be Month and yField should be Amount. When you run that code it will show vertical bars. Adding more bars By adding more data in the ArrayCollection data structure and by adding another BarSeries tag, you can display multiple bars for each month. See the Adobe documentation at the following link to learn how to do it: http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/mx/charts/BarChart.html. Building vertical bar charts Now that we have built a bar chart using Flex, we are ready to do the same in pure ActionScript. This bar chart version will allow you to expand it in multiple ways and will remove the weight that the Flex framework adds to the file size. Now a bit about bar charts; Bar charts are good when you don't have too much data (more than 20 bars starts to make a big chart), or when you've averaged it. It is a quick way to compare data visually. Getting ready All we will need for this is to start a new project in FlashDevelop. Also, it would help to read about preparing data and about axes in the book ActionScript Graphing Cookbook. How to do it... This section will refer a lot to the code provided with the book. You will notice that we divided all the elements in the charts into their own classes. It all starts in the Main.as file, where we create the data that we will use to display in the chart after that we just create the chart and add it to the display list. var data:Vector.<BarData> = new Vector.<BarData>(); data.push(new BarData("January", 60)); data.push(new BarData("February", 100)); data.push(new BarData("March", 30)); var chart:BarChart = new BarChart(data, 400, 410); chart.x = 30; chart.y = 30; addChild(chart); From here you can look into the BarData class, which it is just two variables, a string and a number that represents the data that we are going to show. We now need to create a class for all the elements that comprise a bar chart. They are: the bars, the vertical axis, and the horizontal axis. Now this recipe is building a vertical bar chart so the vertical axis is the one that will have numerical marks and the horizontal axis will have labels on the marks. First the Bar class: This class will only draw a rectangle with the height representing the data for a certain label. The following is its constructor: public function Bar(width:int, height:int) { graphics.beginFill(0xfca25a); graphics.drawRect(-width/2, 0, width, -height); graphics.endFill(); } The horizontal axis will take the x coordinate of the created bars and will place a label under it. 
public function HorizontalAxis(listOfMark:Vector.<Number>, data:Vector.<BarData>, width:Number) { drawAxisLine(new Point(0, 0), new Point(width, 0)); for (var i:int = 0; i < listOfMark.length; i++) { drawAxisLine(new Point(listOfMark[i], -3), new Point(listOfMark[i], 3)); var textField:TextField = new TextField(); textField.text = data[i].label; textField.width = textField.textWidth + 5; textField.height = textField.textHeight + 3; textField.x = listOfMark[i] - textField.width / 2; textField.y = 5; addChild(textField); } } Now the vertical axis will make 10 marks at regular interval and will add a label with the associated value in it: for (var i:int = 0; i < _numberOfMarks; i++) { drawAxisLine(new Point( -3, (i + 1) * -heightOfAxis / _ numberOfMarks ), new Point(3, (i + 1) * -heightOfAxis / _ numberOfMarks)); var textField:TextField = new TextField(); textField.text = String(((i + 1) / (_numberOfMarks)) * maximumValue ); textField.width = textField.textWidth + 5; textField.height = textField.textHeight + 3; textField.x = -textField.width - 3; textField.y = (i + 1) * -heightOfAxis / _numberOfMarks - textField.height / 2; addChild(textField); } Finally, the BarChart class will take the three classes we just created and put it all together. By iterating through all the data, it will find the maximum value, so that we know what range of values to put on the vertical axis. var i:int; var maximumValue:Number = data[0].data; for (i = 1; i < data.length; i++) { if (data[i].data > maximumValue) { maximumValue = data[i].data; } } After that we create each bar, notice that we also keep the position of each bar to give it to the horizontal axis thereafter: var listOfMarks:Vector.<Number> = new Vector.<Number>(); var bar:Bar; for (i = 0; i < data.length; i++) { bar = new Bar(_barWidth, data[i].data * scaleHeight); bar.x = MARGIN + _barSpacing + _barWidth / 2 + i * (_barWidth + _barSpacing); listOfMarks.push(bar.x - MARGIN); bar.y = height - MARGIN; addChild(bar); } Now all we have left to do is create the axes and then we are done; this is done really easily as shown in the following code: _horizontalAxis = new HorizontalAxis(listOfMarks, data, width - MARGIN); _horizontalAxis.x = MARGIN; _horizontalAxis.y = height - MARGIN; addChild(_horizontalAxis); _verticalAxis = new VerticalAxis(height - MARGIN, maximumValue); _verticalAxis.x = MARGIN; _verticalAxis.y = height -MARGIN; addChild(_verticalAxis); How it works... So we divided all the elements into their own classes because this will permit us to extend and modify them more easily in the future. So let's begin where it all starts, the data. Well, our BarChart class accepts a vector of BarData as an argument. We did this so that you could add as many bars as you want and the chart would still work. Be aware that if you add many bars, you might have to give more width to the chart so that it can accommodate them. You can see in the code, that the width of the bar of determined by the width of the graph divided by the number bars. We decided that 85 percent of that value would be given to the bars and 15 percent would be given to the space between the bars. Those values are arbitrary and you can play with them to give different styles to the chart. Also the other important step is to determine what our data range is. We do so by finding what the maximum value is. 
For simplicity, we assume that the values will start at 0, but the validity of a chart is always relative to the data, so if there are negative values it wouldn't work, but you could always fix this. So when we found our maximum value, we can decide for a scale for the rest of the values. You can use the following formula for it: var scaleHeight:Number = (height - 10) / maximumValue; Here, height is the height of the chart and 10 is just a margin we leave to the graph to place the labels. After that, if we multiply that scale by the value of the data, it will give us the height of each bar and there you have it, a completed bar chart. There's more... We created a very simple version of a bar chart but there are numerous things we could do to improve it. Styling, interactivity, and the possibility of accommodating a wider range of data are just some examples. Styling This basic chart could use a little bit of styling. By modifying the color of the bars, the font of the labels, and by adding a drop shadow to the bars, it could be greatly enhanced. You could also make all of them dynamic so that you could specify them when you create a new chart. Interactivity It would be really good to show the values for the bars when you move the mouse over them. Right now you can kind of get an idea of which one is the biggest bar but that is all. If this feature is implemented, you can get the exact value. Accommodating a wider data range As we explained earlier, we didn't account for all the data range. Values could be very different; some could be negative, some could be very small (between 0 and 1), or you would want to set the minimum and maximum value of the vertical axes. The good thing here is that you can modify the code to better fit your data.

Delving Deep into Application Design

Packt
11 Jan 2013
14 min read
(For more resources related to this topic, see here.) Before we get started, please note the folder structure that we'll be using. This will help you quickly find the files referred to in each recipe. Executables for every sample project will be output in the bin/debug or bin/release folders depending on the project's build configuration. These folders also contain the following required DLLs and configuration files: File name   Description   OgreMain.dll   Main Ogre DLL.   RenderSystem_Direct3D9.dll   DirectX 9 Ogre render system DLL. This is necessary only if you want Ogre to use the DirectX 9 graphics library.   RenderSystem_GL.dll   OpenGL Ogre render system DLL. This is necessary only if you want Ogre to use the OpenGL graphics library.   Plugin_OctreeSceneManager.dll   Octree scene manager Ogre plugin DLL.   Plugin_ParticleFX.dll   Particle effects Ogre plugin DLL.   ogre.cfg   Ogre main configuration file that includes render system settings.   resources.cfg   Ogre resource configuration file that contains paths to all resource locations. Resources include graphics files, shaders, material files, mesh files, and so on.   plugins.cfg   Ogre plugin configuration file that contains a list of all the plugins we want Ogre to use. Typical plugins include the Plugin_OctreeSceneManager, RenderSystem_Direct3D9, RenderSystem_ GL, and so on.   In the bin/debug folder, you'll notice that the debug versions of the Ogre plugin DLLs all have a _d appended to the filename. For example, the debug version of OgreMain.dll is OgreMain_d.dll. This is the standard method for naming debug versions of Ogre DLLs. The media folder contains all the Ogre resource files, and the OgreSDK_vc10_v1-7-1 folder contains the Ogre header and library files. Creating a Win32 Ogre application The Win32 application is the leanest and meanest of windowed applications, which makes it a good candidate for graphics. In this recipe, we will create a simple Win32 application that displays a 3D robot model that comes with Ogre, in a window. Because these steps are identical for all Win32 Ogre applications, you can use the completed project as a starting point for new Win32 applications. Getting ready To follow along with this recipe, open the solution located in the Recipes/Chapter01/OgreInWin32 folder in the code bundle available on the Packt website. How to do it... We'll start off by creating a new Win32 application using the Visual C++ Win32 application wizard. Create a new project by clicking on File | New | Project. In the New Project dialog-box, expand Visual C++, and click on Win32 Project. Name the project OgreInWin32. For Location, browse to the Recipes folder and append Chapter_01_Examples, then click on OK. In the Win32 Application Wizard that appears, click on Next. For Application type, select Windows application, and then click on Finish to create the project. At this point, we have everything we need for a bare-bones Win32 application without Ogre. Next, we need to adjust our project properties, so that the compiler and linker know where to put our executable and find the Ogre header and library files. Open the Property Pages dialog-box, by selecting the Project menu and clicking on Properties. Expand Configuration Properties and click on General. Set Character Set to Not Set. Next, click on Debugging. Select the Local Windows Debugger as the Debugger to launch, then specify the Command for starting the application as ......bindebug$(TargetName)$(TargetExt). 
Each project property setting is automatically written to a per-user file with the extension .vcxproj.user, whenever you save the solution. Next we'll specify our VC++ Directories, so they match our Cookbook folder structure. Select VC++ Directories to bring up the property page where we'll specify general Include Directories and Library Directories. Click on Include Directories, then click on the down arrow button that appears on the right of the property value, and click on <edit>. In the Include Directories dialog-box that appears, click on the first line of the text area, and enter the relative path to the Boost header files: ......OgreSDK_vc10_v1-7-1boost_1_42. Click on the second line, and enter the relative path to the Ogre header files ......OgreSDK_vc10_v1-7-1includeOGRE, and click OK. Edit the Library Directories property in the same way. Add the library directory ......OgreSDK_vc10_v1-7-1boost_1_42lib for Boost, and ......OgreSDK_vc10_v1-7-1libdebug for Ogre, then click OK. Next, expand the Linker section, and select General. Change the Output File property to ......bindebug$(TargetName)$(TargetExt). Then, change the Additional Library Directories property to ......OgreOgreSDK_vc10_v1-7-1libdebug. Finally, provide the linker with the location of the main Ogre code library. Select the Input properties section, and prepend OgreMain_d.lib; at the beginning of the line. Note that if we were setting properties for the release configuration, we would use OgreMain.lib instead of OgreMain_d.lib. Now that the project properties are set, let's add the code necessary to integrate Ogre in our Win32 application. Copy the Engine.cpp and Engine.h files from the Cookbook sample files to your new project folder, and add them to the project. These files contain the CEngine wrapper class that we'll be using to interface with Ogre. Open the OgreInWin32.cpp file, and include Engine.h, then declare a global instance of the CEngine class, and a forward declaration of our InitEngine() function with the other globals at the top of the file. CEngine *m_Engine = NULL; void InitEngine(HWND hWnd); Next, create a utility function to instantiate our CEngine class, called void InitEngine(HWND hWnd){ m_Engine = new CEngine(hWnd); } Then, call InitEngine() from inside the InitInstance() function, just after the window handle has been created successfully, as follows: hWnd = CreateWindow(szWindowClass, szTitle, WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, 0, CW_USEDEFAULT, 0, NULL, NULL, hInstance, NULL); if (!hWnd){ return FALSE; } InitEngine(hWnd); Our last task is to render the 3D scene and display it in the window when we receive a WM_PAINT message. Add a call to renderOneFrame() to the WndProc() function, as follows: case WM_PAINT: hdc = BeginPaint(hWnd, &ps); m_Engine->m_Root->renderOneFrame(); EndPaint(hWnd, &ps); break; And that's it! How it works... Let's look at the CEngine class to see how we create and initialize an instance of the Ogre engine, and add a camera and robot model to the scene. Open Engine.cpp, and look at the constructor for CEngine. In the constructor, we create an instance of the Ogre engine, and store it in the m_Root class member variable. m_Root = new Ogre::Root("", "", Ogre::String(ApplicationPath + Ogre::String("OgreInWin32.log"))); An instance of Ogre::Root must exist before any other Ogre functions are called. 
The first parameter to the constructor is the plugins configuration filename, which defaults to plugins.cfg, but we pass it an empty string because we are going to load that file manually later. The second parameter is the main configuration filename, which defaults to ogre.cfg, but we pass it an empty string, also because we'll be loading that file manually as well. The third parameter is the name of the log file where Ogre will write the debugging and the hardware information. Once the Ogre::Root instance has been created, it can be globally accessed by Root::getSingleton(), which returns a reference or Root::getSingletonPtr(), which returns a pointer. Next, we manually load the configuration file ogre.cfg, which resides in the same directory as our application executable. OgreConfigFile.load(Ogre::String(ApplicationPath + Ogre::String("ogre.cfg")), "t:=", false); The ogre.cfg configuration file contains Ogre 3D engine graphics settings and typically looks as follows: # Render System indicates which of the render systems # in this configuration file we'll be using. Render System=Direct3D9 Rendering Subsystem [Direct3D9 Rendering Subsystem] Allow NVPerfHUD=No Anti aliasing=None Floating-point mode=Fastest Full Screen=Yes Rendering Device=NVIDIA GeForce 7600 GS (Microsoft Corporation - WDDM) VSync=No Video Mode=800 x 600 @ 32-bit colour [OpenGL Rendering Subsystem] Colour Depth=32 Display Frequency=60 FSAA=0 Full Screen=Yes RTT Preferred Mode=FBO VSync=No Video Mode=1024 x 768 Once the main configuration file is loaded, we manually load the correct render system plugin and tell Ogre which render system to use. Ogre::String RenderSystemName; RenderSystemName = OgreConfigFile.getSetting("Render System"); m_Root->loadPlugin("RenderSystem_Direct3D9_d); Ogre::RenderSystemList RendersList = m_Root->getAvailableRenderers(); m_Root->setRenderSystem(RendersList[0]); There's actually a little more code in Engine.cpp for selecting the correct render system plugin to load, but for our render system settings the RenderSystem_Direct3D9_d plugin is all we need. Next, we load the resources.cfg configuration file. Ogre::ConfigFile cf; Ogre::String ResourcePath = ApplicationPath + Ogre::String("resources. cfg"); cf.load(ResourcePath); The resources.cfg file contains a list of all the paths where Ogre should search for graphic resources. Then, we go through all the sections and settings in the resource configuration file, and add every location to the Ogre resource manager. Ogre::ConfigFile::SectionIterator seci = cf.getSectionIterator(); Ogre::String secName, typeName, archName; while (seci.hasMoreElements()){ secName = seci.peekNextKey(); Ogre::ConfigFile::SettingsMultiMap *settings = seci.getNext(); Ogre::ConfigFile::SettingsMultiMap::iterator i; for(i = settings->begin(); i != settings->end(); ++i){ typeName = i->first; archName = i->second; archName = ApplicationPath + archName; Ogre::ResourceGroupManager::getSingleton(). addResourceLocation(archName, typeName, secName); } } Now, we are ready to initialize the engine. m_Root->initialise(false); We pass in false to the initialize() function, to indicate that we don't want Ogre to create a render window for us. We'll be manually creating a render window later, using the hWnd window handle from our Win32 Application. Every graphics object in the scene including all meshes, lights, and cameras are managed by the Ogre scene manager. There are several scene managers to choose from, and each specializes in managing certain types of scenes of varying sizes. 
Some scene managers support rendering vast landscapes, while others are best for enclosed spaces. We'll use the generic scene manager for this recipe, because we don't need any extra features. m_SceneManager = m_Root->createSceneManager(Ogre::ST_GENERIC, "Win32Ogre"); Remember when we initialized Ogre::Root, and specifically told it not to auto-create a render window? We did that because we create a render window manually using the externalWindowHandle parameter. Ogre::NameValuePairList params; params["externalWindowHandle"] = Ogre::StringConverter::toString((long)hWnd); params["vsync"] = "true"; RECT rect; GetClientRect(hWnd, &rect); Ogre::RenderTarget *RenderWindow = NULL; try{ m_RenderWindow = m_Root->createRenderWindow("Ogre in Win32", rect. right - rect.left, rect.bottom - rect.top, false, &params); } catch(...){ MessageBox(hWnd, "Failed to create the Ogre::RenderWindownCheck that your graphics card driver is up-to-date", "Initialize Render System", MB_OK | MB_ICONSTOP); exit(EXIT_SUCCESS); } As you have probably guessed, the createRenderWindow() method creates a new RenderWindow instance. The first parameter is the name of the window. The second and third parameters are the width and height of the window, respectively. The fourth parameter is set to false to indicate that we don't want to run in full-screen mode. The last parameter is our NameValuePair list, in which we provide the external window handle for embedding the Ogre renderer in our application window. If we want to see anything, we need to create a camera, and add it to our scene. The next bit of code does just that. m_Camera = m_SceneManager->createCamera("Camera"); m_Camera->setNearClipDistance(0.5); m_Camera->setFarClipDistance(5000); m_Camera->setCastShadows(false); m_Camera->setUseRenderingDistance(true); m_Camera->setPosition(Ogre::Vector3(200.0, 50.0, 100.0)); Ogre::SceneNode *CameraNode = NULL; CameraNode = m_SceneManager->getRootSceneNode()- >createChildSceneNode("CameraNode"); First, we tell the scene manager to create a camera, and give it the highly controversial name Camera. Next, we set some basic camera properties, such as the near and far clip distances, whether to cast shadows or not, and where to put the camera in the scene. Now that the camera is created and configured, we still have to attach it to a scene node for Ogre to consider it a part of the scene graph, so we create a new child scene node named CameraNode, and attach our camera to that node. The last bit of the camera-related code involves us telling Ogre that we want the content for our camera to end up in our render window. We do this by defining a viewport that gets its content from the camera, and displays it in the render window. Ogre::Viewport* Viewport = NULL; if (0 == m_RenderWindow->getNumViewports()){ Viewport = m_RenderWindow->addViewport(m_Camera); Viewport->setBackgroundColour(Ogre::ColourValue(0.8f, 1.0f, 0.8f)); } m_Camera->setAspectRatio(Ogre::Real(rect.right - rect.left) / Ogre::Real(rect.bottom - rect.top)); The first line of code checks whether we have already created a viewport for our render window or not; if not, it creates one with a greenish background color. We also set the aspect ratio of the camera to match the aspect ratio of our viewport. Without setting the aspect ratio, we could end up with some really squashed or stretched-looking scenes. You may wonder why you might want to have multiple viewports for a single render window. 
Consider a car racing game where you want to display the rear view mirror in the top portion of your render window. One way to accomplish, this would be to define a viewport that draws to the entire render window, and gets its content from a camera facing out the front windshield of the car, and another viewport that draws to a small subsection of the render window and gets its content from a camera facing out the back windshield. The last lines of code in the CEngine constructor are for loading and creating the 3D robot model that comes with the Ogre SDK. Ogre::Entity *RobotEntity = m_SceneManager->createEntity("Robot", "robot.mesh"); Ogre::SceneNode *RobotNode = m_SceneManager->getRootSceneNode()- >createChildSceneNode(); RobotNode->attachObject(RobotEntity); Ogre::AxisAlignedBox RobotBox = RobotEntity->getBoundingBox(); Ogre::Vector3 RobotCenter = RobotBox.getCenter(); m_Camera->lookAt(RobotCenter); We tell the scene manager to create a new entity named Robot, and to load the robot.mesh resource file for this new entity. The robot.mesh file is a model file in the Ogre .mesh format that describes the triangles, textures, and texture mappings for the robot model. We then create a new scene node just like we did for the camera, and attach our robot entity to this new scene node, making our killer robot visible in our scene graph. Finally, we tell the camera to look at the center of our robot's bounding box. Finally, we tell Ogre to render the scene. m_Root->renderOneFrame(); We also tell Ogre to render the scene in OgreInWin32.cpp whenever our application receives a WM_PAINT message. The WM_PAINT message is sent when the operating system, or another application, makes a request that our application paints a portion of its window. Let's take a look at the WM_PAINT specific code in the WndProc() function again. case WM_PAINT: hdc = BeginPaint(hWnd, &ps); m_Engine->m_Root->renderOneFrame(); EndPaint(hWnd, &ps); break; The BeginPaint() function prepares the window for painting, and the corresponding EndPaint() function denotes the end of painting. In between those two calls is the Ogre function call to renderOneFrame(), which will draw the contents of our viewport in our application window. During the renderOneFrame() function call, Ogre gathers all the objects, lights, and materials that are to be drawn from the scene manager based on the camera's frustum or visible bounds. It then passes that information to the render system, which executes the 3D library function calls that run on your system's graphics hardware, to do the actual drawing on a render surface. In our case, the 3D library is Direct X and the render surface is the hdc, or Handle to the device context, of our application window. The result of all our hard work can be seen in the following screenshot: Flee in terror earthling! There's more... If you want to use the release configuration instead of debug, change the Configuration type to Release in the project properties, substitute the word release where you see the word debug in this recipe, and link the OgreMain.lib instead of OgreMain_d.lib in the linker settings. It is likely that at some point you will want to use a newer version of the Ogre SDK. If you download a newer version and extract it to the Recipes folder, you will need to change the paths in the project settings so that they match the paths for the version of the SDK you downloaded.

More on ADF Business Components and Fusion Page Runtime

Packt
09 Jan 2013
11 min read
(For more resources related to this topic, see here.) Lifecycle of an ADF Fusion web page with region When a client requests for a page with region, at the server, ADF runtime intercepts and pre-processes the request before passing it to the page lifecycle handler. The pre-processing tasks include security check, initialization of Trinidad runtime, and setting up the binding context and ADF context. This is shown in the following diagram: After setting up the context for processing the request, the ADF framework starts the page lifecycle for the page. During the Before Restore View phase of the page, the framework will try to synchronize the controller state with the request, using the state token sent by the client. If this is a new request, a new root view port is created for the top-level page. In simple words, a view port maps to a page or page fragment in the current view. During view port initialization, the framework will build a data control frame for holding the data controls. During this phrase runtime also builds the binding containers used in the current page. The data control frame will then be added to the binding context object for future use. After setting up the basic infrastructure required for processing the request, the page lifecycle moves to the Restore View phase. During the Restore View phase, the framework generates a component tree for the page. Note that the UI component tree, at this stage, contains only metadata for instantiating the UI components. The component instantiation happens only during the Render Response phase, which happens later in the page lifecycle. If this is a fresh request, the lifecycle moves to the Render Response phase. Note that, in this article, we are not discussing how the framework handles the post back requests from the client. During the Render Response phase, the framework instantiates the UI components for each node in the component tree by traversing the tree hierarchy. The completed component tree is appended to UIViewRoot, which represents the root of the UI component tree. Once the UI components are created, runtime walks through the component tree and performs the pre-rendering tasks. The pre-rendering event is used by components with lazy initialization abilities, such as region, to keep themselves ready for rendering if they are added to the component tree during the page cycle. While processing a region, the framework creates a new child view port and controller state for the region, and starts processing the associated task flow. The following is the algorithm used by the framework while initializing the task flow: If the task flow is configured not to share data control, the framework creates a new data control frame for the task flow and adds to the parent data control frame (the data control frame for the parent view port). If the task flow is configured to start a new transaction, the framework calls beginTransaction() on the control frame. If the task flow is configured to use existing transaction, the framework asks the data control frame to create a save point and to associate it to the page flow stack. If the task flow is configured to 'use existing transaction if possible', framework will start a new transaction on the data control, if there is no transaction opened on it. If a transaction is already opened on the data control, the framework will use the existing one. Once the pre-render processing is over, each component will be asked to write out its value into the response object. 
During this action, the framework will evaluate the EL expressions specified for the component properties, whenever they are referred in the page lifecycle. If the EL expressions contain binding expression referring properties of the business components, evaluation of the EL will end up in instantiating corresponding model components. The framework performs the following tasks during the evaluation of the model-bound EL expressions: It instantiates the data control if it is missing from the current data control frame. It performs a check out of the application module. It attaches the transaction object to the application module. Note that it is the transaction object that manages all database transactions for an application module. Runtime uses the following algorithm for attaching transactions to the application module: If the application module is nested under a root application module or if it is used in a task flow that has been configured to use an existing transaction, the framework will identify the existing DBTransaction object that has been created for the root application module or for the calling task flow, and attach it to the current application module. Under the cover, the framework uses the jbo.shared.txn parameter (named transaction) to share the transaction between the application modules. In other words, if an application module needs to share a transaction with another module, the framework assigns the same jbo.shared.txn value for both application modules at runtime. While attaching the transaction to the application module, runtime will look up the transaction object by using the jbo.shared.txn value set for the application module and if any transaction object is found for this key, it re-uses the same. If the application module is a regular one, and not part of the task flow that shares a transaction with caller, the framework will generate a new DBTransaction object and attach it to the application module. After initializing the data control, the framework adds it to the data control frame. The data control frame holds all the data control used in the current view port. Remember that a, view port maps to a page or a page fragment. Execute an appropriate view object instance, which is bound to the iterator. At the end of the render response phase, the framework will output the DOM content to the client. Before finishing the request, the ADF binding filter will call endRequest() on each data control instance participating in the request. Data controls use this callback to clean up the resources and check in the application modules back to the pool. Transaction management in Fusion web applications A transaction for a business application may be thought of as a unit of work resulting in changes to the application state. Oracle ADF simplifies the transaction handling by abstracting the micro-level management of transactions from the developers. This section discusses the internal aspects of the transaction management in Fusion web applications. In this discussion, we will consider only ADF Business Component-based applications. What happens when the task flow commits a transaction Oracle ADF allows you to define transactional boundaries, using task flows. Each task flow can be configured to define the transactional unit of work. Let us see what happens when a task flow return activity tries to commit the currently opened transaction. 
The following is the algorithm used by the framework when the task flow commits the transaction: When you action a task flow return activity, a check is carried over to see whether the task flow is configured for committing the current transaction or not. And if found true, runtime will identify the data control frame associated with the current view port and call the commit operation on it. The data control frame delegates the "commit" call to the transaction handler instance for further processing. The transaction handler iterates over all data controls added to the data control frame and invokes commitTransaction on each root data control. It is the transaction handler that engages all data controls added to the data control frame in the transaction commit cycle. Data control delegates the commit call to the transaction object that is attached to the application module. Note that if you have a child task flow participating in the transaction started by the caller or application modules nested under a root application module, they all share the same transaction object. The commit call on a transaction object will commit changes done by all application modules attached to it. The following diagram illustrates how transaction objects that are attached to the application modules are getting engaged when a client calls commit on the data control frame: Programmatically managing a transaction in a task flow If the declarative solution provided by the task flow for managing the transaction is not flexible enough to meet your use case, you can handle the transaction programmatically by calling the beginTransaction(), commit(), and rollback() methods exposed by oracle.adf.model.DataControlFrame. The data control frame acts as a bucket for holding data controls used in a binding container (page definition file). A data control frame may also hold child data control frames, if the page or page fragment has regions sharing the transaction with the parent. When you call beginTransaction(), commit(), or rollback() on a data control frame, all the data controls added to the data control frame will participate in the appropriate transaction cycle. In plain words, the data control frame provides a mechanism to manage the transaction seamlessly, freeing you from the pain of managing transactions separately for each data control present in the page definition. Note that you can use the DataControlFrame APIs for managing a transaction only in the context of a bounded task flow with an appropriate transaction setting (in the context of a controller transaction). The following example illustrates the APIs for programmatically managing a transaction, using the data control frame: //In managed bean class public void commit(){ //Get the binding context BindingContext bindingContext = BindingContext. getCurrent(); //Gets the name of current(root) DataControlFrame String currentFrame = bindingContext.getCurrentDataControlFrame(); //Finds DataControlFrame instance DataControlFrame dcFrame = bindingContext.findDataControlFrame(currentFrame); try { // Commit the trensaction dcFrame.commit(); //Open a new transaction allowing user to continue //editing data dcFrame.beginTransaction(null); } catch (Exception e) { //Report error through binding container ((DCBindingContainer)bindingContext. getCurrentBindingsEntry()). 
reportException(e);
    }
}

Programmatically managing a transaction in the business components

The preceding solution of calling the commit method on the data control frame is ideal to be used in the client tier in the context of the bounded task flows. What if you need to programmatically commit the transaction from the business service layer, which does not have any binding context? To commit or roll back the transactions in the business service layer logic where there is no binding context, you can call commit() or rollback() on the oracle.jbo.Transaction object associated with the root application modules. The following example shows a method defined in an application module, which invokes commit on the Transaction object attached to the root application module:

//In application module implementation class
/**
 * This method calls commit on the transaction object
 */
public void commit(){
    this.getRootApplicationModule().getTransaction().commit();
}

Sharing a transaction between application modules at runtime

An application module, nested under a root application module, shares the same transaction context with the root. This solution will fit well if you know that the application module needs to be nested during the development phase of the application. What if an application module needs to invoke the business methods from various application modules whose names are known only at runtime, and all the method calls require to happen in the same transaction? You can use the following API in such scenarios to create the required application module at runtime:

DBTransaction::createApplicationModule(defName);

The following method defined in a root application module creates a nested application module on the fly. Both calling and called application modules share the same transaction context.

//In application module implementation class
/**
 * Caller passes the AM definition name of the application
 * module that requires to participate in the existing
 * transaction. This method creates a new AM if no instance is
 * found for the supplied amName and invokes the required service
 * on it.
 * @param amName
 * @param defName
 */
public void nestAMIfRequiredAndInvokeMethod(String amName, String defName) {
    //TxnAppModule is a generic interface implemented by all
    //transactional AMs used in this example
    TxnAppModule txnAM = null;
    boolean generatedLocally = false;
    try {
        //Check whether the TxnAppModuleImpl is already nested
        txnAM = (TxnAppModule)getDBTransaction().getRootApplicationModule().findApplicationModule(amName);
        //Create a new nested instance of the TxnAppModuleImpl,
        //if not nested already
        if(txnAM == null) {
            txnAM = (TxnAppModule)this.getDBTransaction().createApplicationModule(defName);
            generatedLocally = true;
        }
        //Invoke business methods
        if (txnAM != null) {
            txnAM.updateEmployee();
        }
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        //Remove locally created AM once use is over
        if (generatedLocally && txnAM != null) {
            txnAM.remove();
        }
    }
}

Overview of Microsoft Dynamics CRM 2011

Packt
03 Jan 2013
11 min read
(For more resources related to this topic, see here.) Architecture of Microsoft Dynamics CRM 2011 Microsoft Dynamics CRM 2011 offers a rich set of marketing, sales, and service features for managing customers. It also offers a rich set of extensibility features for configuring and customizing the standard features, or creating custom features, to meet your requirements. Multi-tier architecture Previous generations of business software often had a two-tier, client-server architecture with most of the application logic contained in a rich client that had to be installed on the user's computer while the database was installed on a server. Microsoft Dynamics CRM is a web-based application that uses a multi-tier client-server architecture. This architecture provides greater scalability, flexibility, and extensibility than a two-tier, client-server architecture. In the multi-tier architecture, the CRM application tier separates the presentation tier from the data tier. The computing resources in the application and data tiers can be increased or decreased depending upon the performance requirements and workload. Presentation tier Microsoft Dynamics CRM 2011 provides user access through the CRM web client, Microsoft Dynamics CRM 2011 for Outlook or Microsoft Dynamics CRM Mobile Express. The presentation tier can be customized using: The user interface customization features native to Microsoft Dynamics CRM 2011 such as the ability to customize forms and views. Client-side integration with on-premise or cloud-based systems. Using web resources, such as JavaScript and Silverlight, enables rich userinterface customization, data validation, and other client-side features. For example, a Bing Maps integration where a customer address is passed to the Bing Maps service and a map displaying the customer's location is displayed in CRM. Custom reports by using SQL Server Reporting Services. Custom charts and dashboards by using the CRM's customization features. Application tier The Microsoft Dynamics CRM server runs the application tier (also known as the CRM platform tier) components. The application tier can be customized by using: Server-side integration by using the CRM Web services to integrate with on-premise or cloud-based systems. For example, when an enquiry form is submitted from a website, a lead is created in the CRM system. Workflows and dialogs can be configured, using the CRM's customization features. This enables you to automate business processes in the application tier. Processes are triggered by events when specified actions are performed or conditions are met. For example, when a sales opportunity's stage is updated to Negotiating, the probability is updated to 90 percent and a notification e-mail is sent to the commercial manager. Plugins and custom workflow activities can be developed as .NET assemblies in Visual Studio to provide event-based customization. For example, when an account's address changes in CRM, the addresses of all the contacts associated with the account are updated. Custom .NET development is outside the scope of the MB2-866 exam. Security can be customized by creating business units, security roles, field-security profiles, and teams. Every application that interacts with CRM does so through the web services in the CRM application tier. Data tier Microsoft SQL Server provides the data tier components of a Microsoft Dynamics CRM deployment. 
Supported and unsupported customization

Microsoft Dynamics CRM 2011 can be customized by using all the configuration and customization features available in the web client (and described in this guide), and can be extended by using all the methods described in the Microsoft Dynamics CRM software development kit (SDK). Customizations made by using other methods are unsupported. Unsupported customizations might work initially, but they might not work after updates are applied or the application is upgraded, and they are not supported by Microsoft. This section describes the most common supported and unsupported customization methods likely to be examined in the MB2-866 exam. For a complete list of supported and unsupported customizations, please refer to the CRM SDK available at http://msdn.microsoft.com/en-us/library/gg328350.aspx.

Supported customizations

In addition to the configuration and customization features available in the web client, the following customizations are also supported (using the CRM SDK):

- Use of the web services, including DiscoveryService, OrganizationService, the Organization Data Service, the SOAP endpoint for web services, and DeploymentService.
- Form scripting using the documented objects and methods available through the Xrm.Page.data and Xrm.Page.ui objects.
- Ribbon customization using RibbonDiffXML to add, remove, or hide ribbon elements.
- Customization of solution files by exporting the solution, extracting the Customizations.xml file, and modifying it, as long as the file still conforms to the CustomizationsSolution.xsd schema. Ribbon customization, SiteMap customization, form and dashboard customization using FormXML, and saved query customization all require this technique.
- Plugins that handle custom business logic and are developed using the mechanism described in the CRM SDK; these are supported and upgradeable. Adding plugins and custom workflow activities to the %installdir%\server\bin folder is not supported for Microsoft Dynamics CRM Online.
- Custom workflow activities (assemblies) that are developed by using the mechanism described in the CRM SDK and called from workflow processes, as well as the ability to edit XAML workflows; these are supported and upgradeable.
- Adding custom web pages to the <serverroot>\ISV\<ISV name> folder, which is supported but deprecated. This means the method will work for earlier versions of Microsoft Dynamics CRM that have been upgraded, but it is not supported for new deployments.

Unsupported customizations

The following types of customization are not supported:

- Modifications or additions to the files in the www root directories of Microsoft Dynamics CRM.
- Modifications to the Microsoft Dynamics CRM website, including the filesystem access control lists.
- Use of client certificates.
- Modifications to the physical schema of the CRM databases (such as adding or modifying tables, stored procedures, or views), other than adding or updating database indexes.
- Creating or updating records directly in the database by using T-SQL or any other method that is not described in the CRM SDK.
- Editing the Customizations.xml file within a solution to edit any solution components other than ribbons, forms, SiteMap, or saved queries.

Deployment options

There are three deployment options for Microsoft Dynamics CRM 2011:

- On-premise
- Partner-hosted
- Online

This section summarizes the differences between the deployment options that are relevant to customization and configuration.

On-premise deployment

In an on-premise deployment, the Microsoft customer deploys Microsoft Dynamics CRM in its own data center. An internet-facing deployment (IFD) configuration is optional and only necessary when users outside the customer's network need access to the CRM application.

Partner-hosted deployment

In a partner-hosted deployment, a Microsoft hosting partner deploys Microsoft Dynamics CRM in the partner's data center. Customer access to the CRM application is usually achieved by using an IFD configuration.

Online deployment

In an online deployment, the customer subscribes to the Microsoft Dynamics CRM Online service, which is hosted by Microsoft in its data centers.

Deployment differences

There are some important differences between the customization and configuration options available in an on-premise deployment and an online deployment, as described in the following table:

Customization and configuration option | On-premise | Online
Internet Lead Capture feature | Not available | Included
Scheduled reports feature | Included | Not available
Query language for custom reports | SQL or FetchXML | FetchXML only
Maximum number of custom entities | Unlimited | 300
Maximum number of workflow processes | Unlimited | 200
Custom workflow activities (assemblies) | Supported | Not supported
Custom database indexes | Supported | Not supported
Database backup | As required | Upon request
Database restore | As required | Not available

The customization and configuration options of a partner-hosted deployment can vary widely, depending on the service provided by the partner, and are not discussed further here.

Using an implementation methodology

When implementing Microsoft Dynamics CRM 2011, the use of an implementation methodology is highly recommended. An implementation methodology ensures that a proven, repeatable process is followed so that nothing gets overlooked or omitted. The result is a higher-quality system that better matches the requirements of your organization. Without a proven methodology, the CRM system gets implemented in an improvised fashion, without a clear plan, specification, or design. This often leads to delays, missed requirements, poor user satisfaction, and higher implementation costs.

Microsoft Dynamics Sure Step

Microsoft Dynamics Sure Step is a popular implementation methodology released by Microsoft, based on the best practices used by Microsoft Consulting Services and several of Microsoft's partners. Sure Step provides a range of tools to help Microsoft partners envision, deploy, upgrade, and optimize the Microsoft Dynamics line of business solutions. Sure Step can be used for CRM 2011 and CRM Online projects, and tailored to various project types such as rapid, standard, enterprise, agile, and upgrade projects.
Sure Step is available to Microsoft partners through the PartnerSource website (http://go.microsoft.com/fwlink/?linkid=88066).

Customization security roles

There are two security roles that are often assigned to users who are responsible for customizing CRM:

- System Administrator: Users with the System Administrator security role have full access to all the customization features, and some solution components, such as plugins and web resources, can be modified, imported, or exported only by a system administrator. Users with the System Administrator security role always have all privileges for all system and custom entities. The System Administrator security role cannot be modified, and at least one user must have the System Administrator security role assigned to him/her.
- System Customizer: Users with the System Customizer security role can customize most of the CRM solution components, with a few restrictions such as plugins and web resources. For this reason, it is more common for developers to be assigned the System Administrator security role within a CRM development environment. The System Customizer security role is useful in smaller deployments when it is assigned to a technical super-user who needs to make simple customization changes to the system. For example, the System Customizer role could be assigned to a marketing manager who needs to add fields, modify views, and create system charts and dashboards.

Summary

Microsoft Dynamics CRM 2011 has a multi-tier architecture that provides greater scalability, flexibility, and extensibility than a two-tier, client-server architecture. The presentation tier displays the user interface through the CRM web client, CRM for Outlook, or the CRM mobile clients, and can be customized by using client-side integration and web resources. The application tier runs on the CRM server and includes the web servers, business logic, security, and data access components. It can be customized by using server-side integration, workflows and dialogs, and plugins and custom workflow activities. The data tier stores the customer data and metadata. Customization is supported through metadata changes, but direct database access is not supported. Every application that interacts with CRM does so through the web services in the CRM platform. Alternatively, applications can use SQL-based queries to retrieve CRM data through the filtered views.

There is a range of supported and unsupported configuration and customization methods available for Microsoft Dynamics CRM 2011. The unsupported methods may work initially, but might not work after an update or upgrade and will not be supported by Microsoft. Microsoft Dynamics CRM offers on-premise, partner-hosted, and online deployment options, with a few customization and configuration differences between these options. Using an implementation methodology, such as Microsoft Dynamics Sure Step, ensures that a proven, repeatable process is followed so that nothing gets overlooked or omitted. A System Administrator or System Customizer security role is required to customize Microsoft Dynamics CRM 2011; the System Customizer security role has some limitations, such as creating plugins and web resources.

Resources for Article:

Further resources on this subject:
- Working with Dashboards in Dynamics CRM [Article]
- Integrating BizTalk Server and Microsoft Dynamics CRM [Article]
- Communicating from Dynamics CRM to BizTalk Server [Article]


Visual Basic for Applications (VBA)

Packt
31 Dec 2012
5 min read
(For more resources related to this topic, see here.)

What kind of things can you do with it?

Once you have pushed the Office application to its limits and can no longer get your job done with the built-in tools, VBA helps you avoid the frustrations you would otherwise encounter. VBA enables you to build custom functions, also called User-defined Functions (UDFs), and to automate tedious tasks such as defining and cleaning formats. You can manipulate system objects such as files and folders, work with Windows through its Application Programming Interface (API), and drive other applications by referencing their object libraries or Dynamic-link Libraries (DLLs). Of course, you can also use VBA to manipulate the Office application that hosts your code. For example, you can customize the user interface in order to facilitate the work you and others do. An important thing to remember, though, is that the VBA code you create runs within the host application. In our case, the code will run within Excel. Such VBA programs are not standalone; that is, they cannot run by themselves, and they need the host application in order to operate correctly.

How can you use this technology within your existing projects?

You can use VBA in two different ways. The first, and most common, way is to code directly into your VBA project. For example, you may have an Excel workbook with some custom functions that calculate commissions. You can add modules to this workbook and code the UDFs in those modules. Another option is to save the workbook as an Addin. An Addin is a specialized document that hosts the code and makes it available to other workbooks. This is very useful when you need to share the solutions you develop with other workbooks and co-workers.
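As a quick illustration of the kind of UDF mentioned above, the following sketch calculates a commission from a sales amount; the function name, rates, and threshold are made up for this example. Placed in a standard module, it can be called from a worksheet as =Commission(A1):

' Returns a commission: 5% up to 10,000 and 8% on anything above that
Public Function Commission(ByVal salesAmount As Double) As Double
    Const ThresholdAmount As Double = 10000
    If salesAmount <= ThresholdAmount Then
        Commission = salesAmount * 0.05
    Else
        Commission = ThresholdAmount * 0.05 + (salesAmount - ThresholdAmount) * 0.08
    End If
End Function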
Recording a macro, adding modules, browsing objects, and variables

Before you get your hands "dirty" with coding in VBA, there are a few things you need to know, and they will help when it comes to coding. In this section, you will learn how to:

- Record a macro
- Add modules
- Browse objects
- Get some background on declaring variables

We will start with macro recording, a feature which is available in most Office applications.

Recording a macro

A macro, in Office applications, is a synonym for VBA code. In Excel, we can record almost any action we perform (such as mouse clicks and typing), which in turn is registered as VBA code. This can come in handy when we need to discover the properties and methods related to an object. Let us now have a look at the ways you can record a macro in Excel. There are two options:

- Recording a macro from the status bar
- Recording from the Developer tab

Option 1 — Recording a macro from the status bar

From the status bar, click on the Record Macro button. If the button is not visible, right-click on the status bar and, from the pop-up menu, choose the Macro Recording option, as shown in the following screenshot:

Option 2 — Recording from the Developer tab

Now that you know how to record a macro from the status bar, let us check another option. This option requires that you activate the Developer tab. In order to activate it, assuming it is not active yet, follow these steps:

1. Go to File | Excel Options | Customize Ribbon.
2. Under Main Tabs, check the Developer checkbox, as shown in the following screenshot:

Next, activate the Developer tab and click on Record Macro, as shown in the following screenshot:

Once the macro recording process starts, you will be prompted to enter some basic information about the macro, such as the macro name, the shortcut key, the location where the macro should be stored, and its description. The following screenshot shows these options filled out:

Once the macro has been recorded, you can access its container module by pressing the Alt + F11 keys simultaneously. Alternatively, you can click on the Visual Basic button in the Developer tab; this button is to the left of the Record Macro button introduced previously. This will open the Visual Basic Editor (VBE), where all the VBA code is kept. The VBE is the tool we use to create, modify, and maintain any code we write or record. The following screenshot shows the VBE window with the project explorer, properties, and code windows visible:

If, upon opening the VBE, the VBA project explorer window is not visible, go to View | Project Explorer, or alternatively press the Ctrl + R keys simultaneously.

If, on the other hand, the VBA project explorer is visible but the code window is not, you can choose which code window to show. Suppose you are interested in the content of the module you've recorded; from the project explorer window, click on View | Code, or alternatively press F7, to show the module window.

Summary

In this article, you have learned some VBA basics, including macro recording, adding modules, and browsing objects.

Resources for Article:

Further resources on this subject:
- Understanding ShapeSheet™ in Microsoft Visio 2010 [Article]
- Excel 2010 Financials: Using Graphs for Analysis [Article]
- Excel 2010 Financials: Adding Animations to Excel Graphs [Article]

Thread Executors

Packt
31 Dec 2012
7 min read
(For more resources related to this topic, see here.)

Creating a thread executor

The first step to work with the Executor framework is to create an object of the ThreadPoolExecutor class. You can use the four constructors provided by that class, or use a factory class named Executors that creates ThreadPoolExecutor for you. You can compare both mechanisms and select the best one depending on the problem. Once you have an executor, you can send Runnable or Callable objects to be executed. In this recipe, you will learn how to use these two operations by implementing an example that simulates a web server processing requests from various clients.

Getting ready

The example of this recipe has been implemented using the Eclipse IDE. If you use Eclipse or another IDE such as NetBeans, open it and create a new Java project.

How to do it...

Follow these steps to implement the example:

1. First, you have to implement the tasks that will be executed by the server. Create a class named Task that implements the Runnable interface.

public class Task implements Runnable {

2. Declare a Date attribute named initDate to store the creation date of the task and a String attribute named name to store the name of the task.

private Date initDate;
private String name;

3. Implement the constructor of the class that initializes both attributes.

public Task(String name){
  initDate=new Date();
  this.name=name;
}

4. Implement the run() method.

@Override
public void run() {

5. First, write to the console the initDate attribute and the actual date, which is the starting date of the task.

System.out.printf("%s: Task %s: Created on: %s\n",Thread.currentThread().getName(),name,initDate);
System.out.printf("%s: Task %s: Started on: %s\n",Thread.currentThread().getName(),name,new Date());

6. Then, put the task to sleep for a random period of time.

try {
  Long duration=(long)(Math.random()*10);
  System.out.printf("%s: Task %s: Doing a task during %d seconds\n",Thread.currentThread().getName(),name,duration);
  TimeUnit.SECONDS.sleep(duration);
} catch (InterruptedException e) {
  e.printStackTrace();
}

7. Finally, write to the console the completion date of the task.

System.out.printf("%s: Task %s: Finished on: %s\n",Thread.currentThread().getName(),name,new Date());

8. Now, implement the Server class that will execute every task it receives using an executor. Create a class named Server.

public class Server {

9. Declare a ThreadPoolExecutor attribute named executor.

private ThreadPoolExecutor executor;

10. Implement the constructor of the class that initializes the ThreadPoolExecutor object using the Executors class.

public Server(){
  executor=(ThreadPoolExecutor)Executors.newCachedThreadPool();
}

11. Implement the executeTask() method. It receives a Task object as a parameter and sends it to the executor. First, write a message to the console indicating that a new task has arrived.

public void executeTask(Task task){
  System.out.printf("Server: A new task has arrived\n");

12. Then, call the execute() method of the executor to send it the task.

executor.execute(task);

13. Finally, write some executor data to the console to see its status.

System.out.printf("Server: Pool Size: %d\n",executor.getPoolSize());
System.out.printf("Server: Active Count: %d\n",executor.getActiveCount());
System.out.printf("Server: Completed Tasks: %d\n",executor.getCompletedTaskCount());

14. Implement the endServer() method. In this method, call the shutdown() method of the executor to finish its execution.
public void endServer() {
  executor.shutdown();
}

15. Finally, implement the main class of the example by creating a class named Main and implementing the main() method.

public class Main {
  public static void main(String[] args) {
    Server server=new Server();
    for (int i=0; i<100; i++){
      Task task=new Task("Task "+i);
      server.executeTask(task);
    }
    server.endServer();
  }
}

How it works...

The key to this example is the Server class. This class creates and uses ThreadPoolExecutor to execute tasks.

The first important point is the creation of ThreadPoolExecutor in the constructor of the Server class. The ThreadPoolExecutor class has four different constructors but, due to their complexity, the Java concurrency API provides the Executors class to construct executors and other related objects. Although we can create ThreadPoolExecutor directly using one of its constructors, it's recommended to use the Executors class.

In this case, you have created a cached thread pool using the newCachedThreadPool() method. This method returns an ExecutorService object, so it's been cast to ThreadPoolExecutor to have access to all its methods. A cached thread pool creates new threads if needed to execute new tasks, and reuses existing threads that have finished the execution of the task they were running and are now available. The reuse of threads has the advantage that it reduces the time taken for thread creation. The cached thread pool has, however, the disadvantage that idle threads lie around waiting for new tasks, so if you send too many tasks to this executor, you can overload the system. Use the executor created by the newCachedThreadPool() method only when you have a reasonable number of threads or when they have a short duration.

Once you have created the executor, you can send tasks of the Runnable or Callable type for execution using the execute() method. In this case, you send objects of the Task class, which implements the Runnable interface. You have also printed some log messages with information about the executor. Specifically, you have used the following methods:

- getPoolSize(): This method returns the actual number of threads in the pool of the executor
- getActiveCount(): This method returns the number of threads that are executing tasks in the executor
- getCompletedTaskCount(): This method returns the number of tasks completed by the executor

One critical aspect of the ThreadPoolExecutor class, and of executors in general, is that you have to end it explicitly. If you don't do this, the executor will continue its execution and the program won't end. If the executor doesn't have tasks to execute, it continues waiting for new tasks and it doesn't end its execution. A Java application won't end until all its non-daemon threads finish their execution, so, if you don't terminate the executor, your application will never end. To indicate to the executor that you want to finish it, you can use the shutdown() method of the ThreadPoolExecutor class. When the executor finishes the execution of all pending tasks, it finishes its execution. After you call the shutdown() method, if you try to send another task to the executor, it will be rejected and the executor will throw a RejectedExecutionException exception.

The following screenshot shows part of one execution of this example: when the last task arrives at the server, the executor has a pool size of 100 threads and 97 active threads.
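If the unbounded growth of the cached pool is a concern for your workload, a fixed-size pool is a common alternative. The following sketch is not part of the recipe; it simply shows how the Server constructor could be adapted, with the pool size of 10 being an arbitrary value for illustration:

// Alternative constructor: a bounded pool that queues excess tasks instead of creating more threads
public Server(){
  executor=(ThreadPoolExecutor)Executors.newFixedThreadPool(10);
}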
There's more...

The ThreadPoolExecutor class provides a lot of methods to obtain information about its status. In the example, we used the getPoolSize(), getActiveCount(), and getCompletedTaskCount() methods to obtain information about the size of the pool, the number of active threads, and the number of completed tasks of the executor. You can also use the getLargestPoolSize() method, which returns the maximum number of threads that has been in the pool at a time.

The ThreadPoolExecutor class also provides other methods related to the finalization of the executor. These methods are:

- shutdownNow(): This method shuts down the executor immediately. It doesn't execute the pending tasks; it returns a list with all these pending tasks. The tasks that are running when you call this method continue with their execution, but the method doesn't wait for their finalization.
- isTerminated(): This method returns true if you have called the shutdown() or shutdownNow() methods and the executor has finished the process of shutting down.
- isShutdown(): This method returns true if you have called the shutdown() method of the executor.
- awaitTermination(long timeout, TimeUnit unit): This method blocks the calling thread until the tasks of the executor have ended or the timeout occurs. The TimeUnit class is an enumeration with the following constants: DAYS, HOURS, MICROSECONDS, MILLISECONDS, MINUTES, NANOSECONDS, and SECONDS. If you want to wait for the completion of the tasks, regardless of their duration, use a big timeout, for example, DAYS.
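Putting shutdown() and awaitTermination() together, an orderly shutdown of the example's executor might look like the following sketch. The one-day timeout is an arbitrary "wait as long as it takes" value, and falling back to shutdownNow() on interruption is one possible policy, not part of the original recipe:

// Ask the executor to stop accepting work, then wait for the submitted tasks to finish
executor.shutdown();
try {
  if (!executor.awaitTermination(1, TimeUnit.DAYS)) {
    System.out.printf("Server: Timeout elapsed before termination\n");
  }
} catch (InterruptedException e) {
  executor.shutdownNow();                  // cancel the pending tasks
  Thread.currentThread().interrupt();      // preserve the interrupt status
}
System.out.printf("Server: Terminated: %s\n", executor.isTerminated());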


Prepare and Build

Packt
10 Dec 2012
13 min read
(For more resources related to this topic, see here.)

Let's take a look at the history and background of APEX.

History and background

APEX is a very powerful development tool, used to create web-based, database-centric applications. The tool itself consists of a schema in the database with a lot of tables, views, and PL/SQL code. It's available for every edition of the database. The techniques used with this tool are PL/SQL, HTML, CSS, and JavaScript.

Before APEX there was WebDB, which was based on the same techniques. WebDB became part of Oracle Portal and disappeared in silence. The difference between APEX and WebDB is that WebDB generates packages that generate the HTML pages, while APEX generates the HTML pages at runtime from the repository. Despite this approach, APEX is amazingly fast. Because the database is doing all the hard work, the architecture is fairly simple. We only have to add a web server. We can choose one of the following web servers:

- Oracle HTTP Server (OHS)
- Embedded PL/SQL Gateway (EPG)
- APEX Listener

APEX became available to the public in 2004, when it was part of version 10g of the database. At that time it was called HTMLDB and the first version was 1.5. Before HTMLDB, it was called Oracle Flows, Oracle Platform, and Project Marvel. Throughout the years many versions have come out, and at the time of writing the current version is 4.1.1. These many versions prove that Oracle has continuously invested in the development and support of APEX. This is important for the developers and companies who have to make a decision about which techniques to use in the future. According to Oracle, as written in its statement of direction, new versions of APEX will be released at least annually. The following screenshot shows the home screen of the current version of APEX:

Home screen of APEX

For the last few years, there has been increasing interest in the use of APEX from developers. The popularity came mainly from developers who were comfortable with PL/SQL and wanted an easy way into the world of web-based applications. Oracle gave ADF a higher priority, because APEX was a no-cost option of the database, while ADF (and all the related techniques and frameworks from Java) could sell additional licenses. Especially now that Oracle has pointed out APEX as one of the important tools for building applications in its Oracle Database Cloud Service, this interest will only grow. APEX shared a lot of the characteristics of cloud computing, even before cloud computing became popular. These characteristics include:

- Elasticity
- Roles and authorization
- Browser-based development and runtime
- RESTful web services (REST stands for Representational State Transfer)
- Multi-tenant
- Simple and fast to join

APEX has outstanding community support, witnessed by the number of posts and threads on the Oracle forum. This forum is the most popular after the database and PL/SQL forums. Oracle itself has some websites based on APEX, among others the following:

- http://asktom.oracle.com
- http://shop.oracle.com
- http://cloud.oracle.com

Oracle uses quite a few internal APEX applications. Oracle also provides a hosted version of APEX at http://apex.oracle.com. Users can sign up for free for a workspace to evaluate and experiment with the latest version of APEX. This environment is for evaluations and demonstrations only; there are no guarantees! Apex.oracle.com is a very popular service: more than 16,000 workspaces are active.
To give an idea of the performance of APEX: the server used for this service used to be a Dell PowerEdge 1950 with two Dual Core Xeon processors and 16 GB of memory.

Installing APEX

In this section, we will discuss some additional considerations to take care of while installing APEX. The best source for the installation process is the Installation Guide of APEX.

Runtime or full development environment

On a production database, the runtime environment of APEX should be installed. This installation lacks the Application Builder and the SQL Workshop. Users can run applications, but the applications cannot be modified. The runtime environment of APEX can be administered using SQL*Plus and SQL Developer. The web interface options for importing an application, which are only available in a full development environment, can be performed manually with the APEX_INSTANCE_ADMIN API. Using a runtime environment for production is recommended for security purposes, so that we can be certain that installed applications cannot be modified by anyone. In a development environment, the full development environment can be installed, with all the features available to the developers.

Build status

Besides the environment of APEX itself, applications can also be installed in a similar way. When importing or exporting an application, the Run Application Only or Run and Build Application options can be selected. Changing an application to Run Application Only can be done in the Application Builder by choosing Edit Application Properties. Changing the Build Status to Run and Build Application can only be done as the admin user of the workspace internal: in the APEX Administration Services, choose Manage Workspaces and then select Manage Applications | Build Status.

Another setting related to the Runtime Only option can be used in the APEX Administration Services at instance level. Select Manage Instance and then select Security. Setting the property Disable Workspace Login to Yes acts as setting a Runtime Only environment, while still allowing instance administrators to log in to the APEX Administration Services.

Tablespaces

Following the install guide for the full development environment, at a certain moment we have to run the following command on the command line, when logged in as SYS with the SYSDBA role:

@apexins tablespace_apex tablespace_files tablespace_temp images

The command is explained as follows:

- tablespace_apex is the name of the tablespace that contains all the objects for the APEX application user.
- tablespace_files is the name of the tablespace that contains all the objects for the APEX files user.
- tablespace_temp is the name of the temporary tablespace of the database.
- images will be the virtual directory for APEX images. Oracle recommends using /i/ to support future APEX upgrades.

For the runtime environment, the command is as follows:

@apxrtins tablespace_apex tablespace_files tablespace_temp images
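As a concrete illustration (the tablespace names and file paths are made up for this sketch; adjust them to your own environment), a full development installation with dedicated tablespaces could be run like this from SQL*Plus:

-- Create dedicated tablespaces for APEX instead of reusing SYSAUX
CREATE TABLESPACE apex DATAFILE '/u01/oradata/orcl/apex01.dbf' SIZE 200M AUTOEXTEND ON NEXT 10M;
CREATE TABLESPACE apex_files DATAFILE '/u01/oradata/orcl/apex_files01.dbf' SIZE 100M AUTOEXTEND ON NEXT 10M;

-- Run the full development installation, using /i/ as the images virtual directory
@apexins apex apex_files temp /i/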
In the documentation, SYSAUX is given as an example for both tablespace_apex and tablespace_files. There are several reasons for not using SYSAUX for these tablespaces, but to use our own instead:

- SYSAUX is an important tablespace of the database itself
- We have more control over sizing and growth
- It is easier for a DBA to manage tablespace placement
- There is less contention in the SYSAUX tablespace
- It's easier to clean up older versions of APEX
- And last but not least, it's only an example

Converting a runtime environment into a full development environment and vice versa

It's always possible to switch from a runtime to a full development environment and vice versa. If you want to convert a runtime into a full development environment, log in as SYS with the SYSDBA role and on the command line type @apxdvins.sql. For converting a full development into a runtime environment, type @apxdevrm, but export websheet applications first.

Another way to restrict user access can be accomplished by logging in to the APEX Administration Services, where we can (among other things) manage the APEX instance settings and all the workspaces. We can do that in two ways:

- http://server:port/apex/apex_admin: Log in with the administrator credentials
- http://server:port/apex/: Log in to the workspace internal, with the administrator credentials

After logging in, perform the following steps:

1. Go to Manage Instance.
2. Select Security.
3. Select the appropriate settings for Disable Administrator Login and Disable Workspace Login.

These settings can also be set manually with the APEX_INSTANCE_ADMIN API.

Choosing a web server

When using a web-based development and runtime environment, we have to use a web server.

Architecture of APEX

The choice of a web server and the underlying architecture of the system have a direct impact on performance and scalability. Oracle provides us with three choices:

- Oracle HTTP Server (OHS)
- Embedded PL/SQL Gateway (EPG)
- APEX Listener

Simply put, the web server maps the URL in a web browser to a procedure in the database. Everything the procedure prints with the sys.htp package is sent to the browser of the user. This is the concept used by tools such as WebDB and APEX.

OHS

The OHS is the oldest of the three. It's based on the Apache HTTP Server and uses a custom Apache module named mod_plsql:

Oracle HTTP Server

In release 10g of the database, OHS was installed with the database on the same machine. From release 11g onward, this is no longer the case. If you want to install the OHS, you have to install the web tier part of WebLogic. If you install it on the same machine as the database, it's free of extra licence costs. This installation takes up a lot of space and is rather complex compared with the other two. On the other hand, it's very flexible and it has a proven track record. Configuration is done with text files.

EPG

The EPG is part of XML DB and lives inside the database. Because everything is in the database, we have to use the dbms_xdb and dbms_epg PL/SQL packages to configure the EPG. Another implication is that all images and other files are stored inside the database, where they can be accessed with PL/SQL or FTP, for example:

Embedded PL/SQL gateway

The architecture is very simple. It's not possible to install the EPG on a different machine than the database. From a security point of view, this is not the recommended architecture for real-life Internet applications, and in most cases the EPG is used in development, test, or other internal environments with few users.
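Because the EPG is served by the XML DB HTTP listener, a common first step when working with it is checking the HTTP port. The snippet below is only a sketch of that step (run as a privileged user such as SYS), not the full EPG configuration:

-- Check which HTTP port XML DB is currently listening on (0 means disabled)
SELECT dbms_xdb.gethttpport FROM dual;

-- Enable or change the port used by the Embedded PL/SQL Gateway
EXEC dbms_xdb.sethttpport(8080);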
APEX Listener

APEX Listener is the newest of the three; it's still in development, and with every new release more features are added to it. In the latest version, RESTful APIs can be created by configuring resource templates. APEX Listener is a Java application with a very small footprint. It can be installed in a standalone mode, which is ideal for development and testing purposes. For production environments, the APEX Listener can be deployed by using a J2EE-compliant application server such as GlassFish, WebLogic, or Oracle Containers for J2EE:

APEX Listener

Configuration of the APEX Listener is done in a browser. With some extra configuration, uploading of Excel files into APEX collections can be achieved. For future releases, other functionality, such as OAuth 2.0 and ICAP virus scanner integration, has been announced.

Configuration options of the APEX Listener

Like OHS, an architectural choice can be made if we want to install APEX Listener on the same machine as the database. For large public applications, it's better to use a separate web server. Many documents and articles have been written about choosing the right web server. If you read between the lines, you'll see that Oracle more or less recommends the use of APEX Listener. The functionality, enhanced security, file caching, flexibility of deployment possibilities, and feature announcements make it the best choice.

Creating a second administrator

When installing APEX, by default the workspace Internal with the administrator user Admin is created. Some users know more than the average end user, and developers have more knowledge than the average user. Imagine that such users try to log in to either the APEX Administration Services or the normal login page with the workspace Internal and administrator Admin, and consequently use the wrong password. As a consequence, the Admin account would be locked after a number of login attempts. This is a very annoying situation, especially when it happens often. Big companies and APEX hosting companies with many workspaces and a lot of anonymous users or developers may suffer from this. Fortunately there is an easy solution: creating a second administrator account.

Login attempt in workspace Internal as Admin

If the account is already locked, we have to unlock it first. This can easily be done by running the apxchpwd.sql script, which can be found in the main apex directory of the unzipped installation file of APEX:

1. Start SQL*Plus and connect as SYS with the SYSDBA role.
2. Run the script by entering @apxchpwd.sql.
3. Follow the instructions and enter a new password.

Now we are ready to create a second administrator account. This can be done in two ways: using the web interface or the command line.

APEX web interface

Follow these steps to create a new administrator, using the browser. First, we need to log in to the APEX Administration Services at http://server:port/apex/. Log in to the workspace Internal, with the administrator credentials. After logging in, perform the following steps:

1. Go to Manage Workspaces.
2. Select Existing Workspaces. You can also select the edit icon of the workspace Internal to inspect the settings. You cannot change them. Select Cancel to return to the previous screen.
3. Select the workspace Internal by clicking on the name.
4. Select Manage Users. Here you can see the user Admin. You can also select the user Admin to change the password. Other settings cannot be changed. Select Cancel or Apply Changes to return to the previous screen.
5. Select Create User.
6. Make sure that Internal is selected in the Workspace field and APEX_xxxxxx is selected in Default Schema, and that the new user is an administrator. xxxxxx has to match your APEX schema version in the database, for instance, APEX_040100.
7. Click on Create to finish.

Settings for the new administrator

Command line

When we still have access, we can use the web interface of APEX. If not, we can use the command line:

1. Start SQL*Plus and connect as SYS with the SYSDBA role.
2. Unlock the APEX_xxxxxx account by issuing the following command:

alter user APEX_xxxxxx account unlock;

3. Connect to the APEX_xxxxxx account. If you don't remember your password, you can just reset it, without impacting the APEX instance.
4. Execute the following (use your own username, e-mail, and password):

BEGIN
  wwv_flow_api.set_security_group_id (p_security_group_id=>10);
  wwv_flow_fnd_user_api.create_fnd_user(
    p_user_name => 'second_admin',
    p_email_address => '[email protected]',
    p_web_password => 'second_admin_password');
END;
/
COMMIT
/

5. The new administrator is created. Connect again as SYS with the SYSDBA role and lock the account again with the following command:

alter user APEX_xxxxxx account lock;

Now you can log in to the Internal workspace with your newly created account and you'll be asked to change your password.

Other accounts

When an administrator of a developer workspace loses his/her password or has a locked account, you can bring that account back to life by following these steps:

1. Log in to the APEX Administration Services.
2. Go to Manage Workspaces.
3. Select Existing Workspaces.
4. Select the workspace.
5. Select Manage Users.
6. Select the user, change the password, and unlock the user.

A developer or an APEX end user account can be managed by the administrator of the workspace from the workspace itself. Follow these steps to do so:

1. Log in to the workspace.
2. Go to Administration.
3. Select the user, change the password, and unlock the user.