How-To Tutorials - Programming


ASP.NET site performance: reducing long wait times

Packt
12 Oct 2010
8 min read
Measuring wait times

We can use a number of ways to find out which external requests are most frequent and how long the site has to wait for a response:

- Run the code in the debugger with breakpoints around each external request. This will give you a quick hint of which external request is the likely culprit. However, you wouldn't do this in a production environment, as it only gives you information for a few requests.
- Use the Trace class (in the namespace System.Diagnostics) to trace how long each request takes. This will give you a lot of detailed information. However, the overhead incurred by processing all the trace messages may be too high to use in a production environment, and you would have to somehow aggregate the trace data to find which requests are the most frequent and take the longest.
- Build performance counters into your code that record the frequency of each request and the average wait time. These counters are lightweight and, hence, can be used in a production environment. Also, you can readily access them via perfmon, along with the counters provided by ASP.NET, SQL Server, and so on that you have already come across.

The remainder of this section focuses on performance counters. They are also a convenient way to keep an eye on off-box requests on a day-to-day basis, rather than as a one-off exercise.

Windows offers you 28 types of performance counters to choose from. Some of these are esoteric, others extremely useful. For example, you can measure the rate per second at which a request is made, and the average time in milliseconds that the site waits for a response. Adding your own custom counters is easy, and you can see their real-time values in perfmon, along with those of the built-in counters.

The runtime overhead of counters is minimal. You have already come across some of the hundreds of counters published by ASP.NET, SQL Server, and Windows itself. Even if you add a lot of counters, CPU overhead will be well under one percent.

This section describes only three commonly used counters: simple number, rate per second, and time. A list of all types of counters with examples of their use is available at http://msdn.microsoft.com/en-us/library/system.diagnostics.performancecountertype.aspx?ppud=4.

To use the counters, you need to follow these three steps:

1. Create custom counters.
2. Update them in your code.
3. See their values in perfmon.

Creating custom counters

In this example, we'll put counters on a page that simply waits for one second to simulate waiting for an external resource. Windows allows you to group counters into categories. We'll create a new category, "Test Counters", for the new counters.

| Counter Name | Counter Type | Description |
| --- | --- | --- |
| Nbr Page Hits | NumberOfItems64 | 64-bit counter, counting the total number of hits on the page since the website started. |
| Hits/second | RateOfCountsPerSecond32 | Hits per second. |
| Average Wait | AverageTimer32 | Time taken by the resource. In spite of the name, it is used here to simply measure an interval, not an average. |
| Average Wait Base* | AverageBase | Utility counter required by Average Wait. |

*The text says there are three counters, but the table lists four. Why? The last counter, Average Wait Base, doesn't provide information on its own, but helps to compute the value of the Average Wait counter. Later on, we'll see how this works.
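As a side note not taken from the original text, the relationship between Average Wait and its base counter comes down to simple arithmetic that perfmon applies between two samples. The sketch below illustrates that calculation under that assumption; the method and parameter names are purely illustrative.

```csharp
using System.Diagnostics;

static class AverageTimerMath
{
    // Illustrative sketch (assumed, not from the original article) of how an
    // AverageTimer32 value is derived: the change in accumulated Stopwatch
    // ticks is converted to seconds and divided by the change in the base
    // counter, giving average seconds per operation between two samples.
    public static double SecondsPerOperation(
        long ticksSample0, long ticksSample1,
        long baseSample0, long baseSample1)
    {
        double elapsedSeconds = (ticksSample1 - ticksSample0) / (double)Stopwatch.Frequency;
        return elapsedSeconds / (baseSample1 - baseSample0);
    }
}
```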
There are two ways to create the "Test Counters" category and the counters themselves:

- Using Visual Studio: This is relatively quick, but if you want to apply the same counters to, for example, your development and production environments, you'll have to enter the counters separately in each environment.
- Programmatically: Because this involves writing code, it takes a bit longer upfront, but it makes it easier to apply the same counters to multiple environments and to place the counters under source control.

Creating counters with Visual Studio

To create the counters in Visual Studio:

1. Make sure you have administrative privileges or are a member of the Performance Monitor Users group.
2. Open Visual Studio.
3. Click on the Server Explorer tab.
4. Expand Servers.
5. Expand your machine.
6. Right-click on Performance Counters and choose Create New Category.
7. Enter Test Counters in the Category Name field.
8. Click on the New button for each of the four counters to add, as listed in the table you saw earlier. Be sure to add the Average Wait Base counter right after Average Wait, to properly associate the two counters.
9. Click on OK when you're done.

This technique is easy. However, you'll need to remember to add the same counters to the production machine when you release new code with new custom counters. Writing a program to create the counters is more work initially, but gives you easier maintenance in the long run. Let's see how to do this.

Creating counters programmatically

From a maintenance point of view, it would be best to create the counters when the web application starts, in the Global.asax file. However, you would then have to make the account under which the application pool runs part of the Performance Monitor Users group. An alternative is to create the counters in a separate console program. An administrator can then run the program to create the counters on the server. Here is the code:

```csharp
using System;
using System.Diagnostics;

namespace CreateCounters
{
    class Program
    {
        static void Main(string[] args)
        {
```

To create a group of counters, you create each one in turn and add them to a CounterCreationDataCollection object:

```csharp
            CounterCreationDataCollection ccdc = new CounterCreationDataCollection();
```

Create the first counter, Nbr Page Hits. Give it a short help message and the counter type. Now, add it to the CounterCreationDataCollection object:

```csharp
            CounterCreationData ccd = new CounterCreationData(
                "Nbr Page Hits", "Total number of page hits",
                PerformanceCounterType.NumberOfItems64);
            ccdc.Add(ccd);
```

Add the second, third, and fourth counters along the same lines:

```csharp
            ccd = new CounterCreationData("Hits / second",
                "Total number of page hits / sec",
                PerformanceCounterType.RateOfCountsPerSecond32);
            ccdc.Add(ccd);

            ccd = new CounterCreationData("Average Wait",
                "Average wait in seconds",
                PerformanceCounterType.AverageTimer32);
            ccdc.Add(ccd);

            ccd = new CounterCreationData("Average Wait Base", "",
                PerformanceCounterType.AverageBase);
            ccdc.Add(ccd);
```

Now, it's time to take the CounterCreationDataCollection object and make it into a category. Because you'll get an exception if you try to create a category that already exists, delete any existing category with the same name now. Because you can't add new counters to an existing category, there is no simple work-around for this:

```csharp
            if (PerformanceCounterCategory.Exists("Test Counters"))
            {
                PerformanceCounterCategory.Delete("Test Counters");
            }
```

Finally, create the Test Counters category. Give it a short help message, and make it a single instance.
You can also make a category multi-instance, which allows you to split the category into instances. Also, pass in the CounterCreationDataCollection object with all the counters. This creates the complete category with all your counters in one go, as shown in the following code:

```csharp
            PerformanceCounterCategory.Create("Test Counters",
                "Counters for test site",
                PerformanceCounterCategoryType.SingleInstance, ccdc);
        }
    }
}
```

Now that you know how to create the counters, let's see how to update them in your code.

Updating counters in your code

To keep things simple, this example uses the counters in a page that simply waits for a second to simulate waiting for an external resource:

```csharp
using System;
using System.Diagnostics;

public partial class _Default : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
```

First, increment the Nbr Page Hits counter. To do this, create a PerformanceCounter object, attaching it to the Nbr Page Hits counter in the Test Counters category. Then, increment the PerformanceCounter object:

```csharp
        PerformanceCounter nbrPageHitsCounter =
            new PerformanceCounter("Test Counters", "Nbr Page Hits", false);
        nbrPageHitsCounter.Increment();
```

Now, do the same with the Hits/second counter. Because you set its type to RateOfCountsPerSecond32 when you generated it in the console program, the counter will automatically give you a rate per second when viewed in perfmon:

```csharp
        PerformanceCounter nbrPageHitsPerSecCounter =
            new PerformanceCounter("Test Counters", "Hits / second", false);
        nbrPageHitsPerSecCounter.Increment();
```

To measure how long the actual operation takes, create a Stopwatch object and start it:

```csharp
        Stopwatch sw = new Stopwatch();
        sw.Start();
```

Execute the simulated operation:

```csharp
        // Simulate actual operation
        System.Threading.Thread.Sleep(1000);
```

Stop the stopwatch:

```csharp
        sw.Stop();
```

Update the Average Wait counter and the associated Average Wait Base counter to record the elapsed time in the stopwatch:

```csharp
        PerformanceCounter waitTimeCounter =
            new PerformanceCounter("Test Counters", "Average Wait", false);
        waitTimeCounter.IncrementBy(sw.ElapsedTicks);

        PerformanceCounter waitTimeBaseCounter =
            new PerformanceCounter("Test Counters", "Average Wait Base", false);
        waitTimeBaseCounter.Increment();
    }
}
```

Now that we've seen how to create and use the most commonly used counters, it's time to retrieve their values.

Viewing custom counters in perfmon

To access your custom counters:

1. On the server, run perfmon from the command prompt. To open the command prompt on Vista, click on Start | All Programs | Accessories | Command Prompt. This opens the monitor window.
2. Expand Monitoring Tools and click on Performance Monitor.
3. Click on the green "plus" sign.
4. In the Add Counters dialog, scroll down to your new Test Counters category.
5. Expand that category and add your new counters.
6. Click on OK.

To see the counters in action, run a load test. If you use WCAT, you could use the files runwcat_testcounters.bat and testcounters_scenario.ubr from the downloaded code bundle.
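If you also want to check the counter values from code rather than through perfmon — for a quick smoke test, for instance — a minimal sketch along the following lines should work. It is not part of the original recipe and assumes the Test Counters category created above already exists on the machine.

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class CounterReader
{
    static void Main()
    {
        // Attach to the custom counters in read-only mode (third argument: true).
        using (var hits = new PerformanceCounter("Test Counters", "Nbr Page Hits", true))
        using (var rate = new PerformanceCounter("Test Counters", "Hits / second", true))
        {
            // Rate counters are calculated between two samples, so the first
            // NextValue() call returns 0; sample again after a short delay.
            rate.NextValue();
            Thread.Sleep(1000);

            Console.WriteLine("Total page hits: {0}", hits.NextValue());
            Console.WriteLine("Hits/second:     {0:F2}", rate.NextValue());
        }
    }
}
```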


Introducing ColdFusion Components

Packt
12 Oct 2010
15 min read
Object-Oriented Programming in ColdFusion

- Break free from procedural programming and learn how to optimize your applications and enhance your skills using objects and design patterns
- Fast-paced, easy-to-follow guide introducing object-oriented programming for ColdFusion developers
- Enhance your applications by building structured applications utilizing basic design patterns and object-oriented principles
- Streamline your code base with reusable, modular objects
- Packed with example code and useful snippets

For those with any experience with ColdFusion, components should be relatively commonplace. Object-Oriented Programming (OOP) relies heavily on the use of ColdFusion components, so before proceeding onto the ins and outs of OOP, let's re-familiarize ourselves with components within ColdFusion.

ColdFusion Components use the same ColdFusion Markup Language (CFML) as 'standard' ColdFusion pages. The core difference is the file extension—components must be saved with a .cfc file extension as opposed to the .cfm file extension used for template pages.

The basic structure of a ColdFusion Component is:

- The component (the page within which you create the code to hold data or perform functions)
- The methods available to run within the CFC, also known as functions

In simple terms, CFCs themselves form a framework within ColdFusion, allowing you to write structured, clear, and organized code. They make application development easier to manage, control, and maintain.

Why use CFCs?

It is not unusual for applications to grow and seem overly complex. Pages containing detailed information, such as business logic, data access and manipulation, data validation, and layout/presentation logic, can become untidy and hard to manage. Creating and developing applications using CFCs enables you to separate the code logic from the design and presentation, and build an application based around, if not using, traditional Model View Controller (MVC) framework methodologies.

Utilizing CFCs and creating a clear, structured format for your code will help reduce the complexity of logic within your pages and improve application speed. Having a clearly structured, well-organized code base will make it easier to develop as an individual and to share resources within a team. This is the instant benefit of CFC development. A well-written CFC will allow you to reuse your functions, or methods, across your entire application, helping to reduce the risk of code duplication. It will keep your component libraries and code base at a more easily manageable size, preventing them from becoming convoluted and difficult to follow.

ColdFusion components are an incredibly powerful and valuable means of creating efficient code. They allow you to:

- Share properties and variables between other methods and functions
- Share and interact with functions contained within other CFCs
- Inherit the properties and methods of a base component
- Overwrite methods and functions within other components

CFCs also give you the ability to clearly document and comment your code, letting you and other developers know what each function and property should do, what it should expect to receive to do the job, and what output it will give you. ColdFusion components are able to read themselves and display this data to you, using a form of introspection.
Although CFCs are an effective tool for code reuse, this is not to say they should be used for every reusable function within your application. They are not a complete replacement for custom tags and user-defined functions. When you load a CFC (instantiate the component), this uses up more processing time than it would to call a custom tag or a User-Defined Function (UDF). Once a CFC has been instantiated, however, calling a method or function within the component will take approximately the same time as it would to call a UDF.

It is important, therefore, that CFCs should not necessarily be used as a complete replacement for any UDFs or custom tags that you have in your application. Any code you write can, of course, be optimized, and changes can be made as you learn new things, but UDFs and custom tags perform perfectly well. Using them as they are will help to keep any processing overheads on your application to a minimum.

Grouping your functions

You may have already written custom tags and user-defined functions that provide similar functionality and reusability, for example, a series of UDFs that interact with a shopping cart. By grouping your functions within specific components according to their use and purpose, you can successfully keep your code library organized and more efficient. You can also further clean your code library by compiling or grouping multiple related components into a package, clearly named and stored in a directory within your application.

Organizing your components

A typical method for organizing your CFC library is to create a directory structure based on your company or domain name, followed by a directory whose name references the purpose of the included components, for example, 'com.coldfumonkeh.projecttracker' in the webroot of your application. Within this directory, you would then create a directory for each group (or package) of components, with a name reflecting or matching the component name and purpose.

Use your ColdFusion Components to create a component structure, or a library, that contains grouped methods and functions, particularly if the methods share properties or data.

The ColdFusion component tags

You can use the following tags to create a ColdFusion Component.

| Tag | Purpose |
| --- | --- |
| cfcomponent | The core CFC tag that defines the component structure. All other content in the component is wrapped within this tag. |
| cffunction | Creates a method (function) within the component. |
| cfargument | Creates a parameter, otherwise known as an argument, to be sent to the function. |
| cfproperty | Can be used to define and document the properties within your component. Can also be used to define variables within a CFC that is used as a web service. |

These tags are written within the .cfc file that defines the ColdFusion component.

In the world of object-oriented programming, you will commonly hear or see reference to the word 'Class'. A class is essentially a blueprint that is used to instantiate an object, and typically contains methods and instance variables. When discussing a class in the context of ColdFusion development, we are basically referencing a ColdFusion component, so when you see or read about classes, remember that it is essentially an alias for a CFC.

Our first component

To get started, in this example we will create a component and functions to output the message "Hello world". Create a new file called greetings.cfc and save it within your ColdFusion webroot.
The following is a component base tag; add this code into the new CFC to define the component:

```cfml
<cfcomponent displayName="greetings">
</cfcomponent>
```

Listing 1.1 – component base tags

As you can see, the name attribute within the CFC matches the name of the file. The cfcomponent tags form the base structure of our ColdFusion Component. No other code can be placed outside of these tags, as it will simply display an error. It may be helpful to think of the cfcomponent tag as the wrapping paper on a parcel. It forms the outer shell of the package, holding everything else nicely in place.

Defining a method

We have now created the component, but at the moment it does not actually do anything. It has no function to run. We need to add a method into the CFC to create a function to call and use within our application. The following code is a basic function definition; place it between the opening and closing cfcomponent tags:

```cfml
<cffunction name="sayHello">
    <!--- the CFML code for the method will go here --->
</cffunction>
```

Listing 1.2 – basic function definition

You have now added a method to the CFC. The cffunction tags are nested within the cfcomponent tags. We now need to add some CFML code within the cffunction tags to create our method and perform the operation. Let's create a variable within the function that will be our display message. The following code declares a string variable; place it inside the cffunction tags:

```cfml
<cffunction name="sayHello">
    <cfset var strHelloMessage = 'Hello World!' />
</cffunction>
```

Listing 1.3 – declaring a string variable

We have created a string variable containing the text to display to the browser.

Returning the data

To return the data, we need to add an extra tag into the method. This is possible by using the cfreturn tag, which returns results from a component method. The cfreturn tag has one required attribute, which is the expression or value you wish to return. Add the following code to your CFC so our method will return the welcome message; the completed component will look like this:

```cfml
<cfcomponent displayName="greetings">
    <cffunction name="sayHello">
        <cfset var strHelloMessage = 'Hello World!' />
        <cfreturn strHelloMessage />
    </cffunction>
</cfcomponent>
```

Listing 1.4 – returning data from the function

ColdFusion 9 scripted components

Since the release of ColdFusion 9, developers also have the ability to write ColdFusion components in complete script syntax instead of pure tag form. To write the previous component in this format, the code would look as follows:

```cfml
component displayname="greetings" {
    function sayHello(){
        // the CFML code for the method will go here
        var strHelloMessage = 'Hello World';
        return strHelloMessage;
    }
}
```

Listing 1.5 – component declaration in the script syntax

Although written using cfscript syntax, there is no requirement to wrap the code within <cfscript> tags; instead, we can write it directly within the .cfc page. We do not even need to contain the code within cfcomponent tags, as the entire content of the component will be compiled as cfscript if left as plain text without tags.

Creating your object

There it is, a simple ColdFusion Component. The method is created using the cffunction tags, wrapped up nicely within the cfcomponent tags, and the value is returned using the cfreturn tag. Now that we have written the function, how do we call it? In this example, we will call the component and run the method by using the createObject() function.
Create a new file called hello.cfm and add the following code to the template:

```cfml
<cfset objGreeting = createObject('component', 'greetings') />
<cfoutput>#objGreeting.sayHello()#</cfoutput>
```

Listing 1.6 – creating the component object

In the previous code, we have created an instance of the greetings CFC, which we can reference by using the objGreeting variable. We have then accessed the sayHello() method within the component, surrounded by cfoutput tags, to display the returned data. Save the file and view it within your browser. You should now see the welcome message that we created within the method.

Restricting your functions to scopes

Imagine we are sending some data through to a login page in our application within the URL scope: the first and last name of a particular person. On the page, we want to join the two values and combine them into one string to form the individual's full name. We could write the code directly on the page, as follows:

```cfml
<cfoutput>
    Hello, #URL.firstName# #URL.lastName#
</cfoutput>
```

Listing 1.7 – displaying URL variables as a string

Although this works, you can revise the code and transform it into a ColdFusion function to concatenate the two values into the required single string and return that value:

```cfml
<cffunction name="getName">
    <cfset var strFullName = URL.firstName & ' ' & URL.lastName />
    <cfreturn strFullName />
</cffunction>
```

Listing 1.8 – concatenate variables into string

You can then call this function within your .cfm page to output the resulting string from the function:

```cfml
<cfoutput>
    #getName()#
</cfoutput>
```

However, within this code you have restricted yourself to using only the specific URL scope. What if the first name and last name values were in the FORM scope, or pulled from a query? This block of code is useful only for values within the URL scope.

Using arguments within your methods

To allow us to pass any parameters into the getName() function, we need to use the cfargument tag to send data into the method. By changing the function as in the following code example, the method will create the concatenated string and produce the same results from any two parameters, or arguments, that you choose to pass in:

```cfml
<cffunction name="getName">
    <cfargument name="firstName" type="string" />
    <cfargument name="lastName" type="string" />
    <cfset var strFullName = arguments.firstName & ' ' & arguments.lastName />
    <cfreturn strFullName />
</cffunction>
```

Listing 1.10 – using arguments within your function

The cfargument tag creates a parameter definition within the component method, and allows you to send in arguments for inclusion in the functions.

The Arguments scope

The Arguments scope only exists in a method. The scope contains any variables that you have passed into that method, and you can access the variables within the Arguments scope in the following ways:

- using structure notation: Arguments.variablename or Arguments["variablename"]
- using array notation: Arguments[1]

The Arguments scope does not persist between calls to available CFC methods, meaning that you cannot access a value within the Arguments scope in one function from inside a different function.

Redefine the function parameters

By defining two arguments and sending in the values for the first and last names, you have created an unrestricted function that is not tied to a specific scope or set of hardcoded values.
You can instead choose what values to pass into it on your calling page:

```cfml
<cfoutput>
    #getName('Gary', 'Brown')#
</cfoutput>
```

Listing 1.11a – sending parameters into our function

Now that we have removed any restrictions on the values we pass in, and taken away any references to hardcoded variables, we can reuse this function, sending in whichever values or variables we choose. For example, we could use variables from the FORM scope, the URL scope, or query items to concatenate the string:

```cfml
<cfoutput>
    #getName(form.firstName, form.lastName)#
</cfoutput>
```

Listing 1.11b – sending parameters into our function

Let's take our getName() method and add it into the greetings.cfc file. By doing so, we are grouping two methods that have a similarity in purpose into one component. This is good programming practice and will aid in creating manageable and clearly organized code. Our greetings.cfc should now look like this:

```cfml
<cfcomponent name="greetings">
    <cffunction name="sayHello">
        <cfset var strHelloMessage = 'Hello World!' />
        <cfreturn strHelloMessage />
    </cffunction>

    <cffunction name="getName">
        <cfargument name="firstName" type="string" />
        <cfargument name="lastName" type="string" />
        <cfset var strFullName = arguments.firstName & ' ' & arguments.lastName />
        <cfreturn strFullName />
    </cffunction>
</cfcomponent>
```

Listing 1.12 – revised greetings.cfc

Combining your methods

As we have seen, you can easily access the methods within a defined CFC and output the data in a .cfm template page. You can also easily access the functionality of one method in a CFC from another method. This is particularly useful when your component definition contains grouped functions that may have a relationship based upon their common purpose.

To show this, let's create a new method that will use the results from both of our existing functions within the greetings.cfc file. Instead of displaying a generic "Hello World" message, we will incorporate the returned data from the getName() method and display a personalized greeting. Create a new method within the CFC, called personalGreeting:

```cfml
<cffunction name="personalGreeting">
    <cfargument name="firstName" type="string" />
    <cfargument name="lastName" type="string" />
    <cfscript>
        strHello = sayHello();
        strFullName = getName(firstName=arguments.firstName, lastName=arguments.lastName);
        strHelloMessage = strHello & ' My name is ' & strFullName;
    </cfscript>
    <cfreturn strHelloMessage />
</cffunction>
```

Listing 1.13 – personalGreeting method

Within this method, we are calling our two previously defined methods. The returned value from the sayHello() method is stored as a string variable, strHello. We then retrieve the returned value from the getName() method and store it in a string variable, strFullName. As we have written the getName() function to accept two arguments to form the concatenated name string, we also need to add the same two arguments to the personalGreeting() method, as done in the previous code. They will then be passed through to the getName() method in exactly the same way as if we were calling that function directly.

Using the two variables that now hold the returned data, we create our strHelloMessage variable, which joins the two values and is then returned from the method using the cfreturn tag.

In this method, we used CFScript instead of CFML and cfset tags, which were used in our previous functions. There is no hard and fast rule for this. You can use whichever coding style you find the most comfortable.
Let's call this method on our hello.cfm template page, using the following code:

```cfml
<!--- instantiate the component --->
<cfset objGreeting = createObject('component', 'greetings') />

<!--- access the method and assign results to a string --->
<cfset strPersonalGreeting = objGreeting.personalGreeting(
    firstName="Gary",
    lastName="Brown") />

<cfoutput>#strPersonalGreeting#</cfoutput>
```

Listing 1.14 – calling the personalGreeting method

We are sending in the same arguments that we were passing through to the original getName() method, in the same way. This time we are passing them through using the newly created personalGreeting() method. You should now see a personalized greeting message displayed in your browser.


Using the Fluent NHibernate Persistence Tester and the Ghostbusters Test

Packt
06 Oct 2010
3 min read
NHibernate 3.0 Cookbook

- Get solutions to common NHibernate problems to develop high-quality performance-critical data access applications
- Master the full range of NHibernate features
- Reduce hours of application development time and get better application architecture and performance
- Create, maintain, and update your database structure automatically with the help of NHibernate
- Written and tested for NHibernate 3.0 with input from the development team, distilled into easily accessible concepts and examples
- Part of Packt's Cookbook series: each recipe is a carefully organized sequence of instructions to complete the task as efficiently as possible

The reader would benefit from reading the previous article on Testing Using NHibernate Profiler and SQLite.

Using the Fluent NHibernate Persistence Tester

Mappings are a critical part of any NHibernate application. In this recipe, I'll show you how to test those mappings using Fluent NHibernate's Persistence tester.

Getting ready

Complete the Fast testing with SQLite in-memory database recipe mentioned in the previous article.

How to do it...

1. Add a reference to FluentNHibernate.
2. In PersistenceTests.cs, add the following using statement:

```csharp
using FluentNHibernate.Testing;
```

3. Add the following three tests to the PersistenceTests fixture:

```csharp
[Test]
public void Product_persistence_test()
{
    new PersistenceSpecification<Product>(Session)
        .CheckProperty(p => p.Name, "Product Name")
        .CheckProperty(p => p.Description, "Product Description")
        .CheckProperty(p => p.UnitPrice, 300.85M)
        .VerifyTheMappings();
}

[Test]
public void ActorRole_persistence_test()
{
    new PersistenceSpecification<ActorRole>(Session)
        .CheckProperty(p => p.Actor, "Actor Name")
        .CheckProperty(p => p.Role, "Role")
        .VerifyTheMappings();
}

[Test]
public void Movie_persistence_test()
{
    new PersistenceSpecification<Movie>(Session)
        .CheckProperty(p => p.Name, "Movie Name")
        .CheckProperty(p => p.Description, "Movie Description")
        .CheckProperty(p => p.UnitPrice, 25M)
        .CheckProperty(p => p.Director, "Director Name")
        .CheckList(p => p.Actors, new List<ActorRole>()
        {
            new ActorRole() { Actor = "Actor Name", Role = "Role" }
        })
        .VerifyTheMappings();
}
```

4. Run these tests with NUnit.

How it works...

The Persistence tester in Fluent NHibernate can be used with any mapping method. It performs the following four steps:

1. Create a new instance of the entity (Product, ActorRole, Movie) using the values provided.
2. Save the entity to the database.
3. Get the entity from the database.
4. Verify that the fetched instance matches the original.

At a minimum, each entity type should have a simple persistence test, such as the ones shown previously (a further sketch follows this recipe). More information about the Fluent NHibernate Persistence tester can be found on their wiki at http://wiki.fluentnhibernate.org/Persistence_specification_testing

See also

- Testing with the SQLite in-memory database
- Using the Ghostbusters test
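Beyond simple properties and lists, the persistence tester can also verify many-to-one references. The following sketch is not from the book: Publisher is a hypothetical entity added purely for illustration, and Session is assumed to come from the same fixture used in the recipe.

```csharp
using FluentNHibernate.Testing;
using NUnit.Framework;

// Sketch only: this test would be added to the existing PersistenceTests
// fixture, which already provides the Session used below. Publisher is a
// hypothetical many-to-one reference, not part of the book's Eg.Core model.
public partial class PersistenceTests
{
    [Test]
    public void Movie_publisher_reference_persistence_test()
    {
        new PersistenceSpecification<Movie>(Session)
            .CheckProperty(m => m.Name, "Movie Name")
            // CheckReference saves the referenced Publisher first, then
            // verifies the association survives a save/reload round trip.
            .CheckReference(m => m.Publisher, new Publisher { Name = "Publisher Name" })
            .VerifyTheMappings();
    }
}
```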


NHibernate 3.0: Testing Using NHibernate Profiler and SQLite

Packt
06 Oct 2010
6 min read
NHibernate 3.0 Cookbook

- Get solutions to common NHibernate problems to develop high-quality performance-critical data access applications
- Master the full range of NHibernate features
- Reduce hours of application development time and get better application architecture and performance
- Create, maintain, and update your database structure automatically with the help of NHibernate
- Written and tested for NHibernate 3.0 with input from the development team, distilled into easily accessible concepts and examples
- Part of Packt's Cookbook series: each recipe is a carefully organized sequence of instructions to complete the task as efficiently as possible

Using NHibernate Profiler

NHibernate Profiler from Hibernating Rhinos is the number one tool for analyzing and visualizing what is happening inside your NHibernate application, and for discovering issues you may have. In this recipe, I'll show you how to get up and running with NHibernate Profiler.

Getting ready

Download NHibernate Profiler from http://nhprof.com, and unzip it. As it is a commercial product, you will also need a license file. You may request a 30-day trial license from the NHProf website.

Using our Eg.Core model, set up a new NHibernate console application with log4net.

How to do it...

1. Add a reference to HibernatingRhinos.Profiler.Appender.dll from the NH Profiler download.
2. In the session-factory element of App.config, set the property generate_statistics to true.
3. Add the following code to your Main method:

```csharp
log4net.Config.XmlConfigurator.Configure();
HibernatingRhinos.Profiler.Appender.NHibernate.NHibernateProfiler.Initialize();

var nhConfig = new Configuration().Configure();
var sessionFactory = nhConfig.BuildSessionFactory();

using (var session = sessionFactory.OpenSession())
{
    var books = from b in session.Query<Book>()
                where b.Author == "Jason Dentler"
                select b;

    foreach (var book in books)
        Console.WriteLine(book.Name);
}
```

4. Run NHProf.exe from the NH Profiler download, and activate the license.
5. Build and run your console application.
6. Check the NH Profiler. Notice the gray dots indicating alerts next to Session #1 and Recent Statements.
7. Select Session #1 from the Sessions list in the top left pane.
8. Select the statement from the top right pane and notice the SQL statement it produced.
9. Click on See the 1 row(s) resulting from this statement. Enter your database connection string in the field provided, and click on OK. Close the query results window.
10. Switch to the Alerts tab, and notice the alert: Use of implicit transaction is discouraged. Click on the Read more link for more information and suggested solutions to this particular issue.
11. Switch to the Stack Trace tab.
12. Double-click on the NHProfTest.NHProfTest.Program.Main stack frame to jump to that location inside Visual Studio.
13. Using the following code, wrap the foreach loop in a transaction and commit the transaction:

```csharp
using (var tx = session.BeginTransaction())
{
    foreach (var book in books)
        Console.WriteLine(book.Name);
    tx.Commit();
}
```

14. In NH Profiler, right-click on Sessions in the top left pane, and select Clear All Sessions.
15. Build and run your application.
16. Check NH Profiler for alerts.

How it works...

NHibernate Profiler uses a custom log4net appender to capture data about NHibernate activities inside your application and transmit that data to the NH Profiler application.
Setting generate_statistics allows NHibernate to capture many key data points. These statistics are displayed in the lower left-hand pane of NHibernate Profiler.

We initialize NHibernate Profiler with a call to NHibernateProfiler.Initialize(). For best results, do this when your application begins, just after you have configured log4net.

There's more...

NHibernate Profiler also supports offline and remote profiling, as well as command-line options for use with build scripts and continuous integration systems. In addition to NHibernate warnings and errors, NH Profiler alerts us to 12 common misuses of NHibernate, which are as follows:

- Transaction disposed without explicit rollback or commit: If no action is taken, transactions will roll back when disposed. However, this often indicates a missing commit rather than a desire to roll back the transaction.
- Using a single session on multiple threads is likely a bug: A session should only be used by one thread at a time. Sharing a session across threads is usually a bug, not an explicit design choice with proper locking.
- Use of implicit transaction is discouraged: Nearly all session activity should happen inside an NHibernate transaction.
- Excessive number of rows: In nearly all cases, this indicates a poorly designed query or a bug.
- Large number of individual writes: This indicates a failure to batch writes, either because adonet.batch_size is not set, or possibly because an identity-type POID generator is used, which effectively disables batching.
- Select N+1: This alert indicates a particular type of anti-pattern where, typically, we load and enumerate a list of parent objects, lazy-loading their children as we move through the list. Instead, we should eagerly fetch those children before enumerating the list (see the sketch after this recipe).
- Superfluous updates, use inverse="true": NH Profiler detected an unnecessary update statement from a bi-directional one-to-many relationship. Use inverse="true" on the many side (list, bag, set, and others) of the relationship to avoid this.
- Too many cache calls per session: This alert is targeted particularly at applications using a distributed (remote) second-level cache. By design, NHibernate does not batch calls to the cache, which can easily lead to hundreds of slow remote calls. It can also indicate an over-reliance on the second-level cache, whether remote or local.
- Too many database calls per session: This usually indicates a misuse of the database, such as querying inside a loop, a select N+1 bug, or an excessive number of writes.
- Too many joins: A query contains a large number of joins. When executed in a batch, multiple simple queries with only a few joins often perform better than a complex query with many joins. This alert can also indicate unexpected Cartesian products.
- Unbounded result set: NH Profiler detected a query without a row limit. When the application is moved to production, these queries may return huge result sets, leading to catastrophic performance issues. As insurance against these issues, set a reasonable maximum on the rows returned by each query.
- Different parameter sizes result in inefficient query plan cache usage: NH Profiler detected two identical queries with different parameter sizes. Each of these queries will create a query plan. This problem grows exponentially with the size and number of parameters used. Setting prepare_sql to true allows NHibernate to generate queries with consistent parameter sizes.

See also

- Configuring NHibernate with App.config
- Configuring log4net logging
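To make the Select N+1 remedy concrete, here is a hedged sketch, not from the book, of eagerly fetching a collection with NHibernate 3.0's LINQ provider instead of lazy-loading it inside a loop; Movie and its Actors collection are borrowed from the Eg.Core model used elsewhere in these recipes.

```csharp
using System.Collections.Generic;
using System.Linq;
using NHibernate;
using NHibernate.Linq;

public static class MovieQueries
{
    // Eagerly fetch each movie's Actors collection as part of the original
    // query, so enumerating the movies afterwards issues no extra SELECTs.
    public static IList<Movie> GetMoviesWithActors(ISession session)
    {
        return session.Query<Movie>()
                      .FetchMany(m => m.Actors)
                      .ToList();
    }
}
```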


Getting Started with JavaFX

Packt
05 Oct 2010
11 min read
JavaFX 1.2 Application Development Cookbook

- Over 60 recipes to create rich Internet applications with many exciting features
- Easily develop feature-rich internet applications to interact with the user using various built-in components of JavaFX
- Make your application visually appealing by using various JavaFX classes—ListView, Slider, ProgressBar—to display your content and enhance its look with the help of CSS styling
- Enhance the look and feel of your application by embedding multimedia components such as images, audio, and video
- Part of Packt's Cookbook series: each recipe is a carefully organized sequence of instructions to complete the task as efficiently as possible

Using javafxc to compile JavaFX code

While it certainly makes it easier to build JavaFX with the support of an IDE (see the NetBeans and Eclipse recipes), it is not a requirement. In some situations, having direct access to the SDK tools is preferred (for an automated build, for instance). This recipe explores the build tools that are shipped with the JavaFX SDK and provides steps to show you how to manually compile your applications.

Getting ready

To use the SDK tools, you will need to download and install the JavaFX SDK. See the recipe Installing the JavaFX SDK for instructions on how to do it.

How to do it...

Open your favorite text/code editor and type the following code. The full code is available from ch01/source-code/src/hello/HelloJavaFX.fx.

```
package hello;

import javafx.stage.Stage;
import javafx.scene.Scene;
import javafx.scene.text.Text;
import javafx.scene.text.Font;

Stage {
    title: "Hello JavaFX"
    width: 250
    height: 80
    scene: Scene {
        content: [
            Text {
                font: Font { size: 16 }
                x: 10
                y: 30
                content: "Hello World!"
            }
        ]
    }
}
```

Save the file at location hello/HelloJavaFX.fx. To compile the file, invoke the JavaFX compiler from the command line, one directory up from where the file is stored (for this example, it would be executed from the src directory):

```
javafxc hello/HelloJavaFX.fx
```

If your compilation command works properly, you will not get any messages back from the compiler. You will, however, see the file HelloJavaFX.class created by the compiler in the hello directory. If, however, you get a "file not found" error during compilation, ensure that you have properly specified the path to the HelloJavaFX.fx file.

How it works...

The javafxc compiler works in similar ways to your regular Java compiler. It parses and compiles the JavaFX script into Java byte code with the .class extension. javafxc accepts numerous command-line arguments to control how and what sources get compiled, as shown in the following command:

```
javafxc [options] [sourcefiles] [@argfiles]
```

where options are your command-line options, followed by one or more source files, which can be followed by a list of argument files. Below are some of the more commonly used javafxc arguments:

- classpath (-cp): the classpath option specifies the locations (separated by a path separator character) where the compiler can find class files and/or library jar files that are required for building the application.

```
javafxc -cp .:lib/mylibrary.jar MyClass.fx
```

- sourcepath: in a more complicated project structure, you can use this option to specify one or more locations where the compiler should search for source files and satisfy source dependencies.

```
javafxc -cp . -sourcepath .:src:src1:src2 MyClass.fx
```

- -d: with this option, you can set the target directory where compiled class files are to be stored.
The compiler will create the package structure of the class under this directory and place the compiled JavaFX classes accordingly.

```
javafxc -cp . -d build MyClass.fx
```

- @argfiles: this option lets you specify a file which can contain javafxc command-line arguments. When the compiler is invoked and an @argfile is found, it uses the content of the file as an argument for javafxc. This can help shorten tediously long arguments into short, succinct commands.

Assume file cmdargs has the following content:

```
-d build
-cp .:lib/api1.jar:lib/api2.jar:lib/api3.jar
-sourcepath core/src:components/src:tools/src
```

Then you can invoke javafxc as:

```
$> javafxc @cmdargs
```

See also

- Installing the JavaFX SDK

Creating and using JavaFX classes

JavaFX is an object-oriented scripting language. As such, object types, represented as classes, are part of the basic constructs of the language. This section shows how to declare, initialize, and use JavaFX classes.

Getting ready

If you have used other scripting languages such as ActionScript, JavaScript, Python, or PHP, the concepts presented in this section should be familiar. If you have no idea what a class is or what it should be, just remember this: a class is code that represents a logical entity (tree, person, organization, and so on) that you can manipulate programmatically or while using your application. A class usually exposes properties and operations to access the state or behavior of the class.

How to do it...

Let's assume we are building an application for a dealership. You may have a class called Vehicle to represent cars and other types of vehicles processed in the application. The next code example creates the Vehicle class. Refer to ch01/source-code/src/javafx/Vehicle.fx for a full listing of the code presented here.

Open your favorite text editor (or fire up your favorite IDE) and type the following class declaration:

```
class Vehicle {
    var make;
    var model;
    var color;
    var year;

    function drive () : Void {
        println("You are driving a "
            "{year} {color} {make} {model}!")
    }
}
```

Once your class is properly declared, it is now ready to be used. To use the class, add the following code to the file:

```
class Vehicle {...}

var vehicle = Vehicle {
    year: 2010
    color: "Grey"
    make: "Mini"
    model: "Cooper"
};
vehicle.drive();
```

Save the file as Vehicle.fx. Now, from the command line, compile it with:

```
$> javafxc Vehicle.fx
```

If you are using an IDE, you can simply right-click on the file to run it. When the code executes, you should see:

```
$> You are driving a 2010 Grey Mini Cooper!
```

How it works...

The previous snippet shows how to declare a class in JavaFX. Albeit a simple class, it shows the basic structure of a JavaFX class. It has properties represented by variable declarations:

```
var make;
var model;
var color;
var year;
```

and it has a function:

```
function drive () : Void {
    println("You are driving a "
        "{year} {color} {make} {model}!")
}
```

which can update the properties and/or modify the behavior (for details on JavaFX functions, see the recipe Creating and Using JavaFX functions). In this example, when the function is invoked on a vehicle object, it causes the object to display information about the vehicle on the console prompt.

Object literal initialization

Another aspect of JavaFX class usage is object declaration. JavaFX supports object literal declaration to initialize a new instance of the class.
This format lets developers declaratively create a new instance of a class using the class's literal representation and pass property literal values directly into the initialization block for the object's named public properties.

```
var vehicle = Vehicle {
    year: 2010
    color: "Grey"
    make: "Mini"
    model: "Cooper"
};
```

The previous snippet declares the variable vehicle and assigns to it a new instance of the Vehicle class with year = 2010, color = Grey, make = Mini, and model = Cooper. The values that are passed in the literal block overwrite the default values of the named public properties.

There's more...

The JavaFX class definition mechanism does not support a constructor as in languages such as Java and C#. However, to allow developers to hook into the life cycle of the object's instance-creation phase, JavaFX exposes a specialized code block called init{} to let developers provide custom code which is executed during object initialization.

Initialization block

Code in the init block is executed as one of the final steps of object creation, after the properties declared in the object literal are initialized. Developers can use this facility to initialize values and resources that the new object will need. To illustrate how this works, the previous code snippet has been modified with an init block. You can get the full listing of the code at ch01/source-code/src/javafx/Vehicle2.fx.

```
class Vehicle {
    ...
    init {
        color = "Black";
    }
    function drive () : Void {
        println("You are driving a "
            "{year} {color} {make} {model}!");
    }
}

var vehicle = Vehicle {
    year: 2010
    make: "Mini"
    model: "Cooper"
};
vehicle.drive();
```

Notice that the object literal declaration of object vehicle no longer includes the color declaration. Nevertheless, the value of property color will be initialized to Black in the init{} code block during the object's initialization. When you run the application, it should display:

```
You are driving a 2010 Black Mini Cooper!
```

See also

- Declaring and using variables in JavaFX
- Creating and using JavaFX functions

Creating and using variables in JavaFX

JavaFX is a statically type-safe and type-strict scripting language. Therefore, variables (and anything which can be assigned to a variable, including functions and expressions) in JavaFX must be associated with a type, which indicates the expected behavior and representation of the variable. This section explores how to create, initialize, and update JavaFX variables.

Getting ready

Before we look at creating and using variables, it is beneficial to have an understanding of what is meant by data type and to be familiar with some common data types such as String, Integer, Float, and Boolean. If you have written code in other scripting languages such as ActionScript, Python, and Ruby, you will find the concepts in this recipe easy to understand.

How to do it...

JavaFX provides two ways of declaring variables: the def and the var keywords.

```
def X_STEP = 50;
println (X_STEP);
X_STEP++; // causes error

var x : Number;
x = 100;
...
x = x + X_STEP;
```

How it works...

In JavaFX, there are two ways of declaring a variable:

- def: The def keyword is used to declare and assign constant values. Once a variable is declared with the def keyword and assigned a value, it is not allowed to be reassigned a new value.
- var: The var keyword declares variables which are able to be updated at any point after their declaration.

There's more...

All variables must have an associated type. The type can be declared explicitly or be automatically coerced by the compiler.
Unlike Java (and similar to ActionScript and Scala), the type of the variable follows the variable's name, separated by a colon.

```
var location:String;
```

Explicit type declaration

The following code specifies the type (class) that the variable will receive at runtime:

```
var location:String;
location = "New York";
```

The compiler also supports a short-hand notation that combines declaration and initialization:

```
var location:String = "New York";
```

Implicit coercion

In this format, the type is left out of the declaration. The compiler automatically converts the variable to the proper type based on the assignment:

```
var location;
location = "New York";
```

Variable location will automatically receive a type of String during compilation because the first assignment is a string literal. Or, the short-hand version:

```
var location = "New York";
```

JavaFX types

Similar to other languages, JavaFX supports a complete set of primitive types, as listed:

- :String — this type represents a collection of characters contained within quotes (double or single, as shown below). Unlike Java, the default value for String is empty ("").

```
"The quick brown fox jumps over the lazy dog"
'The quick brown fox jumps over the lazy dog'
```

- :Number — this is a numeric type that represents all numbers with decimal points. It is backed by the 64-bit double-precision floating point Java type. The default value of Number is 0.0.

```
0.01234
100.0
1.24e12
```

- :Integer — this is a numeric type that represents all integral numbers. It is backed by the 32-bit integer Java type. The default value of an Integer is 0.

```
-44
700
0xFF
```

- :Boolean — as the name implies, this type represents the binary value of either true or false.
- :Duration — this type represents a unit of time. You will encounter its use heavily in animation and other instances where temporal values are needed. The supported units include ms, s, m, and h for millisecond, second, minute, and hour respectively.

```
12ms
4s
12h
0.5m
```

- :Void — this type indicates that an expression or a function returns no value. The literal representation of Void is null.

Variable scope

Variables can have three distinct scopes, which implicitly indicates the access level of the variable when it is being used.

Script level

Script variables are defined at any point within the JavaFX script file outside of any code block (including class definitions). When a script-level variable is declared, by default it is globally visible within the script and is not accessible from outside the script (without additional access modifiers).

Instance level

A variable that is defined at the top level of a class is referred to as an instance variable. An instance variable is visible within the class by the class members and can be accessed by creating an instance of the class.

Local level

The least visible scope is local variables. They are declared within code blocks such as functions, and are visible only to members within the block.


Checking OpenStreetMap Data for Problems

Packt
27 Sep 2010
7 min read
OpenStreetMap: Be your own cartographer

- Collect data for the area you want to map with this OpenStreetMap book and eBook
- Create your own custom maps to print or use online following our proven tutorials
- Collaborate with other OpenStreetMap contributors to improve the map data
- Learn how OpenStreetMap works and why it's different to other sources of geographical information with this professional guide

It's important to remember that there are few fixed ideas of what is "wrong" data in OpenStreetMap. It should certainly be an accurate representation of the real world, but that's not something an automatic data-checking tool can detect. There may be typographical errors in tags that prevent them from being recognized, but there are also undocumented tags that may accurately describe a feature, yet be unknown to anyone except the mapper who used them. The latter is fine, but the former is a problem.

It's tempting to use the two map renderings on openstreetmap.org as a debugging tool, but this can be misleading. Not every possible feature is rendered, and many problems with the data, such as duplicate nodes or unjoined ways, won't be obvious from a rendered map. If a feature you've mapped doesn't render when a similarly tagged one does, there's an issue, but a feature appearing in the map doesn't mean it's free of problems, and a feature that doesn't appear isn't necessarily wrong. Ultimately, you will have to use your own judgment to find out whether or not an issue reported by one of these tools is really an error in the data. You can always contact other members of the OpenStreetMap community.

This is only a selection of the more widely used quality assurance tools used by mappers. For a more complete list, refer to http://wiki.openstreetmap.org/wiki/Quality_Assurance.

Inspecting data with openstreetmap.org's data overlay and browser

The openstreetmap.org website has a range of tools you can use to inspect the data in the database, both current and past. Some of the tools aren't obvious from the front page of the site, but are easily found if you know where they are. The tools, which consist of the data map overlay and the data browser pages, allow you to see the details of any object in the OpenStreetMap database, including coordinates, tags, and editing history, without the need to launch an editor or read raw XML.

As these tools work directly with the data in the OpenStreetMap database, they always show the most up-to-date information available. However, they simply provide raw information, and don't provide any guidance on whether the geometry or tagging of any feature could be problematic.

The easiest way of inspecting data is to start with the data map overlay. Go to the map view and find Compton (or any other area you want to inspect). Open the layer chooser by clicking on the + sign at the top-right. Click the checkbox labeled Data, and a box will appear to the left of the map view. After a short delay, the data overlay will appear, and a list of objects will appear in the box.

JavaScript speed and the OpenStreetMap data overlay

The data overlay and the accompanying list of objects make heavy use of JavaScript in your browser and, depending on how many objects are currently in your map view, can use a lot of processing power and memory. Some older browsers may struggle to even show the data overlay.
Mappers have reported that Firefox 3.5, Apple Safari, and Google Chrome all work well with the data overlay, even with large numbers of objects.

Once the data for the area you're inspecting has loaded, you'll see the Object list on the left, which gives a text description of every feature in the current map view, showing its primitive type and either its ID number or a name, if the feature has one. On the right is the map with the data overlay, which highlights every feature in the current area, whether rendered on the map or not. This last point is worth repeating: not every type of feature gets rendered on the two map renderings used on openstreetmap.org, and those that do can take some time to appear if the load on the rendering engines is high. Any feature in the database will always appear in the data overlay.

Inspecting a single feature

To inspect an individual feature, either click on its entry in the object list, or on its highlight in the map view. Both the object list and the overlay will change to reflect this. Occasionally, an area feature may get drawn on top of other features, preventing you from selecting the ones underneath, but you'll still be able to select them from the list.

Let's select The Street and inspect its data. Either click on its name in the object list, or on the way in the map view, and the object list should change to show the tags applied to the feature. If you click on Show History, a list of the edits made to the current feature is added to the list. To get more information, click on the Details link next to the feature's name, and you'll be taken to the data browser page for that object.

Here you see far more details about the feature we're inspecting. Apart from the object ID and its name, you can find the time when the object was last edited and by whom, and in which changeset. There are clickable links to any related objects and a map showing the feature's location. At the bottom of the page are links to the raw XML of the feature, the history page of the feature, and a link to launch Potlatch—the online editor—for the area surrounding the feature.

Checking a feature's editing history

The OpenStreetMap database keeps every version of every feature created, so you can inspect previous versions and see when and how a feature has changed. To look at a feature's history, click on the link at the bottom of its data browser page. For the Watts Gallery in Compton, you can see each version of the object listed in full, including which mapper created that version in which changeset, and what the tags for that version were. There's currently no way of showing any previous version or the changes between versions on the map, but third-party tools such as OSM Mapper provide some of these features.

Inspecting changesets

Along with looking at individual features, you can see how the map gets changed by looking at changesets. Since version 0.6 of the OpenStreetMap API went live in April 2009, every change to the map has to be part of a changeset. A changeset is a list of related edits made to OpenStreetMap data, with its own set of tags. What goes into a changeset is entirely up to the mapper creating it. You can view the list of recent changesets by clicking on the History tab at the top of the map view.
This will show a list of the 20 most recent changesets whose bounding box intersects your current map view. Note that this doesn't guarantee that any changesets listed include any edits in your current view, and any changesets covering a large area will be marked with (big) in the list.
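If you prefer to script this kind of inspection rather than click through the data browser, the same information is exposed by the OpenStreetMap API's read-only calls. The following C# sketch is illustrative only: it assumes the standard API 0.6 paths for fetching a way and its history, and the object ID used is made up, so substitute the ID of the feature you are inspecting.

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    class OsmInspector
    {
        // Base URL of the public OpenStreetMap API (read calls need no authentication).
        const string ApiBase = "https://api.openstreetmap.org/api/0.6";

        static async Task Main()
        {
            using var client = new HttpClient();
            // Identify your script politely; the API operators ask for a meaningful User-Agent.
            client.DefaultRequestHeaders.UserAgent.ParseAdd("osm-inspection-example/0.1");

            long wayId = 12345678; // hypothetical ID - use the one shown in the data browser

            // Current version of the way, as raw XML (the same XML the data browser links to).
            string current = await client.GetStringAsync($"{ApiBase}/way/{wayId}");
            Console.WriteLine(current);

            // Every stored version of the way - the data behind the history page.
            string history = await client.GetStringAsync($"{ApiBase}/way/{wayId}/history");
            Console.WriteLine(history);
        }
    }

The output is the same XML the website shows, so anything you can see in the data browser you can also collect and check in bulk from a script.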
OpenStreetMap: Gathering Data using GPS

Packt
23 Sep 2010
19 min read
  OpenStreetMap Be your own cartographer Collect data for the area you want to map with this OpenStreetMap book and eBook Create your own custom maps to print or use online following our proven tutorials Collaborate with other OpenStreetMap contributors to improve the map data Learn how OpenStreetMap works and why it's different to other sources of geographical information with this professional guide Read more about this book (For more resources on OpenStreetMap, see here.) OpenStreetMap is made possible by two technological advances: Relatively affordable, accurate GPS receivers, and broadband Internet access. Without either of these, the job of building an accurate map from scratch using crowdsourcing would be so difficult that it almost certainly wouldn't work. Much of OpenStreetMap's data is based on traces gathered by volunteer mappers, either while they're going about their daily lives, or on special mapping journeys. This is the best way to collect the source data for a freely redistributable map, as each contributor is able to give their permission for their data to be used in this way. The traces gathered by mappers are used to show where features are, but they're not usually turned directly into a map. Instead, they're used as a backdrop in an editing program, and the map data is drawn by hand on top of the traces. This means you don't have to worry about getting a perfect trace every time you go mapping, or about sticking exactly to paths or roads. Errors are canceled out over time by multiple traces of the same features. OpenStreetMap uses other sources of data than mappers' GPS traces, but they each have their own problems: Out-of-copyright maps are out-of-date, and may be less accurate than modern surveying methods. Aerial imagery needs processing before you can trace it, and it doesn't tell you details such as street names. Eventually, someone has to visit locations in person to verify what exists in a particular place, what it's called, and other details that you can't discern from an aerial photograph If you already own a GPS and are comfortable using it to record traces, you can skip the first section of this article and go straight to Techniques. If you want very detailed information about surveying using GPS, you can read the American Society of Civil Engineers book on the subject, part of which is available on Google Books at http://bit.ly/gpssurveying. Some of the details are out-of-date, but the general principles still hold. If you are already familiar with the general surveying techniques, and are comfortable producing information in GPX format, you can skip most of this article and head straight for the section Adding your traces to OpenStreetMap. What is GPS? GPS stands for Global Positioning System, and in most cases this refers to a system run by the US Department of Defense, properly called NAVSTAR. The generic term for such a system is a Global Navigation Satellite System (GNSS), of which NAVSTAR is currently the only fully operational system. Other equivalent systems are in development by the European Union (Galileo), Russian Federation (GLONASS), and the People's Republic of China (Compass). OpenStreetMap isn't tied to any one GNSS system, and will be able to make use of the others as they become available. The principles of operation of all these systems are essentially the same, so we'll describe how NAVSTAR works at present. NAVSTAR consists of three elements: the space segment, the control segment, and the user segment. 
The space segment is the constellation of satellites orbiting the Earth. The design of NAVSTAR is for 24 satellites, of which 21 are active and three are on standby. However, there are currently 31 satellites in use, as replacements have been launched without taking old satellites out of commission. Each satellite has a highly accurate atomic clock on board, and all clocks in all satellites are kept synchronized. Each satellite transmits a signal containing the time and its own position in the sky. The control segment is a number of ground stations, including a master control station in Colorado Springs. These stations monitor the signal from the satellites and transmit any necessary corrections back to them. The corrections are necessary because the satellites themselves can stray from their predicted paths. The user segment is your GPS receiver. This receives signals from multiple satellites, and uses the information they contain to calculate your position. Your receiver doesn't transmit any information, and the satellites don't know where you are. The receiver has its own clock, which needs to be synchronized with those in the space segment to perform its calculations. This isn't the case when you first turn it on, and is one of the reasons why it can take time to get a fix. Your GPS receiver calculates your position by receiving messages from a number of satellites, and comparing the time included in each message to its own clock. This allows it to calculate your approximate distance from each satellite, and from that, your position on the Earth. If it uses three satellites, it can calculate your position in two dimensions, giving you your latitude (lat) and longitude (long). With signals from four satellites, it can give you a 3D fix, adding altitude to lat and long. The more satellites your receiver can "see", the more accurate the calculated position will be. Some receivers are able to use signals from up to 12 satellites at once, assuming the view of the satellites isn't blocked by buildings, trees, or people. You're obviously very unlikely to get a GPS fix indoors. Many GPS receivers can calculate the amount of error in your position due to the configuration of satellites you're using. Called the Dilution of Precision (DOP), the number produced gives you an idea of how good a fix you have given the satellites you can get a signal from, and where they are in the sky. The higher the DOP, the less accurate your calculated position is. The precision of a GPS fix improves with the distance between the satellites you're using. If they're close together, such as mostly directly overhead, the DOP will be high. Use signals from satellites spread evenly across the sky, and your position will be more accurate. Which satellites your receiver uses isn't something you can control, but more modern GPS chipsets will automatically try to use the best configuration of satellites available, rather than just those with the strongest signals. DOP only takes into account errors caused by satellite geometry, not other sources of error, so a low DOP isn't a guarantee of absolute accuracy. The system includes the capability to introduce intentional errors into the signal, so that only limited accuracy positioning is available to non-military users. This capability, called Selective Availability (SA), was in use until 2000, when President Clinton ordered it to be disabled. Future NAVSTAR satellites will not have SA capabilities, so the disablement is effectively permanent. 
The error introduced by SA reduced the horizontal accuracy of a civilian receiver, typically to 10m, but the error could be as high as 100m. Had SA still been in place, it's unlikely that OpenStreetMap would have been as successful. NAVSTAR uses a coordinate system known as WGS84, which defines a spheroid representing the Earth, and a fixed line of longitude or datum from which other longitudes are measured. This datum is very close to, but not exactly the same as the Prime Meridian at Greenwich in South East London. The equator of the spheroid is used as the datum for latitude. Other coordinate systems exist, and you should note that no printed maps use WGS84, but instead use a slightly different system that makes maps of a given area easier to use. Examples of other coordinate systems include the OSGB36 system used by British national grid references. When you create a map from raw geographic data, the latitudes and longitudes are converted to the x and y coordinates of a flat plane using an algorithm called a projection. You've probably heard of the Mercator projection, but there are many others, each of which is suitable for different areas and purposes. What's a GPS trace? A GPS trace or tracklog is simply a record of position over time. It shows where you traveled while you were recording the trace. This information is gathered using a GPS receiver that calculates your position and stores it every so many seconds, depending on how you have configured your receiver. If you record a trace while you're walking along a path, what you get is a trace that shows you where that path is in the world. Plot these points on a graph, and you have the start of a map. Walk along any adjoining paths and plot these on the same graph, and you have something you can use to navigate. If many people generate overlapping traces, eventually you have a fully mapped area. This is the general principle of crowdsourcing geographic data. You can see the result of many combined traces in the following image. This is the junction of the M4 and M25 motorways, to the west of London. The motorways themselves and the slip roads joining them are clearly visible. Traces are used in OpenStreetMap to show where geographical features are, but usually only as a source for drawing over, not directly. They're also regarded as evidence that a mapper has actually visited the area in question, and not just copied the details from another copyrighted map. Most raw GPS traces aren't suitable to be made directly into maps, because they contain too many points for a given feature, will drift relative to a feature's true position, and you'll also take an occasional detour. Although consumer-grade GPS receivers are less accurate than those used by professional surveyors, if enough traces of the same road or path are gathered, the average of these traces will be very close to the feature's true position. OpenStreetMap allows mappers to make corrections to the data over time as more accurate information becomes available. In addition to your movements, most GPS receivers allow you to record specific named points, often called waypoints. These are useful for recording the location of point features, such as post boxes, bus stops, and other amenities. We'll cover ways of using waypoints later in the article. What equipment do I need? To collect traces suitable for use in OpenStreetMap, you'll need some kind of GPS receiver that's capable of recording a log of locations over time, known as a track log, trace, or breadcrumb trail. 
This could be a hand-held GPS receiver, a bicycle-mounted unit, a combination of a GPS receiver and a smartphone, or in some cases a vehicle satellite navigation system. There are also some dedicated GPS logger units, which don't provide any navigation function, but merely record a track log for later processing. You'll also need some way of getting the recorded traces off your receiver and onto your PC. This could be a USB or serial cable, a removable memory card, or possibly a Bluetooth connection. There are reviews of GPS units by mappers in the OpenStreetMap wiki. There are also GPS receivers designed specifically for surveying, which have very sensitive antennas and link directly into geographic information systems (GIS). These tend to be very expensive and less portable than consumer-grade receivers. However, they're capable of producing positioning information accurate to a few centimeters rather than meters. You also need a computer connected to the Internet. A broadband connection is best, as once you start submitting data to OpenStreetMap, you will probably end up downloading lots of map tiles. It is possible to gather traces and create mapping data while disconnected from the Internet, but you will need to upload your data and see the results at some point. OpenStreetMap data itself is usually represented in Extensible Markup Language (XML) format, and can be compressed into small files. The computer itself can be almost any kind, as long as it has a web browser, and can run one of the editors, which Windows, Mac OS X, and Linux all can. You'll probably need some other kit while mapping to record additional information about the features you're mapping. Along with recording the position of each feature you map, you'll need to note things such as street names, route numbers, types of shops, and any other information you think is relevant. While this information won't be included in the traces you upload on openstreetmap.org, you'll need it later on when you're editing the map. Remember that you can't look up any details you miss on another map without breaking copyright, so it's important to gather all the information you need to describe a feature yourself. A paper notebook and pencil is the most obvious way of recording the extra information. They are inexpensive and simple to use, and have no batteries to run out. However, it's difficult to use on a bike, and impossible if you're driving, so using this approach can slow down mapping. A voice recorder is more expensive, but easier to use while still moving. Record a waypoint on your GPS receiver, and then describe what that waypoint represents in a voice recording. If you have a digital voice recorder, you can download the notes onto your PC to make them easier to use, and JOSM—the Java desktop editing application—has a support for audio mapping built-in. A digital camera is useful for capturing street names and other details, such as the layout of junctions. Some recent cameras have their own built-in GPS, and others can support an external receiver, and will add the latitude, longitude, and possibly altitude, often known as geotags, to your pictures automatically. For those that don't, you can still use the timestamp on the photo to match it to a location in your GPS traces. We'll cover this later in the article. Some mappers have experimented with video recordings while mapping, but the results haven't been encouraging so far. 
Some of the problems with video mapping are: It's difficult to read street signs on zoomed-out video images, and zooming in on signs is impractical. If you're recording while driving or riding a bike, the camera can only point in one direction at once, while the details you want to record may be in a different direction. It's difficult to index recordings when using consumer video cameras, so you need to play the recording back in real time to extract the information, a slow process. Automatic processing of video recordings taken with multiple cameras would make the process easier, but this is currently beyond what volunteer mappers are able to afford. Smartphones can combine several of these functions, and some include their own GPS receiver. For those that don't, or where the internal GPS isn't very good, you can use an external Bluetooth GPS module. Several applications have been developed that make the process of gathering traces and other information on a smartphone easier. Look on the Smartphones page on the OpenStreetMap wiki at http://wiki.openstreetmap.org/wiki/Smartphones. Making your first trace Before you set off on a long surveying trip, you should familiarize yourself with the methods involved in gathering data for OpenStreetMap. This includes the basic operation of your GPS receiver, and the accompanying note-taking. Configuring your GPS receiver The first thing to make sure is that your GPS is using the WGS84 coordinate system. Many receivers also include a local coordinate system in their settings to make them easier to use with printed maps. So check in your settings which system you're getting your location in. OpenStreetMap only uses WGS84, so if you record your traces in the wrong system, you could end up placing features tens or even hundreds of meters away from their true location. Next, you should set the recording frequency as high as it will go. You need your GPS to record as much detail as possible, so setting it to record your location as often as possible will make your traces better. Some receivers can record a point once per second; if yours doesn't, it's not a problem, but use the highest setting (shortest interval) possible. Some receivers also have a "smart" mode that only records points where you've changed direction significantly, which is fine for navigation, but not for turning into a map. If your GPS has this, you'll need to disable it. One further setting on some GPSs is to only record a point every so many meters, irrespective of how much time has elapsed. Turning this on can be useful if you're on foot and taking it easy, but otherwise keep it turned off. Another setting to check, particularly if you're using a vehicle satellite navigation system, is "snap to streets" or a similar name. When your receiver has this setting on, your position will always be shown as being on a street or a path in its database, even if your true position is some way off. This causes two problems for OpenStreetMap: if you travel down a road that isn't in your receiver's database, its position won't be recorded, and the data you do collect is effectively derived from the database, which not only breaks copyright, but also reproduces any errors in that database. Next, you need to know how to start and stop recording. Some receivers can record constantly while they're turned on, but many will need you to start and stop the process. Smartphone-based recorder software will definitely require starting and stopping. 
If you're using a smartphone with an external Bluetooth GPS module, you may also need to pair the devices and configure the receiver in your software. Once you're happy with your settings, you can have a trial run. Make a journey you have to make anyway, or take a short trip to the shops and back (or some other reasonably close landmark if you don't live near shops). It's important that you're familiar with your test area, as you'll use your local knowledge to see how accurate your results are. Checking the quality of your traces When you return, get the trace you've recorded off your receiver, and take a look at it on your PC using an OpenStreetMap editor or by uploading the trace. Now, look at the quality of the trace. Some things to look out for are as follows: Are lines you'd expect to be straight actually straight, or do they have curves or deviations in them? A good trace reflects the shape of the area you surveyed, even if the positioning isn't 100% accurate. If you went a particular way twice during your trip, how well do the two parts of the trace correspond? Ideally, they should be parallel and within a few meters of each other. When you change direction, does the trace reflect that change straight away, or does your recorded path continue in the same direction and gradually turn to your new heading? If you've recorded any waypoints, how close are they to the trace? They should ideally be directly on top of the trace, but certainly no more than a few meters away. The previous image shows a low-quality GPS trace. If you look at the raw trace on the left, you can see a few straight lines and differences in traces of the same area. The right-hand side shows the trace with the actual map data for the area, showing how they differ. In this image, we see a high-quality GPS trace. This trace was taken by walking along each side of the road where possible. Note that the traces are straight and parallel, reflecting the road layout. The quality of the traces makes correctly turning them into data much easier. If you notice these problems in your test trace, you may need to alter where you keep your GPS while you're mapping. Sometimes, inaccuracy is a result of the make-up of the area you're trying to map, and nothing will change that, short of using a more sensitive GPS. For the situations where that's not the case, the following are some tips on improving accuracy. Making your traces more accurate You can dramatically improve the accuracy of your traces by putting your GPS where it can get a good signal. Remember that it needs to have a good signal all the time, so even if you seem to get a good signal while you're looking at your receiver, it could drop in strength when you put it away. If you're walking, the best position is in the top pocket of a rucksack, or attached to the shoulder strap. Having your GPS in a pocket on your lower body will seriously reduce the accuracy of your traces, as your body will block at least half of the sky. If you're cycling, a handlebar mount for your GPS will give it a good view of the sky, while still making it easy to add waypoints. A rucksack is another option. In a vehicle, it's more difficult to place your GPS where it will be able to see most of the sky. External roof-mounted GPS antennas are available, but they're not cheap and involve drilling a hole in the roof of your car. The best location is as far forward on your dashboard as possible, but be aware some modern car windscreens contain metal, and may block GPS signals. 
In this case, you may be able to use the rear parcel shelf, or a side window providing you can secure your GPS. Don't start moving until you have a good fix. Although most GPS receivers can get a fix while you're moving, it will take longer and may be less accurate. More recent receivers have a "warm start" feature where they can get a fix much faster by caching positioning data from satellites. You also need to avoid bias in your traces. This can occur when you tend to use one side of a road more than the other, either because of the route you normally take, or because there is only a pavement on one side of the road. The result of this is that the traces you collect will be off-center of the road's true position by a few meters. This won't matter at first, and will be less of a problem in less densely-featured areas, but in high-density residential areas, this could end up distorting the map slightly.
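One way to put numbers on the quality checks described above is to read a downloaded trace and measure the spacing between consecutive points: sudden jumps of tens of meters between points recorded a second apart usually indicate a weak signal. The sketch below is only an illustration. It assumes a GPX 1.1 file (the format most receivers and converters produce) and a hypothetical filename, and it uses the haversine formula, which is accurate enough for the short distances involved.

    using System;
    using System.Linq;
    using System.Xml.Linq;

    class TraceChecker
    {
        static void Main()
        {
            XNamespace ns = "http://www.topografix.com/GPX/1/1"; // GPX 1.0 files use a different namespace
            var doc = XDocument.Load("morning-walk.gpx");        // hypothetical trace file

            var points = doc.Descendants(ns + "trkpt")
                            .Select(p => (Lat: (double)p.Attribute("lat"),
                                          Lon: (double)p.Attribute("lon")))
                            .ToList();

            for (int i = 1; i < points.Count; i++)
            {
                double d = HaversineMeters(points[i - 1].Lat, points[i - 1].Lon,
                                           points[i].Lat, points[i].Lon);
                if (d > 30) // an arbitrary threshold for suspicious jumps between consecutive points
                    Console.WriteLine($"Jump of {d:F0} m between point {i - 1} and point {i}");
            }
        }

        // Great-circle distance between two WGS84 coordinates, in meters.
        static double HaversineMeters(double lat1, double lon1, double lat2, double lon2)
        {
            const double R = 6371000; // mean Earth radius in meters
            double dLat = ToRad(lat2 - lat1);
            double dLon = ToRad(lon2 - lon1);
            double a = Math.Sin(dLat / 2) * Math.Sin(dLat / 2) +
                       Math.Cos(ToRad(lat1)) * Math.Cos(ToRad(lat2)) *
                       Math.Sin(dLon / 2) * Math.Sin(dLon / 2);
            return 2 * R * Math.Asin(Math.Sqrt(a));
        }

        static double ToRad(double degrees) => degrees * Math.PI / 180;
    }

The same distance function is handy for the waypoint check: compute the distance from each waypoint to the nearest track point and flag anything more than a few meters out.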
Getting Started with OpenStreetMap

Packt
21 Sep 2010
6 min read
(For more resources on OpenStreetMap, see here.) Not all the tools and features on the site are obvious from the front page, so we'll go on a tour of the site, and cover some other tools hosted by the project. By the end of the article, you should have a good idea about where to find answers to the questions you have about OpenStreetMap. A quick tour of the front page The project's main "shop front" is www.openstreetmap.org. It's the first impression most people get of what OpenStreetMap does, and is designed to be easy to use, rather than show as much information as possible. In the following diagram, you can see the layout of the front page. We'll be referring to many of the features on the front page, so let's have a look at what's there: Most of the page is taken up by the map viewer, which is nicknamed the slippy map by mappers. This has its own controls, which we'll cover later in the article. Along the top of the map are the navigation tabs, showing most of the data management tools on openstreetmap.org. To the right of these are the user account links. Down the left-hand side of the page is the sidebar, containing links to the wiki, news blog, merchandise page, and map key. The wiki is covered later in this article. The news blog is www.opengeodata.org, and it's an aggregation of many OSM-related blogs. The Shop page is a page on the wiki listing various pieces of OpenStreetMap-related merchandise from several sources. Most merchandise generates income for the OpenStreetMap Foundation or a local group. Clicking on the map key will show the key on the left-hand side of the map. As you'd expect, the key shows what the symbols and shading on the map mean. The key is dynamic, and will change with zoom level and which base layer you're looking at. Not all base layers are supported by the dynamic map key at present. Below this is the search box. The site search uses two separate engines: Nominatim: This is an OpenStreetMap search engine or geocoder. This uses the OpenStreetMap database to find features by name, including settlements, streets, and points of interest. Nominatim is usually fast and accurate, but can only find places that have been mapped in OpenStreetMap. Geonames: This is an external location service that has greater coverage than OpenStreetMap at present, but can sometimes be inaccurate. Geonames contains settlement names and postcodes, but few other features. Clicking on a result from either search engine will center the map on that result and mark it with an arrow. Creating your account To register, go to http://www.openstreetmap.org/, and choose sign up in the top right-hand corner. This will take you to the following registration form: At present, you only really need an account on openstreetmap.org if you're planning to contribute mapping data to the project. Outside the main site and API, only the forums and issue tracker use the same username and password as openstreetmap.org. You don't need to register to download data, export maps, or subscribe to the mailing lists. Conversely, even if you're not planning to do any mapping, there are still good reasons to register at the site, such as the ability to contact and be contacted by other mappers. OpenStreetMap doesn't allow truly anonymous editing of data. The OSM community decided to disallow this in 2007, so that any contributors could be contacted if necessary. If you're worried about privacy, you can register using a pseudonym, and this will be the only identifying information used for your account. 
Registering with openstreetmap.org requires a valid e-mail address, but this is never disclosed to any other user under any circumstance, unless you choose to do so. It is possible to change your display name after registration, and this changes it for all current OpenStreetMap data. However, it won't change in any archived data, such as old planet files. Once you've completed the registration form, you'll receive an e-mail asking you to confirm the registration. Your account won't be active until you click on the link in this e-mail. Once you've activated your account, you can change your settings, as follows: You can add a short description of yourself if you like, and add a photo of yourself or some other avatar. You can also set your home location by clicking on it in the small slippy map on your settings page. This allows other mappers nearby to see who else is contributing in their area, and allows you to see them. You don't have to use your house or office as your home location; any place that gives a good idea of where you'll be mapping is enough. Adding a location may lead to you being invited to OpenStreetMap-related events in your area, such as mapping parties or social events. If you do add a location, you get a home link in your user navigation on the home page that will take the map view back to that place. You'll also see a map on your user page showing other nearby mappers limited to the nearest 10 users within 50km. If you know other mappers personally, you can indicate this by adding them as your friend on openstreetmap.org. This is just a convenience to you, and your friends aren't publicly shown on your user page, although anyone you add as a friend will receive an e-mail telling them you've done it. Once you've completed the account settings, you can view your user page (shown in the following screenshot). You can do this at any time by clicking on your display name in the top right-hand corner. This shows the information about yourself that you've just entered, links to your diary and to add a new diary entry, a list of your edits to OpenStreetMap, your GPS traces, and to your settings. These will be useful once you've done some mapping, and when you need to refer to others' activities on the site. Every user on openstreetmap.org has a diary that they can use to keep the community informed of what they've been up to. Each diary entry can have a location attached, so you can see where people have been mapping. There's an RSS feed for each diary, and a combined feed for all diary entries. You can find any mapper's diary using the link on their user page, and you can comment on other mappers' diary entries, and they'll get an e-mail notification when you do.
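The Nominatim geocoder behind the site's search box can also be queried directly over HTTP, which is handy if you want to look up places from your own scripts. The following is only a sketch: it assumes the public Nominatim endpoint and its standard search parameters, and the public instance has a usage policy (identify your application and keep request rates low), so check that policy before using it for anything beyond experiments.

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    class NominatimExample
    {
        static async Task Main()
        {
            using var client = new HttpClient();
            // The usage policy asks for a User-Agent that identifies your application.
            client.DefaultRequestHeaders.UserAgent.ParseAdd("osm-book-example/0.1");

            // Search for Compton and ask for machine-readable JSON instead of the HTML page.
            string url = "https://nominatim.openstreetmap.org/search" +
                         "?q=Compton,+Surrey&format=json&limit=3";

            string json = await client.GetStringAsync(url);
            Console.WriteLine(json); // each result includes a display name, lat/lon, and OSM object ID
        }
    }

Because the results carry the OSM object type and ID, you can feed them straight back into the data browser or the API calls shown earlier to inspect the underlying features.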
Linking Your Customers to Your SugarCRM

Packt
21 Sep 2010
12 min read
(For more resources on SugarCRM, see here.) Surely, the most important goal of any CRM system is to make your customers feel positive about your company and to make them feel that exciting things are happening at your company, such as the following: That the employees they are in contact with are caring and well-informed That new and better information systems are coming into place That your company is responsive to product and service issues, and cares about its customers Limiting CRM system access to only the employees of a business will certainly affect the first of the aforementioned items positively, but not necessarily the other items. To really improve a customer's perception of your organization, one of the biggest improvements you can make is to allow customers to interact almost directly with your CRM system. Some of the activities that make this possible are as follows: Capturing customer leads and requests for information from the public website directly within the CRM system. Efficiently tracking customer service requests and related product/service flaws to help improve your offerings and customer satisfaction. Developing a customer self-service portal in conjunction with the CRM system to allow clients to file their own service cases, check on the latest status of a case, and to update their own customer profile. Most of us in our own lives can forgive or understand if a family member, friend, or supplier lets us down a bit, or makes a mistake—as long as they communicate with us honestly and effectively. In addition, with early detection of any errors, corrective action can always be put in place more quickly. Integrating your CRM system more directly with your customer is no more complicated than this—promoting more effective, more accurate, and timely communications with your customers. The net effect of such actions is that your customers feel informed, valued, and empowered. Capturing leads from your website Capturing leads from your company's website directly into your CRM is one of the greatest early initiatives you can implement in terms of streamlining business processes to save time and effort. This section will guide you through the manner in which this can be accomplished with SugarCRM. In the past, setting up a process similar to the one just described would have required the expertise and assistance of a programmer and your webmaster. Coordinating everyone's efforts to accomplish the goal would sometimes become a task in and of itself. Days may have elapsed before your lead capture form finally made it up to your website. Fortunately, those days are behind us. SugarCRM includes a tool that allows you to quickly and easily create a form that you can use to capture leads from your website. Through this tool, you will be able to select the fields corresponding to the data you wish to capture and also create a ready-to-use web form. Let us set up a web lead capture form through SugarCRM's tool. The lead capture tool is specifically designed to import data into the Leads module only. Should you choose not to use the Leads module, or you wish to use a similar technique to capture data within a different module, you should use SugarCRM's SOAP API to accomplish the task. To begin the setup process, hover over the Marketing tab and select Campaigns. 
On the shortcuts menu on the left-hand side, click on Create Lead Form, as highlighted in the following image: After clicking on it, you will see a screen that permits you to select the fields you wish to capture through your form, as illustrated in the following screenshot: The field selection process is quite simple. On the leftmost column of the three that are presented, you will see a list of all the fields corresponding to the Leads module (including custom fields). To select a field for your form, simply drag-and-drop it from the field listing on the left onto one of the two rightmost columns. It is best to visualize the layout of the form that will be produced as one similar to the edit or detail view layouts. Fields can appear next to each other, horizontally or vertically, but only within one of two columns. Most organizations prefer the vertical approach, which is the technique we will apply. However, feel free to experiment. Proceed to select the fields to match the preceding image, plus any other fields you may wish to include. Note that required fields are marked with an asterisk, as they are within the Edit view screen. You must make sure to include all your required fields to ensure that the process will work as expected. In addition, you will notice that we have selected the Lead Source field. Doing so will allow website visitors to make the appropriate selection corresponding to what drove them to your site. Click on Next once you are satisfied with your field selection. Now you need to set some final parameters, as illustrated in the following image: You will undoubtedly want to modify the Form Header. This value corresponds to the title of the page that website visitors will see in their browser, so you will want to tailor it to reflect something a bit friendlier than the generic text. The form we are building is no different than any other web form you may have encountered in your day-to-day web browsing. As such, it too will include a button for visitors to click and send the data they typed in. If you prefer the label of the button to read as something other than the default label of Submit, change the Submit Button Label accordingly. The Redirect URL and Related Campaign fields are also quite important. The former is used to specify a URL that a visitor will be sent to after clicking on the Submit button on your lead capture form, while the latter is used to associate a particular marketing campaign to the form. Establishing this relationship is critical as it will help you properly measure the effectiveness of your marketing efforts. Lastly, the Assigned To option allows you to define a user to whom the Leads will be assigned upon being entered into SugarCRM. You may want to consider creating a specific user, such as WebCapture, and assigning the Leads to that user. Doing so will permit you to quickly identify records that entered your system through the web lead capture tool versus other means. Click on Generate Form after you have applied your edits and you should see something similar to the following: The default form should now be presented within SugarCRM's HTML editor. This is a handy capability as it allows you to manipulate the look and feel of the form to make it conform to the already existing look and feel of your website. However, you may wish to ignore that, as additional options allow you to more easily integrate it into your website. To access those features and save the form, click on the Save Web To Lead Form button. 
SugarCRM provides the convenience of a fully formatted, ready-to-use web form which can be downloaded by clicking on the Web To Lead Form link. However, if you prefer, you may copy the code displayed in the box and then embed it into one of your already existing pages. The second approach would save you the hassle of having to modify the cosmetic aspects of the default page to match your site. To start receiving data into your SugarCRM system, simply place the form on your web server, fill out the fields and submit the form. Make sure that the server on which it is placed is able to access your SugarCRM system or it will not function. You can test it by opening the form in your web browser and submitting data, as shown in the following image: Assuming everything is working as expected, the records will automatically appear within the leads module of your SugarCRM system without any intervention on your part or that of other users. In addition, e-mail notifications of new records will automatically be sent to the defined assigned user to inform them of the new entry so they may act upon it. Through the use of add-on modules, like SierraCRM's Process Manager, further actions, like the scheduling of follow up calls, can also be automated. Remember, all of this can happen automatically and herein we begin to see the real benefits of a CRM system. There are few things quite as satisfying as driving along in the car, and receiving an e-mail on your BlackBerry telling you that a new lead has been received. Especially when you know that it all happened automatically! From a process perspective, the concept of having every new lead automatically entered into the CRM system makes it quick and easy to convert that lead into a contact, enter details of new sales opportunities, or include them in e-mail marketing campaigns—all without any data transcription errors, or lost leads, due to human errors. One note of caution: most lead capture sites capture as much as 50% bad data. Some visitors to your site will enter anything they fancy in the form; potentially polluting your database. This highlights another reason why it is beneficial to enter them by utilizing a username such as WebCapture. Doing so would allow you to easily filter leads to only show those created by WebCapture and in turn allow you to cleanse them, either by deleting them or performing other data integrity checks. Customer self-service portals After automating the lead capture process, the most common step that follows in linking your customers into your CRM system is the self-service portal. Just as it sounds, this is a software system that enables your customers to exchange information with your organization in a completely autonomous manner. In this initial implementation, we will show you how to implement a system that allows customers to submit and manage service cases directly within your CRM system. Most of us have had the experience of needing to contact a call center to address a customer service issue or other matters. Usually, that process involves staying on hold for some time. If you are lucky, the time that you stay on hold is not long, but at the same time, spending 30 to 45 minutes on hold or being transferred around is not unheard of. To make matters worse, you usually need to make these calls during normal business hours, meaning you are not able to tend to your normal work while you are burning time on hold. 
The fundamental capability that the self-service portal provides is empowering customers by allowing them to contact you at a time that is most convenient to them. Customers are no longer bound to specific business hours, nor must they wait in a call queue or navigate a maze of phone options. If they need your company's help to resolve an issue, they simply go to your website and submit their issue. Likewise, customers do not need to contact you directly to check in on their previously submitted cases. They simply visit your website again and they will be able to review their cases. This functionality works hand-in-hand with the Cases module that is built into SugarCRM. Typically, users would leverage this module to track service calls that they receive from customers. Through this functionality, all members of the organization are kept up-to-date on any issues that a customer may be experiencing at any given time. The Bug Tracker module complements the Cases module quite well by providing a central repository where all known product flaws can be tracked. In turn, all cases resulting from any of these flaws can be related to a given bug, allowing you to measure the impact it is having on your customers. Together, they can be used as very effective tools for not only providing customer service, but also prioritizing product development needs and improving customer satisfaction. However, that process can be inefficient, as it relies on a user to enter the data to produce a case in the first place. Empowering the customer in such a way that allows them to directly interact with the Cases module not only makes it easier for you to get feedback and become aware of problems, but it also gives customers the feeling that you care to hear what they have to say about their problems. That is the goal that the self-service portal hopes to accomplish. Self-service portal configuration Before we get too deep into the specifics of configuring and using the self-service portal, you must first understand some important boundaries. First, although this is a built-in feature of the Enterprise Edition of SugarCRM, it is not a feature of Community Edition. To obtain this functionality, we must use the combination of a SugarCRM add-on available on SugarExchange.com, plus an open source CMS (Content Management System) named Joomla! If you are already using another CMS package or cannot use Joomla! for other reasons, you will not be able to utilize the functionality described in this exercise. The second and last important note is that, at the time of this writing, the add-on did not support versions of SugarCRM Community Edition higher than 5.2. Now that we have a clear understanding of some important limitations, let us begin the process of deploying this feature. Installing Joomla! Assuming you have already installed SugarCRM Community Edition on the target server, you have already established the perfect environment for installing the Joomla! CMS package. Like SugarCRM, it too leverages the LAMP or WAMP system software platforms. Just like SugarCRM, it is also an open source application. You can download Joomla! from the project's site, located at http://www.joomla.org. Our exercise will use version 1.5 of Joomla! (Full Package). It is assumed that you have already successfully downloaded and installed it onto your server. If you require help with the process, visit the Joomla! website to review its documentation and obtain further assistance. Assuming Joomla! 
is operational, proceed to access the administrator page. It should resemble the following: Let us leave it at the admin page for now.
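Before moving on, it is worth noting that the lead capture form generated earlier is an ordinary HTML form, so anything that can send an HTTP POST can feed leads into SugarCRM. The sketch below is purely illustrative: the target URL and field names are placeholders, and the real ones are whatever appears in the form SugarCRM generated for you, so copy them from that form rather than from this example.

    using System;
    using System.Collections.Generic;
    using System.Net.Http;
    using System.Threading.Tasks;

    class LeadPoster
    {
        static async Task Main()
        {
            using var client = new HttpClient();

            // Placeholder field names: take the real input names from the
            // web-to-lead form that SugarCRM generated for your installation.
            var fields = new Dictionary<string, string>
            {
                ["first_name"] = "Jane",
                ["last_name"] = "Smith",
                ["email1"] = "jane.smith@example.com",
                ["lead_source"] = "Web Site"
            };

            var response = await client.PostAsync(
                "http://crm.example.com/sugarcrm/lead-capture-form-action", // placeholder action URL
                new FormUrlEncodedContent(fields));

            Console.WriteLine($"SugarCRM responded with {(int)response.StatusCode}");
        }
    }

This is essentially what a visitor's browser does when they click the Submit button, which is why the generated form can be embedded in any page or platform that can reach your SugarCRM server.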
Installing and Setting up JavaFX for NetBeans and Eclipse IDE

Packt
17 Sep 2010
7 min read
(For more resources on JavaFX, see here.) Introduction Today, in the age of Web 2.0, AJAX, and the iPhone, users have come to expect their applications to provide a dynamic and engaging user interface that delivers rich graphical content, audio, and video, all wrapped in GUI controls with animated cinematic-like interactions. They want their applications to be connected to the web of information and social networks available on the Internet. Developers, on the other hand, have become accustomed to tools such as AJAX/HTML5 toolkits, Flex/Flash, Google Web Toolkit, Eclipse/NetBeans RCP, and others that allow them to build and deploy rich and web-connected client applications quickly. They expect their development languages to be expressive (either through syntax or specialized APIs) with features that liberate them from the tyranny of verbosity and empower them with the ability to express their intents declaratively. The Java proposition During the early days of the Web, the Java platform was the first to introduce rich content and interactivity in the browser using the applet technology (predating JavaScript and even Flash). Not too long after applets appeared, Swing was introduced as the unifying framework to create feature-rich applications for the desktop and the browser. Over the years, Swing matured into an amazingly robust GUI technology used to create rich desktop applications. However powerful Swing is, its massive API stack lacks the lightweight higher abstractions that application and content developers have been using in other development environments. Furthermore, the applet's plugin technology was (as admitted by Sun) neglected, and failed to compete in browser-hosted rich applications against similar technologies such as Flash. Enter JavaFX JavaFX is Sun's (now part of Oracle) answer to the next generation of rich, web-enabled, deeply interactive applications. JavaFX is a complete platform that includes a new language, development tools, build tools, deployment tools, and new runtimes to target desktop, browser, mobile, and entertainment devices such as televisions. While JavaFX is itself built on the Java platform, that is where the commonalities end. The new JavaFX scripting language is designed as a lightweight, expressive, and dynamic language to create web-connected, engaging, visually appealing, and content-rich applications. The JavaFX platform will appeal to both technical designers and developers alike. Designers will find JavaFX Script to be a simple, yet expressive language, perfectly suited for the integration of graphical assets when creating visually-rich client applications. Application developers, on the other hand, will find its lightweight, dynamic type inference system, and script-like feel a productivity booster, allowing them to express GUI layout, object relationship, and powerful two-way data bindings all using a declarative and easy syntax. Since JavaFX runs on the Java Platform, developers are able to reuse existing Java libraries directly from within JavaFX, tapping into the vast community of existing Java developers, vendors, and libraries. This is an introductory article to JavaFX. Use its recipes to get started with the platform. You will find instructions on how to install the SDK and directions on how to set up your IDE. Installing the JavaFX SDK The JavaFX software development kit (SDK) is a set of core tools needed to compile, run, and deploy JavaFX applications. 
If you feel at home at the command line, then you can start writing code with your favorite text editor and interact with the SDK tools directly. However, if you want to see code-completion hints after each dot you type, then you can always use an IDE such as NetBeans or Eclipse to get you started with JavaFX (see other recipes on IDEs). This section outlines the necessary steps to set up the JavaFX SDK successfully on your computer. These instructions apply to JavaFX SDK version 1.2.x; future versions may vary slightly. Getting ready Before you can start building JavaFX applications, you must ensure that your development environment meets the minimum requirements. As of this writing, the following are the minimum requirements to run the current released version of JavaFX runtime 1.2. Minimum system requirements How to do it... The first step for installing the SDK on your machine is to download it from http://javafx.com/downloads/. Select the appropriate SDK version as shown in the next screenshot. Once you have downloaded the SDK for your corresponding system, follow these instructions for installation on Windows, Mac, Ubuntu, or OpenSolaris. Installation on Windows Find and double-click on the newly downloaded installation package (.exe file) to start. Follow the directions from the installer wizard to continue with your installation. Make sure to select the location for your installation. The installer will run a series of validations on your system before installation starts. If the installer finds no previously installed SDK (or the incorrect version), it will download an SDK that meets the minimum requirements (which lengthens your installation). Installation on Mac OS Prior to installation, ensure that your Mac OS meets the minimum requirements. Find and double-click on the newly downloaded installation package (.dmg file) to start. Follow the directions from the installer wizard to continue your installation. The Mac OS installer will place the installed files at the following location: /Library/Frameworks/JavaFX.framework/Versions/1.2. Installation on Ubuntu Linux and OpenSolaris Prior to installation, ensure that your Ubuntu or OpenSolaris environment meets the minimum requirements. Locate the newly downloaded installation package to start installation. For Linux, the file will end with *-linux-i586.sh. For OpenSolaris, the installation file will end with *-solaris-i586.sh. Move the file to the directory where you want to install the content of the SDK. Make the file executable (chmod 755) and run it. This will extract the content of the SDK in the current directory. The installation will create a new directory, javafx-sdk1.2, which is your JavaFX home location ($JAVAFX_HOME). Now add the JavaFX binaries to your system's $PATH variable (export PATH=$PATH:$JAVAFX_HOME/bin). When your installation steps are completed, open a command prompt and validate your installation by checking the version of the SDK.

    $> javafx -version
    $> javafx 1.2.3_b36

You should get the current version number for your installed JavaFX SDK displayed. How it works... Version 1.2.x of the SDK comes with several tools and other resources to help developers get started with JavaFX development right away. The major (and more interesting) directories in the SDK include: Setting up JavaFX for the NetBeans IDE The previous recipe shows you how to get started with JavaFX using the SDK directly. 
However if you are more of a syntax-highlight, code-completion, click-to-build person, you will be delighted to know that the NetBeans IDE fully supports JavaFX development. JavaFX has first-class support within NetBeans, with functionalities similar to those found in Java development including: Syntax highlighting Code completion Error detection Code block formatting and folding In-editor API documentation Visual preview panel Debugging Application profiling Continuous background build And more... This recipe shows how to set up the NetBeans IDE for JavaFX development. You will learn how to configure NetBeans to create, build, and deploy your JavaFX projects. Getting ready Before you can start building JavaFX applications in the NetBeans IDE, you must ensure that your development environment meets the minimum requirements for JavaFX and NetBeans (see previous recipe Installing the JavaFX SDK for minimum requirements). Version 1.2 of the JavaFX SDK requires NetBeans version 6.5.1 (or higher) to work properly. How to do it... As a new NetBeans user (or first-time installer), you can download NetBeans and JavaFX bundled and ready to use. The bundle contains the NetBeans IDE and all other required JavaFX SDK dependencies to start development immediately. No additional downloads are required with this option. To get started with the bundled NetBeans, go to http://javafx.com/downloads/ and download the NetBeans + JavaFX bundle as shown in the next screenshot (versions will vary slightly as newer software become available).
Building the Content Based Routing Solution on Microsoft Platform

Packt
16 Sep 2010
7 min read
The flow of the solution looks like the following: An order comes from a customer to a single endpoint at McKeever Technologies. This single endpoint then routes the order based on the content of the order (that is, the value of the Product ID element). The router sends requests to WCF Workflow Services, which can provide us durability and persistence when talking to the backend order management systems. If an order system is down, then the workflow gets suspended and will be capable of resuming once the system comes back online. Setup First, create a new database named Chapter8Db in your local SQL Server 2008 instance. Then locate the database script named Chapter8Db.sql in the folder <Installation Directory>Chapter8Begin and install the tables into your new database. When completed, your configuration should look like the following screenshot: Next, open Chapter8.sln in the <Installation Directory>Chapter8Begin folder. In this base solution you will find two WCF services that represent the interfaces in front of the two order management systems at McKeever Technologies. Build the services and then add both of them as applications in IIS. Make sure you select the .NET 4.0 application pool. If you choose, you can test these services using the WCF Test Client application that comes with the .NET 4.0 framework. If your service is configured correctly, an invocation of the service should result in a new record in the corresponding SQL Server database table. Building the workflow Now that our base infrastructure is in place, we can construct the workflows that will execute these order system services. Launch Visual Studio .NET 2010 and open Chapter8.sln in the <Installation Directory>Chapter8Begin folder. You should see two WCF services. We now must add a new workflow project to the solution. Recall that this workflow will sit in front of our order service and give us a stronger quality of service, thanks to the persistence capability of AppFabric. In Visual Studio .NET 2010, go to File and select New Project. Select the WCF Workflow Service project type under the Workflow category and add the project named Chapter8.SystemA.WorkflowService to our existing solution. This project is now part of the solution and has a default workflow named Service1.xamlx. Rename the Service1.xamlx file to SystemAOrderService.xamlx from within the Solution Explorer. Also click the whitespace within the workflow to change both the ConfigurationName and Name properties. We want all our service-fronting workflows to have the same external-facing contract interface so that we can effectively abstract the underlying service contracts or implementation nuances. Hence, we add a new class file named OrderDataContract.cs to this workflow project. This class will hold the data contracts defining the input and output for all workflows that sit in front of order systems. Make sure the project itself has a reference to System.Runtime.Serialization, and then add a using statement for System.Runtime.Serialization to the top of the OrderDataContract.cs class. 
Add the following code to the class: namespace Chapter8.WorkflowService{ [DataContract( Namespace = "http://Chapter8/OrderManagement/DataContract")] public class NewOrderRequest { [DataMember] public string OrderId { get; set; } [DataMember] public string ProductId { get; set; } [DataMember] public string CustomerId { get; set; } [DataMember] public int Quantity { get; set; } [DataMember] public DateTime DateOrdered { get; set; } [DataMember] public string ContractId { get; set; } [DataMember] public string Status { get; set; } } [DataContract( Namespace = "http://Chapter8/OrderManagement/DataContract")] public class OrderAckResponse { [DataMember] public string OrderId { get; set; } }} Open the SystemAOrderService.xamlx workflow, click on the top ReceiveRequest shape, and set the following property values. Note that we will use the same values for all workflows so that the external-facing contract of each workflow appears the same. Property Value DisplayName ReceiveOrderRequest OperationName SubmitOrder ServiceContractName {http://Chapter8/OrderManagement} ServiceContract Action http://Chapter8/OrderManagement/SubmitOrder CanCreateInstance True Click the Variables tab at the bottom of the workflow designer to show the default variables added to the workflow. Delete the data variable. Create a new variable named OrderReq. For the variable type, choose Browse for Types and choose the NewOrderRequest type we defined earlier in the OrderDataContract.cs class. Add another variable named OrderResp and choose the previously defined OrderAckResponse .NET type. The OrderReq variable gets instantiated by the initial request, but we need to explicitly set the OrderResp variable. In the Default column within the Variables window, set the value to New OrderAckResponse(). Set a proper variable for the initial receive shape by clicking on the ReceiveOrderRequest shape and click on the View Message link. Choose OrderReq as the Message data and set the type as NewOrderRequest. Now we do the same for the response shape. Select the SendResponse shape and click on the View Message link. Choose the OrderResp variable as the Message data and OrderAckResponse as the Message type. Keep the SendResponse shape selected and set its PersistBeforeSend property to On. This forces a persistence point into our workflow and ensures that any errors that occur later in the workflow will lead to a suspended/resumable instance. We can test our workflow service prior to completing it. We want to populate our service response object, so go to the Workflow toolbox, view the Primitives tab, and drag an Assign shape in between the existing receive and send shapes. In the Assign shape, set the To value to OrderResp.OrderID and the right side of the equation to System.GUID.NewGUID().ToString(). This sets the single attribute of our response node to a unique tracking value. Build the workflow and if no errors exist, right-click the SystemAOrderSystem.xamlx workflow in the Solution Explorer and choose View in Browser. Open the WCF Test Client application and point it to our Workflow Service endpoint. Double-click the Submit Order operation, select the datatype in the Value column, and enter test input data. Click on the Invoke button and observe the response object coming back with a GUID value returned in the OrderId attribute. Now we're ready to complete our workflow by actually calling our target WCF service that adds a record to the database table. Return to Visual Studio. 
Now we're ready to complete our workflow by actually calling our target WCF service that adds a record to the database table. Return to Visual Studio .NET, right-click the Chapter8.SystemA.WorkflowService project, and choose Add Service Reference. Point to the service located at http://localhost/Chapter8.OrderManagement.SystemA/OrderIntakeService.svc and type SystemASvcRef as the namespace. If the reference is successfully added and the project is rebuilt, a new custom workflow activity should be added to the workflow toolbox. This activity encapsulates everything needed to invoke our system service.

Add variables to the workflow that represent the input and output of our system service. Create a variable named ServiceRequest and browse for the type Order, which can be found under the service reference. Set the default value of this variable to New Order(). Create another variable named ServiceResponse and pick the same Order type, but do not set a default value.

Drag the custom AddOrder activity from the workflow toolbox and place it after the SendResponse shape. Because it sits after the workflow service response is sent, any errors that occur here will not impact the caller. Click the AddOrder shape and set its NewOrder property to the ServiceRequest variable and its AddOrderResult property to ServiceResponse.

Now we have to populate the service request object. Drag a Sequence workflow activity from the Control Flow tab and drop it immediately before the AddOrder shape. Add six Assign shapes to the Sequence and set each activity's left and right fields as follows:

Left Side                    Right Side
ServiceRequest.ContractId    OrderReq.ContractId
ServiceRequest.CustomerId    OrderReq.CustomerId
ServiceRequest.DateOrdered   OrderReq.DateOrdered
ServiceRequest.OrderNumber   OrderResp.OrderId
ServiceRequest.ProductId     OrderReq.ProductId
ServiceRequest.Quantity      OrderReq.Quantity

Note that the OrderNumber value of the request is set using the OrderResp object, as that is the one to which we added the GUID value. Our final workflow should look like the following:
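Conceptually, the six Assign shapes perform the same field-by-field copy that the following C# sketch expresses. It is shown only to make the mapping explicit; the Order property names are taken from the table above and are assumed to match the proxy type generated by the SystemASvcRef service reference.

using Chapter8.WorkflowService;                        // NewOrderRequest, OrderAckResponse
using Chapter8.SystemA.WorkflowService.SystemASvcRef;  // generated Order type (assumed namespace)

public static class SystemAOrderMapper
{
    // Mirrors the Assign activities that populate the System A service request.
    public static Order ToSystemARequest(NewOrderRequest orderReq, OrderAckResponse orderResp)
    {
        return new Order
        {
            ContractId  = orderReq.ContractId,
            CustomerId  = orderReq.CustomerId,
            DateOrdered = orderReq.DateOrdered,
            OrderNumber = orderResp.OrderId,   // the GUID tracking value generated in the workflow
            ProductId   = orderReq.ProductId,
            Quantity    = orderReq.Quantity
        };
    }
}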
Debatching Bulk Data on Microsoft Platform

Packt
16 Sep 2010
14 min read
Why is it better to shovel one ton of data using two thousand one-pound shovels instead of one big load from a huge power shovel? After all, large commercial databases and the attendant bulk loader or SQL Loader programs are designed to do just that: insert huge loads of data in a single shot. The bulk load approach works under certain tightly constrained circumstances. They are as follows:

- The "bulk" data comes to you already matching the table structure of the destination system. Of course, this may mean that it was debatched before it gets to your system.
- The destination system can accept some, potentially significant, error rate when individual rows fail to load.
- There are no updates or deletes, just inserts.
- Your destination system can handle bulk loads. Certain systems (for example, some legacy medical systems or other proprietary systems) cannot handle bulk operations.

As the vast majority of data transfer situations will not meet these criteria, we must consider various options. First, one must consider on which side of the database event horizon one should perform these tasks. One could, for example, simply dump an entire large file into a staging table on SQL Server, and then debatch using SQL to move the data to the "permanent" tables.

Use case

Big Box Stores owns and operates retail chains that include huge Big Box warehouse stores, large retail operations in groceries and general department stores, and small convenience stores that sell gasoline, beverages, and fast food. The company has added brands and stores over the past few years through a series of mergers. Each brand has its own unique point of sale system. The stores operate in the United States, Canada, Mexico, and Western Europe.

The loss prevention department has noticed that a number of store sales and clerical staff are helping themselves to "five-finger bonuses." The staff members use various ruses to take money from cash registers, obtain goods without paying for them, or otherwise embezzle money or steal goods from Big Box. These schemes typically unfold over periods of several days or weeks. For example, employees will make purchases using their employee discount at the store where they work, then return the product for full price at another store where they are not known, or have an accomplice return the goods for a full refund. The various methods used to steal from Big Box fall into recognized patterns, and a good deal of this theft can be uncovered by analyzing patterns of sales transactions. Standard ETL techniques will be used to import data concerning the stores, products, and employees to a database where we can analyze these patterns and detect employee theft.

We have been tasked with designing a system that will import comma-delimited files exported by the point of sale (POS) systems into a SQL Server database that will then perform the analysis. Data concerning each sale will be sent from each of the POS systems. The files will hold all or part of the prior day's sales and will range from 30,000 to over 2.5 million rows of data per file. For stores that have "regular" business hours, files will become available approximately one hour after the stores close. This time will vary based on the day of the week and the time of year. During "normal" operations, stores typically close at 9:00 PM local time.
During certain peak shopping periods (for example, Christmas or local holiday periods) stores remain open until midnight, local time. Convenience stores are opened 24 hours per day, 7 days per week. Data will be sent for these stores after the POS system has closed the books on the prior day, typically at 1:00 AM local time. The POS systems can be extended to periodically expose "final" sales to the system throughout the business day via a web service. The impact of using this method during a peak sales period is unknown, and performance of the POS may degrade. A full day's data may also be extracted from the POS system in the comma-delimited format discussed as follows. The web service would expose the data using the natural hierarchy of "sales header" and "sales detail." All data must be loaded and available to the loss prevention department by 9 AM CET for European stores and 9 AM EST for North American stores. It should be noted that the different POS use different data types to identify stores, employees, products, and sales transactions. The load job must account for this and properly relate the data from the store to the master data loaded in a separate application. The data will be sent in two comma-delimited files, one containing the "Sales Header" data and one containing the sales details. The data will be in the following format: Sales Header SalesID, StoreID, EmployeeID, EmployeeFirstName, EmployeeLastName, RegisterID, RegisterLocation, storeAddress, StoreCity, StoreProvince, StorePostalCode, CustomerID, CustomerFirstName, CustomerLastName, CustomerPostalCode, Date, Time, Method of Payment, CreditCardNumber, TotalSales, Amount Tendered, Change, PriorSalesID, Return Sales Detail SalesID, ProductID, Quantity, markedPrice, ActualPrice, ReturnItem, DiscountCode, DiscountPercent, DiscountDescription, OriginalPurchaseDate, OriginalPurchaseStore, OriginalPurchaseSalesID, originalCustomerID, OriginalFirstName, OriginalLastName, OriginalStoreID, OriginalRegisterID, OriginalEmployeeID Key requirements Our mission is to move this data into a data mart that will use a standard star schema for this analysis. Big Box intended to prosecute employees for larceny or theft offences based on evidence this system gathers. Given the legal requirements that evidence gathered through this process must stand up in court, it is vital that this data be correct and loaded with a minimal number of errors or issues. Additional facts As is fairly typical, the use case above does not contain information on all of the facts we would need to consider when designing a solution. Every company has operating assumptions that the enterprise takes as a "given" and others we learn through our own involvement with the enterprise. These "facts" are so ingrained into an organization's culture that people may not even recognize the need to explicitly state these requirements. For example, if a consultant arrives at a company in Denver, CO that only does business in the United States, then he or she can expect that the business language will be English with US spelling. The exact same company in Calgary, doing business in Canada will need both English with British spelling and French. It is doubtful one would ever see such "requirements" stated explicitly, but anyone designing a solution would do well to keep them in mind. Other facts may be extrapolated or derived from the given requirements. When you are designing a solution you must take these criteria into account as well. 
It would be at best unwise to design a solution that is beyond the skill set of the IT staff, for example. In this case, it is probably safe to say the following:

Fact or extrapolation: Big Box has a very sophisticated IT staff that can handle advanced technologies.
Reason: They are currently handling multiple POS systems on two continents and already do sophisticated ETL work from these systems to existing BI systems.

Fact or extrapolation: Getting the deliverable "right" is more important than getting it done "fast".
Reason: Legal requirements for using data as evidence.

Fact or extrapolation: Data must be secure during movement to avoid allegations of evidence tampering.
Reason: Legal requirements for using data as evidence.

Fact or extrapolation: Some level of operational control and monitoring must be built into the application we will design.
Reason: Common courtesy to the Network Operations Center (NOC) staff who will deal with this, if nothing else.

Candidate architectures

We can tackle this problem from multiple angles, so let us take a look at the available options.

Candidate architecture #1–SSIS

First, we will explore the pros and cons of using SSIS for our solution platform.

Solution design aspects

This scenario is the sweet spot for SSIS. SSIS is, first and foremost, an ETL and batch data processing tool. SSIS can easily read multiple files from a network drive and has the tools out of the box to debatch, either before or after loading to a database. Nonetheless, we are faced with certain hurdles that will need to be accounted for in our design. We do not control precisely when the POS data will be made available. There are a number of variables that influence that timing, not the least of which is the potential need for human intervention in closing the books for the day, and the variable times throughout the year and across the globe when a particular store's books will be closed. We need to expect that files will be delivered over a time range. In some ways this is helpful, as it spreads some of the load over time. One of the great things about SSIS in this situation is the flexibility it provides. We can load all of the data in a single batch to a staging table and then move it (debatch) to its final destinations using SQL, or we can debatch on the application side and load directly to the final tables, or any combination that suits us and the strengths of the development team. SSIS can also be extended to monitor directories and load data when it becomes available. Finally, SSIS integrates easily into NOC monitoring systems and provides the ability to guarantee data security and integrity as required for this application. Moreover, SSIS does not incur any additional licensing costs, as it ships with SQL Server out of the box.

Solution delivery aspects

It is not clear from our use case what depth of experience Big Box staff has with SSIS. However, they certainly have experience with database technologies, SQL queries, and other advanced technologies associated with data transfer and integration, given the size of the enterprise operations. We can reasonably expect them to pick up any unfamiliar technologies quickly and easily. This application will require some extensions to the typical ETL paradigm. Here, the data must go through some amount of human intervention during the daily "closing" before it is made available. This will involve tasks such as physically counting cash to make sure it matches the records in the POS system. Any number of factors can accelerate or delay the completion of this task.
SSIS will therefore need to monitor the directories where data are delivered to ensure the data is available. Also, we will need to design the system so that it does not attempt to load partially completed files. This is a classic ETL problem with many potential solutions and certainly does not present insurmountable issues. Solution operations aspects In this case, we have one vitally important operational requirement; the solution must guarantee data integrity and security so that the data can be used to prosecute thieves or otherwise stand up to evidentiary rules. SSIS and SQL Server 2008 Enterprise Edition can handle these requirements. SQL Server 2008 security and data access auditing features will meet chain of custody requirements and ensure that no data tampering occurred. SSIS can enforce business rules programmatically to ensure the precise and accurate transfer of the data sent by the POS systems. Many of these requirements will be filled with the design of the database itself. We would use, for example, the data access auditing now available with SQL Server 2008 to monitor who has been working with data. The database would use only Windows-based security, not SQL Server based security. Other steps to harden SQL Server against attack should be taken. All the previously mentioned features secure the data while at rest. We will need to focus on how to ensure data integrity during the transfer of the data—while the data is in motion. SSIS has logging tools that will be used to monitor unsuccessful data transfers. Moreover, we can extend these tools to ensure either a complete data load or that we will have an explanation for any failure to load. It should be noted that the loss prevention staff is interested in outliers, so they will want to carefully examine data that fails to meet business requirements (and therefore fails to load to our target system) to look for patterns of theft. Organizational aspects We understand that Big Box staff has the technical wherewithal to handle this relatively simple extension to existing SQL Server technologies. This is a group of database professionals who deal with multiple stores performing over 2 million transactions per day. They support the POS, financial, inventory, and other systems required to handle this volume on two continents. This is a small step for them in terms of their ability to live with this solution. Solution evaluation Candidate architecture #2–BizTalk Server While not primarily targeted at bulk data solutions, BizTalk Server can parse large inbound data sets, debatch the individual records, and insert them into a target system. Solution design aspects The POS systems that provide sales data to the Big Box data hub typically produce comma-delimited files. Using BizTalk Server, we can define the document structure of delimited files and natively accept and parse them. The requirements earlier also stated that the POS systems could be extended to publish a more real-time feed via web services as opposed to the daily file drop of data. This is more in tune with how BizTalk does standard processing (real-time data feeds) and would be a preferred means to distribute data through the BizTalk bus. BizTalk Server's SQL Server adapter is built to insert a record at a time into a database. This means that the BizTalk solution needs to break apart these large inbound data sets and insert each record individually into the final repository. 
Messages are debatched automatically in BizTalk via pipeline components and specially defined schemas, but this is a CPU-intensive process. We would want to isolate the servers that receive and parse these data sets so that the high CPU utilization doesn't impede other BizTalk-based solutions from running.

Solution delivery aspects

Big Box leverages SQL Server all across the organization, but does not currently have a BizTalk footprint. This means that they'll need to set up a small infrastructure to host this software platform. They do have developers well-versed in .NET development and have typically shown a penchant for utilizing external consultants to design and implement large enterprise solutions. It would be critical for them to build up a small center of excellence in BizTalk to ensure that maintenance of this application and the creation of new ones can progress seamlessly.

Solution operations aspects

BizTalk Server provides strong operational support through tooling, scripting, and monitoring. If the downstream database becomes unavailable, BizTalk will queue up the messages that have yet to be delivered. This ensures that no sales information gets lost in transit and provides a level of guarantee that the data mart is always accurate. Given the relatively large sets of data, the operations team will need to configure a fairly robust BizTalk environment, which can handle the CPU-intensive debatching and perform the database inserts in a timely fashion.

Organizational aspects

Big Box would be well served by moving to a more real-time processing solution in the near future. This way, they can do more live analysis and not have to wait until daily intervals to acquire the latest actionable data. A messaging-based solution that relies on BizTalk Server is more in tune with that vision. However, this is a critical program and speed to market is a necessity. Big Box accepts a high level of risk in procuring a new enterprise software product and getting the environments and resources in place to design, develop, and support solutions built upon it.

Solution evaluation

Architecture selection

SQL Server and SSIS

Benefits:
- Easily deployed and extensible ETL tool
- Designed to handle batch processing of large files, exactly the task at hand
- No additional licensing costs - comes with SQL Server
- Can be built and maintained by current staff

Risks:
- Need to build sophisticated error handling systems

BizTalk Server

Benefits:
- Provides for live, real-time analysis
- Can leverage BizTalk capability to send events to downstream transactional systems
- Enterprise-class hosting infrastructure

Risks:
- CPU-intensive processes
- High database process overhead
- Additional licensing and capital costs
- Not clear if staff has the skills to support the product

When all is said and done, this is exactly the scenario that SSIS was designed to handle: a batch load to a data mart. Moreover, the selection of SSIS entails no additional licensing costs, as might be the case with BizTalk.
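To make the staging-table approach described earlier concrete, the following C# sketch checks that a POS extract is no longer being written, bulk-loads it into a staging table, and then debatches it into the permanent table with a set-based SQL statement. It is a minimal illustration only; in the selected solution this logic would live inside SSIS packages, and the table names, column subset, and connection string shown here are assumptions.

using System;
using System.Data;
using System.Data.SqlClient;
using System.IO;
using Microsoft.VisualBasic.FileIO; // TextFieldParser, used here only for simple CSV parsing

public static class SalesHeaderLoader
{
    // A file exported by a POS system is treated as "ready" once no other process holds it open.
    public static bool IsFileReady(string path)
    {
        try
        {
            using (File.Open(path, FileMode.Open, FileAccess.Read, FileShare.None))
                return true;
        }
        catch (IOException)
        {
            return false; // still being written; try again later
        }
    }

    public static void LoadToStaging(string csvPath, string connectionString)
    {
        var table = new DataTable();
        table.Columns.Add("SalesID");
        table.Columns.Add("StoreID");
        table.Columns.Add("EmployeeID");
        // ...remaining Sales Header columns omitted for brevity

        using (var parser = new TextFieldParser(csvPath) { TextFieldType = FieldType.Delimited })
        {
            parser.SetDelimiters(",");
            while (!parser.EndOfData)
            {
                string[] fields = parser.ReadFields();
                table.Rows.Add(fields[0], fields[1], fields[2]);
            }
        }

        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();

            // 1. Bulk load the raw rows into a staging table.
            using (var bulkCopy = new SqlBulkCopy(connection) { DestinationTableName = "staging.SalesHeader" })
                bulkCopy.WriteToServer(table);

            // 2. Debatch with set-based SQL into the permanent table (assumed schema).
            using (var command = new SqlCommand(
                "INSERT INTO dbo.SalesHeader (SalesID, StoreID, EmployeeID) " +
                "SELECT SalesID, StoreID, EmployeeID FROM staging.SalesHeader;", connection))
            {
                command.CommandTimeout = 0; // multi-million row batches can take a while
                command.ExecuteNonQuery();
            }
        }
    }
}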
Content Based Routing on Microsoft Platform

Packt
16 Sep 2010
9 min read
Use case McKeever Technologies is a medium-sized business, which manufactures latex products. They have recently grown in size through a series of small acquisitions of competitor companies. As a result, the organization has a mix of both home-grown applications and packaged line-of-business systems. They have not standardized their order management software and still rely on multiple systems, each of which houses details about a specific set of products. Their developers are primarily oriented towards .NET, but there are some parts of the organization that have deep Java expertise. Up until now, orders placed with McKeever Technologies were faxed to a call center and manually entered into the order system associated with the particular product. Also, when customers want to discover the state of their submitted order, they are forced to contact McKeever Technologies' call center and ask an agent to look up their order. The company realizes that in order to increase efficiency, reduce data entry error, and improve customer service they must introduce some automation to their order intake and query processes. McKeever Technologies receives less than one thousand orders per day and does not expect this number to increase exponentially in the coming years. Their current order management systems have either Oracle or SQL Server database backends and some of them offer SOAP service interfaces for basic operations. These systems do not all maintain identical service-level agreements; so the solution must be capable of handling expected or unexpected downtime of the target system gracefully. The company is looking to stand up a solution in less than four months while not introducing too much additional management overhead to an already over-worked IT maintenance organization. The solution is expected to live in production for quite some time and may only be revisited once a long-term order management consolidation strategy can be agreed upon. Key requirements The following are key requirements for a new software solution: Accept inbound purchase requests and determine which system to add them to based on which product has been ordered Support a moderate transaction volume and reliable delivery to target systems Enable communication with diverse systems through either web or database protocols. Additional facts The technology team has acquired the following additional facts that will shape their proposed solution: The number of order management systems may change over time as consolidation occurs and new acquisitions are made. A single customer may have orders on multiple systems. For example, a paint manufacturer may need different types of latex for different products. The customers will want a single view of all orders notwithstanding which order entry system they reside on. The lag between entry of an order and its appearance on a customer-facing website should be minimal (less than one hour). All order entry systems are on the same network. There are no occasionally connected systems (for example, remote locations that may potentially lose their network connectivity). Strategic direction is to convert Oracle systems to Microsoft SQL Server and Java to C#. The new order tracking system does not need to integrate with order fulfillment or other systems at launch. There are priorities for orders (for example, "I need it tomorrow" requires immediate processing and overnight shipment versus "I need it next week"). Legacy SQL Servers are SQL Server 2005 or 2008. No SQL Server 2000 systems. 
Pattern description The organization is trying to streamline data entry into multiple systems that perform similar functions. They wish to take in the same data (an order), but depending on attributes of the order, it should be loaded into one system or another. This looks like a content-based routing scenario. What is content-based routing? In essence, it is distributing data based on the values it contains. You would typically use this sort of pattern when you have a single capability (for example, ADD ORDER, LOOKUP EMPLOYEE, DELETE RESERVATION) spread across multiple systems. Unlike a publish/subscribe pattern where multiple downstream systems may all want the same message (that is, one-to-many), a content-based routing solution typically helps you steer a message to the system that can best handle the request. What is an alternative to implementing this routing pattern? You could define distinct channels for each downstream system and force the caller to pick the service they wish to consume. That is, for McKeever Technologies, the customer would call one service if they were ordering products A, B, or C, and use another service for products D, E, or F. This clearly fails the SOA rules of abstraction or encapsulation and forces the clients to maintain knowledge of the backend processing. The biggest remaining question is what is the best way to implement this pattern. We would want to make sure that the routing rules were easily maintained and could be modified without expensive redeployments or refactoring. Our routing criteria should be rich enough so that we can make decisions based on the content itself, header information, or metadata about the transmission. Candidate architectures A team of technologists have reviewed the use case and drafted three candidate solutions. Each candidate has its own strengths and weaknesses, but one of them will prove to be the best choice. Candidate architecture #1–BizTalk Server A BizTalk Server-based solution seems to be a good fit for this customer scenario. McKeever Technologies is primarily looking to automate existing processes and communicate with existing systems, which are both things that BizTalk does well. Solution design aspects We are dealing with a fairly low volume of data (1000 orders per day, and at most, 5000 queries of order status) and small individual message size. A particular order or status query should be no larger than 5KB in size, meaning that this falls right into the sweet spot of BizTalk data processing. This proposed system is responsible for accepting and processing new orders, which means that reliable delivery is critical. BizTalk can provide built-in quality of service, guaranteed through its store-and-forward engine, which only discards a message after it has successfully reached its target endpoint. Our solution also needs to be able to communicate with multiple line-of-business systems through a mix of web service and database interfaces. BizTalk Server offers a wide range of database adapters and natively communicates with SOAP-based endpoints. We are building a new solution which automates a formerly manual process, so we should be able to design a single external interface for publishing new orders and querying order status. But, in the case that we have to support multiple external-facing contracts, BizTalk Server makes it very easy to transform data to canonical messages at the point of entry into the BizTalk engine. 
This means that the internal processing of BizTalk can be built to support a single data format, while we can still enable slight variations of the message format to be transmitted by clients. Similarly, each target system will have a distinct data format that its interface accepts. Our solution will apply all of its business logic on the canonical data format and transform the data to the target system format at the last possible moment. This will make it easier to add new downstream systems without unsettling the existing endpoints and business logic. From a security standpoint, BizTalk allows us to secure the inbound transport channel and message payload on its way into the BizTalk engine. If transport security is adequate for this customer, then an SSL channel can be set up on the external facing interface. To assuage any fears of the customer that system or data errors can cause messages to get lost or "stuck", it is critical to include a proactive exception handling aspect. BizTalk Server surfaces exceptions through an administrator console. However, this does not provide a business-friendly way to discover and act upon errors. Fortunately for us, BizTalk enables us to listen for error messages and either re-route those messages or spin up an error-specific business process. For this customer, we could recommend either logging errors to a database where business users leverage a website interface to view exceptions, or, we can publish messages to a SharePoint site and build a process around fixing and resubmitting any bad orders. For errors that require immediate attention, we can also leverage BizTalk's native capability to send e-mail messages. We know that McKeever Technologies will eventually move to a single order processing system, so this solution will undergo changes at some point in the future. Besides this avenue of change, we could also experience changes to the inbound interfaces, existing downstream systems, or even the contents of the messages themselves. BizTalk has a strong "versioning" history that allows us to build our solution in a modular fashion and isolate points of change. Solution delivery aspects McKeever Technologies is not currently a BizTalk shop, so they will need to both acquire and train resources to effectively build their upcoming solution. Their existing developers, who are already familiar with Microsoft's .NET Framework, can learn how to construct BizTalk solutions in a fairly short amount of time. The tools to build BizTalk artifacts are hosted within Visual Studio.NET and BizTalk projects can reside alongside other .NET project types. Because the BizTalk-based messaging solution has a design paradigm (for example, publish/subscribe, distributed components to chain together) different from that of a typical custom .NET solution, understanding the toolset alone will not ensure delivery success. If McKeever Technologies decides to bring in a product like BizTalk Server, it will be vital for them to engage an outside expert to act as a solution architect and leverage their existing BizTalk experience when building this solution. Solution operation aspects Operationally, BizTalk Server provides a mature, rich interface for monitoring solution health and configuring runtime behavior. There is also a strong underlying set of APIs that can be leveraged using scripting technologies so that automation of routine tasks can be performed. 
While BizTalk Server has tools that will feel familiar to a Windows Administrator, the BizTalk architecture is unique in the Microsoft ecosystem and will require explicit staff training. Organizational aspects BizTalk Server would be a new technology for McKeever technologies so definitely there is risk involved. It becomes necessary to purchase licenses, provision environments, train users, and hire experts. While these are all responsible things to do when new technology is introduced, this does mean a fairly high startup cost to implement this solution. That said, McKeever technologies will need a long term integration solution as they attempt to modernize their IT landscape and be in better shape to absorb new organizations and quickly integrate with new systems. An investment in an enterprise service bus like BizTalk Server will pay long term dividends even if initial costs are high. Solution evaluation
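Stripped of any product, the routing decision at the heart of this pattern is small enough to show in a few lines of C#. The sketch below is purely illustrative of the content-based routing idea discussed earlier: the product-to-system map and endpoint names are invented, and it says nothing about how BizTalk (or any other candidate) would actually express the rule.

using System;
using System.Collections.Generic;

public class OrderMessage
{
    public string ProductId { get; set; }
    public string Payload { get; set; }
}

public class ContentBasedRouter
{
    // Routing table: which order management system owns which products (illustrative values only).
    private readonly Dictionary<string, string> routesByProductId =
        new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase)
        {
            { "LATEX-GLOVE", "http://systema/orderservice" },
            { "LATEX-PAINT", "http://systemb/orderservice" }
        };

    public string ResolveEndpoint(OrderMessage order)
    {
        // The content of the message (the Product ID element) decides the destination;
        // the caller never needs to know which backend system will handle the order.
        string endpoint;
        if (routesByProductId.TryGetValue(order.ProductId, out endpoint))
            return endpoint;

        throw new InvalidOperationException(
            "No order management system is registered for product " + order.ProductId);
    }
}

Keeping the routing table in configuration rather than code is what allows the rules to change without redeployment, which is exactly the maintainability concern raised in the pattern description.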
Python: Unit Testing with Doctest

Packt
15 Sep 2010
12 min read
What is Unit testing and what is it not?

The title of this section begs another question: "Why do I care?" One answer is that Unit testing is a best practice that has been evolving toward its current form over most of the time that programming has existed. Another answer is that the core principles of Unit testing are just good sense; it might actually be a little embarrassing to our community as a whole that it took us so long to recognize them.

Alright, so what is Unit testing? In its most fundamental form, Unit testing can be defined as testing the smallest meaningful pieces of code (such pieces are called units), in such a way that each piece's success or failure depends only on itself. For the most part, we've been following this principle already. There's a reason for each part of this definition: we test the smallest meaningful pieces of code because, when a test fails, we want that failure to tell us where the problem is as specifically as possible. We make each test independent because we don't want a test to make any other test succeed when it should have failed, or fail when it should have succeeded. When tests aren't independent, you can't trust them to tell you what you need to know.

Traditionally, automated testing is associated with Unit testing. Automated testing makes it fast and easy to run unit tests, which tend to be amenable to automation. We'll certainly make heavy use of automated testing with doctest and later with tools such as unittest and Nose as well.

Any test that involves more than one unit is automatically not a unit test. That matters because the results of such tests tend to be confusing. The effects of the different units get tangled together, with the end result that not only do you not know where the problem is (is the mistake in this piece of code, or is it just responding correctly to bad input from some other piece of code?), you're also often unsure exactly what the problem is: this output is wrong, but how does each unit contribute to the error? Empirical scientists must perform experiments that check only one hypothesis at a time, whether the subject at hand is chemistry, physics, or the behavior of a body of program code.

Time for action – identifying units

Imagine that you're responsible for testing the following code:

class testable:
    def method1(self, number):
        number += 4
        number **= 0.5
        number *= 7
        return number

    def method2(self, number):
        return ((number * 2) ** 1.27) * 0.3

    def method3(self, number):
        return self.method1(number) + self.method2(number)

    def method4(self):
        return 1.713 * self.method3(id(self))

1. In this example, what are the units? Is the whole class a single unit, or is each method a separate unit? How about each statement, or each expression? Keep in mind that the definition of a unit is somewhat subjective (although never bigger than a single class), and make your own decision.
2. Think about what you chose. What would the consequences have been if you chose otherwise? For example, if you chose to think of each method as a unit, what would be different if you chose to treat the whole class as a unit?
3. Consider method4. Its result depends on all of the other methods working correctly. On top of that, it depends on something that changes from one test run to another, the unique ID of the self object. Is it even possible to treat method4 as a unit in a self-contained test? If we could change anything except method4, what would we have to change to enable method4 to run in a self-contained test and produce a predictable result?
What just happened? By answering those three questions, you thought about some of the deeper aspects of unit testing. The question of what constitutes a unit, is fundamental to how you organize your tests. The capabilities of the language affects this choice. C++ and Java make it difficult or impossible to treat methods as units, for example, so in those languages each class is usually treated as a single unit. C, on the other hand, doesn't support classes as language features at all, so the obvious choice of unit is the function. Python is flexible enough that either classes or methods could be considered units, and of course it has stand-alone functions as well, which are also natural to think of as units. Python can't easily handle individual statements within a function or method as units, because they don't exist as separate objects when the test runs. They're all lumped together into a single code object that's part of the function. The consequences of your choice of unit are far-reaching. The smaller the units are, the more useful the tests tend to be, because they narrow down the location and nature of bugs more quickly. For example, one of the consequences of choosing to treat the testable class as a single unit is that tests of the class will fail if there is a mistake in any of the methods. That tells you that there's a mistake in testable, but not (for example) that it's in method2. On the other hand, there is a certain amount of rigmarole involved in treating method4 and its like as units. Even so, I recommend using methods and functions as units most of the time, because it pays off in the long run. In answering the third question, you probably discovered that the functions id and self.method3 would need to have different definitions, definitions that produced a predictable result, and did so without invoking code in any of the other units. In Python, replacing the real function with such stand-ins is fairly easy to do in an ad hoc manner. Unit testing throughout the development process We'll walk through the development of a single class, treating it with all the dignity of a real project. We'll be strictly careful to integrate unit testing into every phase of the project. This may seem silly at times, but just play along. There's a lot to learn from the experience. The example we'll be working with is a PID controller. The basic idea is that a PID controller is a feedback loop for controlling some piece of real-world hardware. It takes input from a sensor that can measure some property of the hardware, and generates a control signal that adjusts that property toward some desired state. The position of a robot arm in a factory might be controlled by a PID controller. If you want to know more about PID controllers, the Internet is rife with information. The Wikipedia entry is a good place to start: http://en.wikipedia.org/wiki/PID_controller. Design phase Our notional client comes to us with the following (rather sparse) specification: We want a class that implements a PID controller for a single variable. The measurement, setpoint, and output should all be real numbers. We need to be able to adjust the setpoint at runtime, but we want it to have a memory, so that we can easily return to the previous setpoint. Time for action - unit testing during design Time to make that specification a bit more formal—and complete—by writing unit tests that describe the desired behavior. We need to write a test that describes the PID constructor. 
After checking our references, we determine that a PID controller is defined by three gains and a setpoint. The controller has three components: proportional, integral, and derivative (hence the name PID). Each gain is a number that determines how much effect one of the three parts of the controller has on the final result. The setpoint determines the goal of the controller; in other words, where it's trying to move the controlled variable. Looking at all that, we decide that the constructor should just store the gains and the setpoint, along with initializing some internal state that we know we'll need from reading up on the workings of a PID controller:

>>> import pid
>>> controller = pid.PID(P=0.5, I=0.5, D=0.5, setpoint=0)
>>> controller.gains
(0.5, 0.5, 0.5)
>>> controller.setpoint
[0.0]
>>> controller.previous_time is None
True
>>> controller.previous_error
0.0
>>> controller.integrated_error
0.0

We need to write tests that describe measurement processing. This is the controller in action, taking a measured value as its input and producing a control signal that should smoothly move the measured variable to the setpoint. For this to work correctly, we need to be able to control what the controller sees as the current time. After that, we plug our test input values into the math that defines a PID controller, along with the gains, to figure out what the correct outputs would be:

>>> import time
>>> real_time = time.time
>>> time.time = (float(x) for x in xrange(1, 1000)).next
>>> pid = reload(pid)
>>> controller = pid.PID(P=0.5, I=0.5, D=0.5, setpoint=0)
>>> controller.measure(12)
-6.0
>>> controller.measure(6)
-3.0
>>> controller.measure(3)
-4.5
>>> controller.measure(-1.5)
-0.75
>>> controller.measure(-2.25)
-1.125
>>> time.time = real_time

We need to write tests that describe setpoint handling. Our client asked for a setpoint stack, so we write tests that check such stack behavior. Writing code that uses this stack behavior brings to our attention the fact that a PID controller with no setpoint is not a meaningful entity, so we add a test that checks that the PID class rejects that situation by raising an exception.

>>> pid = reload(pid)
>>> controller = pid.PID(P=0.5, I=0.5, D=0.5, setpoint=0)
>>> controller.push_setpoint(7)
>>> controller.setpoint
[0.0, 7.0]
>>> controller.push_setpoint(8.5)
>>> controller.setpoint
[0.0, 7.0, 8.5]
>>> controller.pop_setpoint()
8.5
>>> controller.setpoint
[0.0, 7.0]
>>> controller.pop_setpoint()
7.0
>>> controller.setpoint
[0.0]
>>> controller.pop_setpoint()
Traceback (most recent call last):
ValueError: PID controller must have a setpoint

What just happened?

Our clients gave us a pretty good initial specification, but it left a lot of details to assumption. By writing these tests, we've codified exactly what our goal is. Writing the tests forced us to make our assumptions explicit. Additionally, we've gotten a chance to use the object, which gives us an understanding of it that would otherwise be hard to get at this stage. Normally we'd place the doctests in the same file as the specification, and in fact that's what you'll find in the book's code archive. In the book format, we used the specification text as the description for each step of the example. You could ask how many tests we should write for each piece of the specification. After all, each test is for certain specific input values, so when code passes it, all it proves is that the code produces the right results for that specific input.
The code could conceivably do something entirely wrong, and still pass the test. The fact is that it's usually a safe assumption that the code you'll be testing was supposed to do the right thing, and so a single test for each specified property fairly well distinguishes between working and non-working code. Add to that tests for any boundaries specified—for "The X input may be between the values 1 and 7, inclusive" you might add tests for X values of 0.9 and 7.1 to make sure they weren't accepted—and you're doing fine. There were a couple of tricks we pulled to make the tests repeatable and independent. In every test after the first, we called the reload function on the pid module, to reload it from the disk. That has the effect of resetting anything that might have changed in the module, and causes it to re-import any modules that it depends on. That latter effect is particularly important, since in the tests of measure, we replaced time.time with a dummy function. We want to be sure that the pid module uses the dummy time function, so we reload the pid module. If the real time function is used instead of the dummy, the test won't be useful, because there will be only one time in all of history at which it would succeed. Tests need to be repeatable. The dummy time function is created by making an iterator that counts through the integers from 1 to 999 (as floating point values), and binding time.time to that iterator's next method. Once we were done with the time-dependent tests, we replaced the original time.time. Right now, we have tests for a module that doesn't exist. That's good! Writing the tests was easier than writing the module will be, and it gives us a stepping stone toward getting the module right, quickly and easily. As a general rule, you always want to have tests ready before the code that they test is written. Have a go hero Try this a few times on your own: Describe some program or module that you'd enjoy having access to in real life, using normal language. Then go back through it and try writing tests, describing the program or module. Keep an eye out for places where writing the test makes you aware of ambiguities in your prior description, or makes you realize that there's a better way to do something.
Using Oracle Service Bus Console

Packt
15 Sep 2010
9 min read
To log into Oracle Service Bus Console, we have to open a web browser and access the following URL: http://host_name:port/sbconsole, where host_name is the name of the host on which OSB is installed and port is a number that is set during the installation process. We log in as user weblogic. The Oracle Service Bus Console opens, as shown in the following screenshot:

The Dashboard page is opened by default, displaying information about alerts. We will show how to define and monitor alerts later in this article. In the upper-left corner, we can see the Change Center. The Change Center is key to making configuration changes in OSB. Before making any changes, we have to create a new session by clicking the Create button. Then, we are able to make different changes without disrupting existing services. When finished, we activate all changes by clicking Activate. If we want to roll back the changes, we can click the Discard button. We can also view all changes before activating them and write a comment.

Creating a project and importing resources from OSR

First, we have to create a new session by clicking the Create button in the Change Center. Next, we will create a new project. OSB uses projects to allow logical grouping of resources and to better organize related parts of large development projects. We click on the Project Explorer link in the main menu. In the Projects page, we enter the name of the project (TravelApproval) and click Add Project. The new project is now shown in the projects list on the left side in the Project Explorer. We click on the project. Next, we add folders to the project, as we want to group resources by type. To create a folder, we enter the folder name in the Enter New Folder Name field and click Add folder. We add six folders: BusinessServices, ProxyServices, WSDL, XSD, XSLT, and AlertDestinations.

Next, we have to create resources. We will show how to import a service and all related resources from the UDDI registry. Before creating a connection to the UDDI registry, we will activate the current session. First, we review all changes. We click the View Changes link in the Change Center. We can see the list of all changes in the current session. We can also undo changes by clicking the undo link in the last column. Now, we activate the session by clicking on the Activate button. The Activate Session page opens. We can add a description to the session and click Submit. Now, all changes made are activated.

Creating a connection to Oracle Service Registry

First, we start a new session in the Change Center. Then we click on the System Administration link in the main menu. We click on UDDI Registries and then Add Registry on the right side of the page. We enter the connection parameters and click Save. Now, the registry is listed in the UDDI Registries list, as shown next:

We can optionally activate the current session. In that case, we have to create a new session before importing resources from UDDI.

Importing resources from Oracle Service Registry

We click on the Import from UDDI link on the left-hand side. As there is only one connection to the registry, this connection is selected by default. First, we have to select the Business Entity. We select Packt Publishing. Then we click on the Search button to display all services of the selected business entity. In the next screenshot, we can see that currently there is only one service published. We select the service and click Next.
In the second step, we select the project and folder where we want to save the resources. We select the TravelApproval project and the BusinessServices folder and click Next. On the final screen, we just click the Import button. Now we can see that a business service, a WSDL, and three XSD resources have been created. All resources have been created automatically, as we imported a service from the UDDI registry. If we create resources by hand, we first have to create the XML Schema and WSDL resources, and then the Business service. As all resources have been saved to the BusinessServices folder, we have to move them to the appropriate folders based on their type. We go back to the Project Explorer and click on the BusinessServices folder in the TravelApproval project. We can see all imported resources in the Resources list at the bottom of the page. We can move resources by clicking on the Move Resource icon and then selecting the target folder. We move the WSDL resource to the WSDL folder and the XML Schemas to the XSD folder.

Configuring a business service

If we want to monitor service metrics, such as average response time, number of messages, and number of errors, we have to enable monitoring of the business service. We will also show how to improve performance by enabling service result caching, which is a new feature in OSB 11g PS2.

Enabling service result caching

OSB supports service result caching through the use of Oracle Coherence, which is an in-memory data grid solution. In this way, we can dramatically improve performance if the response of the business service is relatively static. To enable the use of service result caching globally, we have to open Operations | Global Settings and set Enable Result Caching to true. In the Project Explorer, we click on our Business service. On the Configuration Details tab, we will enable service result caching. We scroll down and edit the Message Handling Configuration. Then we expand the Advanced Settings. We select the Result Caching checkbox. Next, we have to specify the cache token, which uniquely identifies a single cache result. This is usually an ID field. In our simplified example, we do not have an ID field; therefore, we will use the employee last name for testing purposes. We enter the following cache token expression: $body/emp:employee/LastName. Then we set the expiration time to 20 minutes. Then, we click Next and Save. Now, if the business service locates cached results through a cache key, it returns those cached results to the client instead of invoking the external service. If the result is not cached, the business service invokes the external service, returns the result to the client, and stores the result in cache. Service result caching works only when the business service is invoked from a proxy service.
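To make the caching semantics concrete — a result keyed by a cache token and discarded after a fixed expiration — here is a small, purely conceptual C# sketch. It illustrates only the lookup-by-token behaviour configured above; it is not how OSB or Oracle Coherence is implemented, and the service delegate name and the 20-minute window are taken from the example configuration.

using System;
using System.Runtime.Caching;

public class ResultCachingFacade
{
    private readonly MemoryCache cache = MemoryCache.Default;
    private readonly Func<string, string> callBackendService; // stands in for the business service call
    private static readonly TimeSpan Expiration = TimeSpan.FromMinutes(20);

    public ResultCachingFacade(Func<string, string> callBackendService)
    {
        this.callBackendService = callBackendService;
    }

    // cacheToken plays the role of the $body/emp:employee/LastName expression.
    public string GetEmployeeTravelStatus(string cacheToken)
    {
        var cached = cache.Get(cacheToken) as string;
        if (cached != null)
            return cached;                       // served from cache, backend not invoked

        string result = callBackendService(cacheToken);
        cache.Add(cacheToken, result, DateTimeOffset.Now.Add(Expiration));
        return result;
    }
}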
Enabling service monitoring

Again, we click on our Business service and then click on the Operational Settings tab. We select the Enabled checkbox next to Monitoring and set the Aggregation Interval to 20 minutes. The aggregation interval is the sliding window of time over which metrics are computed. We can also define SLA alerts, which are based on these metrics. We click Update to save the changes. Then, we activate the changes by clicking on the Activate button in the Change Center.

Testing a business service

After activating the changes, we can test the business service using the Test Console. To open the console, we select the BusinessServices folder and then click on the bug icon next to the Business service. The Test Console opens. We set the XML payload and click the Execute button. After executing the Business service, we can see the response message as shown in the next screenshot:

Creating an Alert destination

Before creating a proxy service, we will create an Alert Destination resource, which will later be used for sending e-mail alerts to the administrator. Remember that we have already created the AlertDestinations folder. To be able to send e-mail alerts, we first have to configure the SMTP server on the System Administration page. To create an Alert destination, we navigate to the AlertDestinations folder and then select Alert Destination from the Create Resource drop-down. We set the name to Administrator and add an e-mail recipient by clicking the Add button. We enter the recipient e-mail address (we can add more recipients) and select the SMTP server. Then we click Save twice.

Creating a proxy service

Although at first sight it might seem redundant, using a proxy service instead of calling the original business service directly has several advantages. If we add a proxy service between the service consumer and the original service, we gain transparency. Through OSB, we can monitor and supervise the service and control the inbound and outbound messages. This becomes important when changes happen. For example, when a service interface or the payload changes, the proxy service can mask the changes to all service consumers that have not yet been upgraded to use the new version. This is, however, not the only benefit. A proxy service can enable authentication and authorization when accessing a service. It can provide a means to monitor service SLAs, and much more. Therefore, it often makes sense to consider using proxy services.

We will show an example to demonstrate the capabilities of proxy services. We will create a proxy service that will contain the message processing logic and will be used to decouple service clients from the service provider. Our proxy service will validate the request against the corresponding XML schema. It will also perform error handling and alert the service administrator of any problems with the service execution.

First, we start a new session (if there is no active session) by clicking the Create button in the Change Center. Then we navigate to the ProxyServices folder in the Project Explorer. We click on the Create Resources drop-down and select Proxy Service. The General Configuration page opens. We set the name of the proxy service to EmployeeTravelStatusServiceProxy. We also have to define the interface of the service. We select the Business service, as we want the proxy service to use the same interface as the business service. We click the Browse button and select the EmployeeTravelStatusService business service. Then we click Next. On the Transport Configuration screen, we can change the transport Protocol and Endpoint URI. We use the default values and click Next. The HTTP Transport Configuration screen opens. We click Next on the remaining configuration screens. On the Summary page, we click the Save button at the bottom of the page.