How-To Tutorials - Programming

Posting Reviews, Ratings, and Photos

Packt
17 Nov 2014
15 min read
In this article by Hussein Nasser, the author of the book Building Web Applications with ArcGIS, we will learn how to perform editing on services by adding three features: posting reviews, posting ratings, and uploading pictures for a restaurant.

Configuring the enterprise geodatabase

Unfortunately, the editing process requires some additional configuration, and the current service won't do the trick. The reason is that the service uses a local database file, while editing GIS data on the web requires either an enterprise database running on a database management server such as SQL Server or Oracle, or an ArcGIS Online feature service. Setting up the enterprise geodatabase server is out of the scope of this book; however, you can grab my book Learning ArcGIS Geodatabase, Packt Publishing, and walk through a step-by-step guide to setting up your own enterprise geodatabase with Microsoft SQL Server Express. If you have an existing enterprise geodatabase server, you can use it. I will be using SQL Server Express 2012 SP1.

Connecting to the geodatabase

First, we need to establish a connection to the enterprise server. For that we will use a username that has full editing capabilities. I will be using the system administrator user, sa, for SQL Server. Note that the sa user has full access to the database; we have used it in this book for simplicity, to avoid assigning privileges. In an ideal situation, you would create a user and assign it the correct permissions to read from or write to the relevant tables. My book Learning ArcGIS Geodatabase, Packt Publishing, addresses all of these issues thoroughly. Follow these steps to establish a connection to your enterprise geodatabase:

1. Open ArcCatalog and expand Database Connections in the Catalog Tree panel.
2. Double-click on Add Database Connection.
3. In the Database Platform option, select SQL Server (or your database provider).
4. Type the name of the server hosting the database; in my case, it is the same server, arcgismachine.
5. Select Database authentication in Authentication Type and provide the database credentials for the sa user or any user who has administrator privileges on the database. You can use the SDE user as well.
6. Select your database from the drop-down list and click on OK.
7. Rename the connection to sa@arcgismachine. We are going to reference this later in the article. Don't close ArcCatalog yet; we will be needing it.

You can learn more about ArcGIS database connections and how to create them against different databases at http://qr.net/arcgisdb.

Copying the Bestaurants data to the server

Now that we have our connection ready, we need to copy the Bestaurants data into our new database. Follow these steps:

1. From Catalog Tree, right-click on Folder Connections and select Connect to Folder.
2. Browse to C:\2955OT, the folder we created in the book Building Web Applications with ArcGIS, and click on OK. This will allow us to access our Bestaurants data.
3. From Folder Connections, expand C:\2955OT and browse to 2955OT_01_Files\Bestaurants.gdb.
4. Click on Bestaurants.gdb, use the Shift key to select Belize_Landbase and Food_and_Drinks, then right-click and select Copy.
5. Double-click on the sa@arcgismachine connection to open it.
6. Right-click on the connection and select Paste. You will be prompted with a dialog box showing what will be copied. Note that related data is also imported.
This pastes the data into our new database. After the copying is complete, close ArcCatalog.

Publishing the feature service

Our old service won't work for this article, because it points to a local database, which does not support editing on ArcGIS for Server. That is why we migrated the Bestaurants data to the enterprise geodatabase. It is time to publish a brand new service; it will look the same but will behave differently. First, we need to open our Belize.mxd map document and point it at the new database. Second, we will register the database with ArcGIS for Server, and finally, we will publish the service.

Setting the source to the enterprise geodatabase

In order to publish the new service, we first have to create a map document that points to the enterprise geodatabase. Follow these steps to do so:

1. Browse to and open 2955OT_05_Files\Belize.mxd with ArcMap. You can simply double-click on the file.
2. Next, we set the source of our layers from the Table of Contents in ArcMap. Click on List by Source.
3. Double-click on the Food_and_Drinks layer to open the layer properties.
4. In the Layer Properties window, click on Set Data Source.
5. From the Data Source dialog, browse to the sa@arcgismachine connection, select the sdedb.DBO.Food_and_Drinks object, and then click on Add.
6. Click on OK to apply the changes.
7. Do the same for the rest of the objects, Landbase and VENUES_REVIEW, selecting the matching objects in the destination connection. The final source list should have nothing pointing to the local data. Keep ArcMap open for the next step.

You can also modify the data sources in an ArcMap document using ArcCatalog. In ArcCatalog, navigate to the map document in Catalog Tree, right-click on it, and choose the Set Data Sources option from the pop-up menu. This allows you to set the data source for individual layers or for all layers at once.

Publishing the map document

Now that we have our map document ready, it is time to publish it. However, we still need to perform one more step: database registration. Registering the database lets ArcGIS for Server access the enterprise geodatabase directly, which makes the data ready for editing. Follow these steps to publish the map document:

1. From the File menu, point to Share As and click on Service.
2. Select Overwrite an existing service, because we want to replace our old Bestaurants service. Click on Next.
3. Select the Bestaurants service and click on Continue.
4. From the Service Editor window, click on Capabilities. Check the Feature Access capability to enable editing on this service.
5. Click on Analyze. This will show an error that we have to fix: Feature service requires a registered database. This error can be solved by right-clicking on it and selecting Show Data Store Registration Page.
6. In the Data Store window, click on the plus sign to add a new database.
7. In the Register Database window, enter Bestaurants in the Name textbox.
8. Click on Import and select the sa@arcgismachine connection. Make sure the Same as publisher database connection option is checked.
9. Click on OK in the Register Database window and in the Data Store window.
10. Click on Analyze again; this time you shouldn't get any errors. Go ahead and publish your service. Close ArcMap.

You can read more about registering databases at http://qr.net/registerdb.
Testing the web application with the new service

We have updated the Bestaurants service, but the existing application should still work as before. To confirm, run your application at http://arcgismachine/mybestaurants.html. You can find the latest code at 2955OT_05_Files\Code\bestaurants01_beforeediting.html; copy it to C:\inetpub\wwwroot and rename it accordingly.

Adding ArcGIS's editing capabilities

Currently, the food and drinks layer is fetched as a read-only map service located at http://arcgismachine:6080/arcgis/rest/services/Bestaurants/MapServer/0. The only difference from before is that it now points to the enterprise geodatabase. You can see the following code in mybestaurants.html, which confirms that:

    //load the food and drinks layer into an object
    lyr_foodanddrinks = new esri.layers.FeatureLayer(
      "http://arcgismachine:6080/arcgis/rest/services/Bestaurants/MapServer/0",
      { outFields: ["*"] });

We won't be able to edit through a MapServer layer; we have to use the FeatureServer endpoint instead. We will show how to change this in the next section.

Adding or updating records in a service with the ArcGIS JavaScript API is simple. We only need to use the applyEdits method on the FeatureLayer object. This method accepts a record and some parameters for the response; what we are interested in is inserting a record. The following code shows how to prepare a record with three fields for this function:

    var newrecord = {
      attributes: {
        FIELD1: VALUE1,
        FIELD2: VALUE2,
        FIELD3: VALUE3
      }
    };

For instance, if I want to create a new record that has a rating and a review, I populate them as follows:

    var newrecord = {
      attributes: {
        REVIEW: "This is a sample review",
        RATING: 2,
        USER: "Hussein"
      }
    };

To add this record, we simply call applyEdits on the corresponding layer and pass the newrecord object, as shown in the following code snippet:

    var layer = new esri.layers.FeatureLayer("URL");
    layer.applyEdits([newrecord], null, null, null, null);

Posting reviews and ratings

The first change we have to make is to point our food and drinks layer to its feature server instead of the map server, to allow editing. This can be achieved by simply replacing MapServer with FeatureServer in the URL: http://arcgismachine:6080/arcgis/rest/services/Bestaurants/FeatureServer/0

Follow these steps to perform the first change towards editing our service:

1. Edit mybestaurants.html.
2. Find the food and drinks layer initialization and point it to the feature service instead, using the following code snippet:

    //load the food and drinks layer into an object
    lyr_foodanddrinks = new esri.layers.FeatureLayer(
      "http://arcgismachine:6080/arcgis/rest/services/Bestaurants/FeatureServer/0",
      { outFields: ["*"] });

The review and rating fields can be found in the VENUES_REVIEW table, so we can add a single record that holds a review and a rating and send it to the service. However, we first need to prepare the controls that will eventually populate and add a record to the reviews table. Let's modify the ShowResults function of our query so that each restaurant shows two textboxes, one for the review and one for the rating. We will also add a button so that we can call the addnewreview() function that will add the review.
Each control will be identified by the object ID of the restaurant, as shown in the following code:

    //display the rating
    resulthtml = resulthtml + "<b>Rating:</b> " + record.attributes["RATING"];
    //create a placeholder for each review to be populated later
    resulthtml = resulthtml + "<div id = 'review" + record.attributes["OBJECTID"] + "'></div>";
    //create a placeholder for each attachment picture to be populated later
    resulthtml = resulthtml + "<div id = 'picture" + record.attributes["OBJECTID"] + "'></div>";
    //create a text box for the review, marked with the objectid
    resulthtml = resulthtml + "<br>Review: <input type = 'text' id = 'txtreview" + record.attributes["OBJECTID"] + "'>";
    //another one for the rating
    resulthtml = resulthtml + "<br>Rating: <input type = 'text' id = 'txtrating" + record.attributes["OBJECTID"] + "'>";
    //and a button to call the function addnewreview
    resulthtml = resulthtml + "<br><input type = 'button' value = 'Add' onclick = 'addnewreview(" + record.attributes["OBJECTID"] + ")'>";

Now we need to write the addnewreview function. This function accepts an object ID and adds a record matching that object to the reviews table. I have declared three empty variables (object ID, review, and rating) and prepared the template for writing a new record. I have also created a feature layer for our VENUES_REVIEW table:

    function addnewreview(oid) {
      var objectid;
      var review;
      var rating;
      var newReview = {
        attributes: {
          VENUE_OBJECTID: objectid,
          REVIEW: review,
          RATING: rating
        }
      };
      //open the review table
      var reviewtable = new esri.layers.FeatureLayer("http://arcgismachine:6080/arcgis/rest/services/Bestaurants/FeatureServer/2");
      //apply edits and pass the review record
      reviewtable.applyEdits([newReview], null, null, null, null);
    }

You might have guessed how to obtain the review, rating, and object ID. The object ID is passed in, so that is easy. The review can be obtained by looking up the txtreview element suffixed with the object ID, and a similar lookup works for the rating. Let's also add a message to see whether things went fine:

    function addnewreview(oid) {
      var objectid = oid;
      var review = document.getElementById('txtreview' + oid).value;
      var rating = document.getElementById('txtrating' + oid).value;
      var newReview = {
        attributes: {
          VENUE_OBJECTID: objectid,
          REVIEW: review,
          RATING: rating
        }
      };
      //open the review table
      var reviewtable = new esri.layers.FeatureLayer("http://arcgismachine:6080/arcgis/rest/services/Bestaurants/FeatureServer/2");
      //apply edits and pass the review record
      reviewtable.applyEdits([newReview], null, null, null, null);
      alert("Review has been added");
    }

You can also add the user to your record in a similar way:

    var newReview = {
      attributes: {
        VENUE_OBJECTID: objectid,
        REVIEW: review,
        USER_: "Hussein",
        RATING: rating
      }
    };

It is time to save and run our new application. Do a search on Fern, write a review, add a rating, and then click on Add. This should add a record to Fern Diner. You can always check from ArcCatalog whether your record was added. You can find the latest code at 2955OT_05_Files\Code\bestaurants02_addreviews.html.

Uploading pictures

Uploading attachments to a service can be achieved by calling the addAttachment method on the feature layer object. However, we have to make some changes to our ShowResults function to ask the user to browse for a file. For that, we will need the file HTML input element, encapsulated in a form tagged with the object ID of the restaurant we want to upload pictures for.
The file input must be named attachment so that the addAttachment method can find it. Follow these steps to add the picture upload logic:

1. Edit the mybestaurants.html file and add the following code to your ShowResults function:

    //browse for a picture for this restaurant
    resulthtml = resulthtml + "<form id = 'frm" + record.attributes["OBJECTID"] + "'><input type = 'file' name = 'attachment'/></form>";
    //and a button to call the function addnewreview
    resulthtml = resulthtml + "<br><input type = 'button' value = 'Add' onclick = 'addnewreview(" + record.attributes["OBJECTID"] + ")'>";
    //new line
    resulthtml = resulthtml + "<br><br>";

2. For simplicity, we will make it so that when the user clicks on Add, the attachment is added along with the review. The addAttachment method takes the object ID of the restaurant you want to upload the picture to, and a form HTML element that contains the file element named attachment:

    function addnewreview(oid) {
      var objectid = oid;
      var review = document.getElementById('txtreview' + oid).value;
      var rating = document.getElementById('txtrating' + oid).value;
      var newReview = {
        attributes: {
          VENUE_OBJECTID: objectid,
          USER_: "Hussein",
          REVIEW: review,
          RATING: rating
        }
      };
      //open the review table
      var reviewtable = new esri.layers.FeatureLayer("http://arcgismachine:6080/arcgis/rest/services/Bestaurants/FeatureServer/2");
      //apply edits and pass the review record
      reviewtable.applyEdits([newReview], null, null, null, null);
      //add the attachment
      lyr_foodanddrinks.addAttachment(oid, document.getElementById("frm" + oid), null, null);
      alert("Review and picture have been added");
    }

3. Save and run the code. Search for Haulze Restaurant. This one doesn't have an attachment, so go ahead, write a review, and upload a picture. Run the query again and you should see your picture.

The final code can be found at 2955OT_05_Files\Code\bestaurants03_uploadpicture.html.

The final touches

This is where we add some web enhancements to the application, the kind of thing you as a web developer can do. We will update the status bar, make part of the page scrollable, change the rating into star icons, and do some fine-tuning of the interface. I have already implemented these changes in the final version of the application, which can be found at 2955OT_05_Files\Code\bestaurantsfinal.html. These changes don't have anything to do with ArcGIS development; they are pure HTML and JavaScript.

Summary

In this article, we put the final touches on the Bestaurants ArcGIS web application by adding ratings, reviews, and picture uploads to the ArcGIS service. We learned that editing can only be done through feature services whose data is hosted in an enterprise geodatabase, which is why we had to set one up. We copied the data to a new server and modified the source document to point to that server. Then we republished the service with the feature-access capability to enable editing. Finally, we added the necessary JavaScript API code to write reviews and upload pictures from the website. With these features, we have completed the Bestaurants project requirements. This is the end of this book, but it is only the beginning of the great applications you will develop using the skill set you acquired over the course of this journey.
You can now confidently explore the more advanced ArcGIS JavaScript APIs on the Esri website (resources.arcgis.com), which is a good place to start. There are hundreds of methods and functions in the API; keep in mind, though, that you only need to learn what your project actually requires. We managed to complete an entire website with a handful of APIs. Take your next project, analyze the requirements, see which APIs you need, and learn those. That is my advice. My inbox is always open for suggestions, thoughts, and of course questions.


Configuring Distributed Rails Applications with Chef: Part 2

Rahmal Conda
07 Nov 2014
9 min read
In my Part 1 post, I gave you the lowdown on Chef. I covered what it's for and what it's capable of. Now let's get into some real code and take a look at how you install and run Chef Solo and Chef Server.

What we want to accomplish

First let's make a list of some goals. What are we trying to get out of deploying and provisioning with Chef?

- Once we have it set up, provisioning a new server should be simple: no more than a few commands.
- We want it to be platform-agnostic, so we can deploy to any VPS provider we choose with the same scripts.
- We want it to be easy to follow and understand. Any new developer coming along later should have no problem figuring out what's going on.
- We want the server to be nearly automated. It should take care of itself as much as possible, and alert us if anything goes wrong.

Before we start, let's decide on a stack. You should feel free to run any stack you choose; this is just what I'm using for this post:

- Ubuntu 12.04 LTS
- RVM
- Ruby 1.9.3+
- Rails 3.2+
- Postgres 9.3+
- Redis 3.1+
- Chef
- Git

Now that we've got that out of the way, let's get started!

Step 1: Install the tools

First, make sure that all of the packages we download to our VPS are up to date:

    ~$ sudo apt-get update

Next, we'll install RVM (Ruby Version Manager). RVM is a great tool for installing Ruby; it allows you to use several versions of Ruby on one server. Don't get ahead of yourself though; at this point, we only care about one version. To install RVM, we'll need curl:

    ~$ sudo apt-get install curl

We also need to install Git. Git is an open source distributed version control system, primarily used to maintain software projects. (If you didn't know that much, you're probably reading the wrong post. But I digress!)

    ~$ sudo apt-get install git

Now install RVM with this curl command:

    ~$ curl -sSL https://get.rvm.io | bash -s stable

You'll need to source RVM (you can add this to your bash profile):

    ~$ source ~/.rvm/scripts/rvm

In order for it to work, RVM has some dependencies of its own that need to be installed. To install them automatically, use the following command:

    ~$ rvm requirements

Once we have RVM set up, installing Ruby is simple:

    ~$ rvm install 1.9.3

Ruby 1.9.3 is now installed! Since we'll be accessing it through a tool that can potentially have a variety of Ruby versions loaded, we need to tell the system to use this version as the default:

    ~$ rvm use 1.9.3 --default

Next we'll make sure that we can install any Ruby gem we need into this new environment. We'll stick with RVM for installing gems as well; this ensures they get loaded into our Ruby version properly. Run this command:

    ~$ rvm rubygems current

Don't worry if it seems like you're setting up a lot of things manually now. Once Chef is set up, all of this will be part of your cookbooks, so you'll only have to do it once.

Step 2: Install Chef and friends

First, we'll start off by cloning the Opscode Chef repository:

    ~$ git clone git://github.com/opscode/chef-repo.git chef

With Ruby and RubyGems set up, we can install some gems! We'll start with a gem called Librarian-Chef. Librarian-Chef is sort of a Rails Bundler for Chef cookbooks: it downloads and manages the cookbooks that you specify in a Cheffile. Many useful cookbooks are published by different sources within the Chef community, and you'll want to make use of them as you build out your own Chef environment.
Install it with:

    ~$ gem install librarian-chef

Initialize Librarian in your Chef repository with these commands:

    ~$ cd chef
    ~/chef$ librarian-chef init

This creates a Cheffile in your Chef repository. All of your dependencies should be specified in that file. To deploy the stack we just built, your Cheffile should look like this:

    site 'http://community.opscode.com/api/v1'
    cookbook 'sudo'
    cookbook 'apt'
    cookbook 'user'
    cookbook 'git'
    cookbook 'rvm'
    cookbook 'postgresql'
    cookbook 'rails'

Now use Librarian to pull these community cookbooks:

    ~/chef$ librarian-chef install

Librarian will pull the cookbooks you specify, along with their dependencies, into the cookbooks folder and create a Cheffile.lock file. Commit both Cheffile and Cheffile.lock to your repo:

    ~/chef$ git add Cheffile Cheffile.lock
    ~/chef$ git commit -m "updated cookbooks list"

There is no need to commit the cookbooks folder, because you can always run the install command and Librarian will pull the same group of cookbooks at the correct versions. You should not touch the cookbooks folder; let Librarian manage it for you, because it will overwrite any changes you make inside that folder. If you want to manually create and manage cookbooks outside of Librarian, add a new folder, such as local-cookbooks, for that purpose.

Step 3: Cooking up somethin' good!

Now that you see how to get the cookbooks, you can create your roles. You use roles to determine what role a server instance plays in your server stack, and you specify what that role needs. For instance, your database server role would most likely need a PostgreSQL server (or your DB of choice), a DB client, and user authorization and management, while your web server role would need Apache (or Nginx), Unicorn, Passenger, and so on. You can also create base roles for the basic provisioning that all of your servers share. (A sketch of a more specialized role appears at the end of this step.) Given what we've installed so far, our basic configuration might look something like this:

    name "base"
    description "Basic configuration for all nodes"
    run_list(
      'recipe[git]',
      'recipe[sudo]',
      'recipe[apt]',
      'recipe[rvm::user]',
      'recipe[postgresql::client]'
    )
    override_attributes(
      authorization: {
        sudo: {
          users: ['ubuntu'],
          passwordless: true
        }
      },
      rvm: {
        rubies: ['ruby-1.9.3-p125'],
        default_ruby: 'ruby-1.9.3-p125',
        global_gems: ['bundler', 'rake']
      }
    )

Deploying locally with Chef Solo

Chef Solo is a Ruby gem that runs a self-contained Chef instance. Solo is great for running your recipes locally to test them, or for provisioning development machines. If you don't have a hosted Chef Server set up, you can use Chef Solo to set up remote servers too; if your architecture is still pretty small, this might be just what you need. We need to create a Chef configuration file, so we'll call it deploy.rb:

    root  = File.absolute_path(File.dirname(__FILE__))
    books = File.join(root, 'cookbooks')
    roles = File.join(root, 'roles')

    file_cache_path root
    cookbook_path books
    role_path roles

We'll also need a JSON-formatted configuration file. Let's call this one deploy.json:

    { "run_list": ["role[base]"] }

Now run Chef with this command:

    ~/chef$ sudo chef-solo -j deploy.json -c deploy.rb

Deploying to a new Amazon EC2 instance

You'll need the Chef server for this step. First you need to create a new VPS instance for your Chef server and configure it with a static IP or a domain name, if possible. We won't go through that here, but you can find instructions for setting up a server instance on EC2 with a public IP and configuring a domain name in the documentation for your VPS.
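Before we move on to the server itself, here is the sketch of a more specialized role promised earlier. It layers a database role on top of "base"; the recipe and attribute names are illustrative assumptions rather than part of the original walkthrough, so check them against the postgresql cookbook version your Cheffile actually pulls in:

    # roles/database.rb -- hypothetical example, adjust names to your cookbooks
    name "database"
    description "PostgreSQL database server"
    run_list(
      'role[base]',                    # reuse everything the base role sets up
      'recipe[postgresql::server]'     # assumed recipe name from the postgresql cookbook
    )
    override_attributes(
      postgresql: {
        config: {
          listen_addresses: '*',       # assumed attribute keys; verify against the cookbook
          port: 5432
        }
      }
    )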
Once you have your server instance set up, SSH onto the instance and install Chef server. Start by downloading the deb package using the wget tool:

    ~$ wget https://opscode-omnibus-packages.s3.amazonaws.com/ubuntu/12.04/x86_64/chef-server_11.0.10-1.ubuntu.12.04_amd64.deb

Once the deb package has downloaded, install Chef server like so:

    ~$ sudo dpkg -i chef-server*

When it completes, it will print an instruction telling you to run the next command, which configures the service for your specific machine. This command sets up everything automatically:

    ~$ sudo chef-server-ctl reconfigure

Once the configuration step is complete, the Chef server should be up and running. You can access the web interface immediately by browsing to your server's domain name or IP address. Now that you've got Chef up and running, install the knife EC2 plugin. This will also install the knife gem as a dependency:

    ~$ gem install knife-ec2

You now have everything you need! So create another VPS to provision with Chef. Once you do that, you'll need to copy your SSH keys over:

    ~$ ssh-copy-id root@yourserverip

You can finally provision your server! Start by installing Chef on your new machine:

    ~$ knife solo prepare root@yourserverip

This will generate a file, nodes/yourserverip.json. You need to edit this file to add your own environment settings. For instance, you will need to add a username and password for monit, and a password for postgresql. Run the openssl command again to create a password for postgresql, take the generated password, and add it to the file. Now you can finally provision your server! Start the Chef run:

    ~$ knife solo cook root@yourserverip

Now just sit back, relax, and watch Chef cook up your tasty app server. This process may take a while, but once it completes, you'll have a server ready for Rails, Postgres, and Redis! I hope these posts helped you get an idea of how much Chef can simplify your life and your deployments. Here are a few links with more information and references about Chef:

- Chef community site: http://cookbooks.opscode.com/
- Chef Wiki: https://wiki.opscode.com/display/chef/Home
- Chef Supermarket: https://community.opscode.com/cookbooks?utf8=%E2%9C%93&q=user
- Chef cookbooks for busy Ruby developers: http://teohm.com/blog/2013/04/17/chef-cookbooks-for-busy-ruby-developers/
- Deploying Rails apps with Chef and Capistrano: http://www.slideshare.net/SmartLogic/guided-exploration-deploying-rails-apps-with-chef-and-capistrano

About the author

Rahmal Conda is a software development professional and Ruby aficionado from Chicago. After 10 years working in web and application development, he moved out to the Bay Area, eager to join the startup scene. He had a taste of the startup life in Chicago working at a small personal finance company; after that, he knew it was the life he had been looking for, so he moved his family out west. Since then he's made a name for himself in the social space at some high-profile Silicon Valley startups. Right now he's one of the co-founders and Platform Architect of Boxes, a mobile marketplace for the world's hidden treasures.


Configuring Distributed Rails Applications with Chef: Part 1

Rahmal Conda
31 Oct 2014
4 min read
Since the advent of Rails (and Ruby by extension), in the period between 2005 and 2010, Rails went from a niche web application framework to the center of a robust web application platform. To do this it needed more than Ruby and a few complementary gems. Anyone who has ever deployed a Rails application into a production environment knows that Rails doesn't run in a vacuum. Rails needs a web server in front of it to help manage requests, like Apache or Nginx; oops, you'll need Unicorn or Passenger too. Almost all Rails apps are backed by some sort of data persistence layer. Usually that is a relational database; more and more it's a NoSQL DB like MongoDB. Depending on the application, you're probably going to deploy a caching strategy at some point: Memcached, Redis, the list goes on. What about background jobs? You'll need another server instance for that too, and not just one either: high-availability systems need to be redundant. If you're lucky enough to get a lot of traffic, you'll need a way to scale all of this.

Why Chef?

Chances are that you're managing all of this manually. Don't feel bad; everyone starts out that way. But as you grow, how do you manage all of this without going insane? Most Rails developers start off with Capistrano, which is a great choice. Capistrano is a remote server automation tool, used most often as a deployment tool for Rails. For the most part it's a great solution for managing the multiple servers that make up your Rails stack. It's only when your architecture reaches a certain size that I'd recommend choosing Chef over Capistrano. But really, there's no reason to choose one over the other, since they actually work pretty well together and they are both similar when it comes to deployment. Where Chef excels, however, is when you need to provision multiple servers with different roles and changing software stacks. This is what I'm going to focus on in this post. But let's introduce Chef first.

What is Chef anyway?

Basically, Chef is a Ruby-based configuration management engine. It is a software configuration management tool used for provisioning servers for certain roles within a platform stack and deploying applications to those servers. It automates server configuration and integration into your infrastructure. You define your infrastructure in configuration files written in Chef's Ruby DSL, and Chef takes care of setting up individual machines and linking them together.

Chef server

You set up one of your server instances (virtual or otherwise) as the server, and all your other instances are clients that communicate with the Chef server via REST over HTTPS. The server is an application that stores cookbooks for your nodes.

Recipes and cookbooks

Recipes are files that contain sets of instructions written in Chef's Ruby DSL. These instructions perform some kind of procedure, usually installing software and configuring some service. Recipes are bundled together with configuration file templates, resources, and helper scripts as cookbooks. Cookbooks generally correspond to a specific server configuration; for instance, a Postgres cookbook might contain a recipe for the Postgres server, a recipe for the Postgres client, maybe PostGIS, and some configuration files describing how the DB instance should be provisioned.

Chef Solo

For stacks that don't necessarily need a full Chef server setup, but still use cookbooks to set up Rails and DB servers, there's Chef Solo.
Chef Solo is a local, standalone Chef application that can be used to remotely deploy servers and applications.

Wait, where is the code?

In Part 2 of this post I'm going to walk you through setting up a Rails application with Chef Solo, and then I'll expand to show a full Chef server configuration management setup. While Chef can be used for many different application stacks, I'm going to focus on Rails configuration and deployment, provisioning and deploying the entire stack. See you next time!

About the author

Rahmal Conda is a software development professional and Ruby aficionado from Chicago. After 10 years working in web and application development, he moved out to the Bay Area, eager to join the startup scene. He had a taste of the startup life in Chicago working at a small personal finance company; after that, he knew it was the life he had been looking for, so he moved his family out west. Since then he's made a name for himself in the social space at some high-profile Silicon Valley startups. Right now he's one of the co-founders and Platform Architect of Boxes, a mobile marketplace for the world's hidden treasures.


Execution of Test Plans

Packt
28 Oct 2014
23 min read
In this article by Bayo Erinle, author of JMeter Cookbook, we will cover the following recipes:

- Using the View Results Tree listener
- Using the Aggregate Report listener
- Debugging with Debug Sampler
- Using Constant Throughput Timer
- Using the JSR223 postprocessor
- Analyzing Response Times Over Time
- Analyzing transactions per second

One of the critical aspects of performance testing is knowing the right tools to use to attain your desired targets. Even when you settle on a tool, it is helpful to understand its features, component sets, and extensions, and to apply them appropriately when needed. In this article, we will go over some helpful components that will aid you in recording robust and realistic test plans while effectively analyzing reported results. We will also cover some components to help you debug test plans.

Using the View Results Tree listener

One of the most often used listeners in JMeter is the View Results Tree listener. This listener shows a tree of all sample responses, giving you quick navigation to any sample's response time, response codes, response content, and so on. The component offers several ways to view the response data, some of which allow you to debug CSS/jQuery selectors, regular expressions, and XPath queries, among other things. In addition, the component offers the ability to save responses to a file, in case you need to store them for offline viewing or run some other process on them. Along with the various bundled testers, the component provides search functionality that allows you to quickly search the responses of relevant items.

How to do it…

In this recipe, we will cover how to add the View Results Tree listener to a test plan and then use its built-in testers to test the response and derive expressions that we can use in postprocessor components. Perform the following steps:

1. Launch JMeter.
2. Add Thread Group to the test plan by navigating to Test Plan | Add | Threads (Users) | Thread Group.
3. Add HTTP Request to the thread group by navigating to Thread Group | Add | Sampler | HTTP Request.
4. Fill in the following details:
   - Server Name or IP: dailyjs.com
5. Add the View Results Tree listener to the test plan by navigating to Test Plan | Add | Listener | View Results Tree.
6. Save and run the test plan.
7. Once done, navigate to the View Results Tree component and click on the Response Data tab. Observe some of the built-in renders.
8. Switch to the HTML render view by clicking on the dropdown and use the search textbox to search for any word on the page.
9. Switch to the HTML (download resources) render view by clicking on the dropdown.
10. Switch to the XML render view by clicking on the dropdown. Notice that the entire HTML DOM structure is presented as XML node elements.
11. Switch to the RegExp Tester render view by clicking on the dropdown and try out some regular expression queries.
12. Switch to the XPath Query Tester render view and try out some XPath queries.
13. Switch to the CSS/jQuery Tester render view and try out some jQuery queries, for example, selecting all links inside divs marked with the class preview (Selector: div.preview a, Attribute: href, CSS/jQuery Implementation: JSOUP).

How it works…

As your test plans execute, the View Results Tree listener reports each sampler in your test plan individually.
The Sampler result tab of the component gives you a summarized view of the request and response, including information such as load time, latency, response headers, body content sizes, response code and message, and so on. The Request tab shows the actual request issued by the sampler, which could be any of the request types the server can fulfill (for example, GET, POST, PUT, DELETE, and so on), along with details of the request headers. Finally, the Response data tab gives the rendered view of the response received back from the server. The component includes several built-in renders along with tester components (CSS/jQuery, RegExp, and XPath) that allow us to test and come up with the right expressions or queries to use in postprocessor components within our test plans. This is a huge time saver, as it means we don't have to run the same tests repeatedly to nail down such expressions.

There's more…

As with most things bundled with JMeter, additional view renders can be added to the View Results Tree component. The defaults included are Document, HTML, HTML (download resources), JSON, Text, and XML. Should none of these suit your needs, you can create additional ones by implementing the org.apache.jmeter.visualizers.ResultRenderer interface and/or extending the org.apache.jmeter.visualizers.SamplerResultTab abstract class, bundling the compiled classes as a JAR file, and placing it in the $JMETER_HOME/lib/ext directory to make it available to JMeter.

The View Results Tree listener consumes a lot of memory and CPU resources, and should not be used during load testing. Use it only to debug and validate test plans.

See also

- The Debugging with Debug Sampler recipe
- The detailed component reference for the View Results Tree listener: http://jmeter.apache.org/usermanual/component_reference.html#View_Results_Tree

Using the Aggregate Report listener

Another often used listener in JMeter is the Aggregate Report listener. This listener creates a row for each uniquely named request in the test plan. Each row gives a summarized view of useful information, including the request count, average, median, min, max, 90% line, error rate, throughput (requests per second), and KB/sec.

The 90% Line column is particularly worth paying close attention to as you execute your tests. This figure gives you the time within which the majority of threads/users complete a particular request, measured in milliseconds. Higher numbers here indicate slow requests and/or slow components within the application under test. Equally important is the Error % column, which reports the failure rate of each sampled request. It is reasonable to have some level of failure when exercising test runs, but too high a number indicates either errors in the scripts or problems with certain components of the application under test. Finally, of interest to stakeholders might be the number of requests per second, which the Throughput column reports. The throughput values are approximate and let you know just how many requests per second the server is able to handle.

How to do it…

In this recipe, we will cover how to add an Aggregate Report listener to a test plan and then see the summarized view of our execution:

1. Launch JMeter.
2. Open the ch7_shoutbox.jmx script bundled with the code samples. Alternatively, you can download it from https://github.com/jmeter-cookbook/bundled-code/scripts/ch7/ch7_shoutbox.jmx.
3. Add the Aggregate Report listener to Thread Group by navigating to Thread Group | Add | Listener | Aggregate Report.
4. Save and run the test plan.
5. Observe the real-time summary of results in the listener as the test proceeds.

How it works…

As your test plans execute, the Aggregate Report listener reports each sampler in your test plan on a separate row, and each row is packed with useful information. The Label column reflects the sampler name, # Samples gives a count of each sampler, and Average, Median, Min, and Max give you the respective times for each sampler. As mentioned earlier, you should pay close attention to the 90% Line and Error % columns; they can help you quickly pinpoint problematic components within the application under test and/or the scripts. The Throughput column gives an idea of the responsiveness of the application under test and/or server, and can also be indicative of the capacity of the underlying server that the application runs on.

See also

- http://jmeter.apache.org/usermanual/component_reference.html#Summary_Report

Debugging with Debug Sampler

Often, in the process of recording a new test plan or modifying an existing one, you will need to debug the scripts to finally get your desired results. Without such capabilities, the process becomes a time-consuming exercise in trial and error. Debug Sampler is a nifty little component that generates a sample containing the values of all JMeter variables and properties. The generated values can then be seen in the Response data tab of the View Results Tree listener; as such, to use this component, you need a View Results Tree listener added to your test plan. This component is especially useful when dealing with postprocessor components, as it helps to verify that the expected values were extracted during the test run.

How to do it…

In this recipe, we will see how we can use Debug Sampler to debug a postprocessor in our test plans. Perform the following steps:

1. Launch JMeter.
2. Open the prerecorded script ch7_debug_sampler.jmx bundled with the book. Alternatively, you can download it from http://git.io/debug_sampler.
3. Add Debug Sampler to the test Thread Group by navigating to Thread Group | Add | Sampler | Debug Sampler.
4. Save and run the test.
5. Navigate to the View Results Tree listener component.
6. Switch to RegExp Tester by clicking on the dropdown.
7. Observe the response data of the Get All Requests sampler. What we want is a regular expression that will help us extract the ID of entries within this response. After a few attempts, we settle on "id":(\d+).
8. Enable all the currently disabled samplers, that is, Request/Create Holiday Request, Modify Holiday, Get All Requests, and Delete Holiday Request. You can achieve this by selecting all the disabled components, right-clicking on them, and clicking on Enable.
9. Add the Regular Expression Extractor postprocessor to the Request/Create Holiday Request sampler by navigating to Request/Create Holiday Request | Add | Post Processors | Regular Expression Extractor.
10. Fill in the following details:
    - Reference Name: id
    - Regular Expression: "id":(\d+)
    - Template: $1$
    - Match No.: 0
    - Default Value: NOT_FOUND
11. Save and rerun the test.
12. Observe the ID of the newly created holiday request and whether it was correctly extracted and reported in Debug Sampler.
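To make the extraction concrete, the kind of response body this regular expression targets looks roughly like the following. The field names and values are illustrative assumptions, not the sample application's real payload:

    {"id":12,"employee":"jsmith","status":"PENDING"}

Against a body like this, "id":(\d+) matches "id":12, and the capture group (\d+) stores 12 under the Reference Name configured above, so later samplers can reference it as ${id}.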
How it works…

Our goal was to test a REST API endpoint that allows us to list, modify, and delete existing resources or create new ones. When we create a new resource, its identifier (ID) is autogenerated by the server. To perform any other operations on the newly created resource, we need to grab its autogenerated ID, store it in a JMeter variable, and use it further down the execution chain. In step 7, we observed the format of the server response when we executed the Get All Requests sampler, and with the aid of RegExp Tester we nailed down the right regular expression for extracting the ID of a resource, that is, "id":(\d+). Armed with this information, we added a Regular Expression Extractor postprocessor component to the Request/Create Holiday Request sampler and used the derived expression to get the ID of the newly created resource. We then used the stored ID to modify and delete the resource further down the execution chain. After test completion, with the help of Debug Sampler, we were able to verify that the resource ID was properly extracted by the Regular Expression Extractor component and stored in the JMeter id variable.

Using Constant Throughput Timer

While running test simulations, it is sometimes necessary to specify the throughput in terms of the number of requests per minute. This is the function of Constant Throughput Timer. This component introduces pauses into the test plan in such a way as to keep the throughput as close as possible to the specified target value. Though the name implies that it is constant, various factors affect the behavior, such as server capacity and other timers or time-consuming elements in the test plan, so the actual throughput can end up lower than the target.

How to do it…

In this recipe, we will add Constant Throughput Timer to our test plan and see how we can specify the expected throughput with it. Perform the following steps:

1. Launch JMeter.
2. Open the prerecorded script ch7_constant_throughput.jmx bundled with the book. Alternatively, you can download it from http://git.io/constant_throughput.
3. Add Constant Throughput Timer to Thread Group by navigating to Thread Group | Add | Timer | Constant Throughput Timer.
4. Fill in the following details:
   - Target throughput (in samples per minute): 200
   - Calculate Throughput based on: this thread only
5. Save and run the test plan. Allow the test to run for about 5 minutes.
6. Observe the results in the Aggregate Report listener as the test is going on.
7. Stop the test manually, as it is currently set to run forever.

How it works…

The goal of the Constant Throughput Timer component is to get your test plan samples as close as possible to a specified desired throughput. It achieves this by introducing variable pauses into the test plan in a manner that keeps the numbers as close as possible to the target. That said, throughput will be lowered if the server resources of the system under test can't handle the load, and other elements within the test plan (for example, other timers or the number of specified threads) can also affect whether the desired throughput is attained. In our recipe, we specified that the throughput rate should be calculated based on a single thread, but Constant Throughput Timer also allows throughput to be calculated based on all active threads, or on all active threads in the current thread group. Each of these settings can be used to alter the behavior of the desired throughput.
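As a quick sanity check on the numbers used in this recipe (my arithmetic, not the book's): a target of 200 samples per minute works out to 200 / 60, or roughly 3.3 requests per second, so with Calculate Throughput based on set to this thread only, each thread is paced to about one request every 60,000 / 200 = 300 ms. Because the target applies per thread in this mode, a thread group with 10 threads would aim for about 10 x 200 = 2,000 samples per minute overall, assuming the server keeps up.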
As a rule of thumb, avoid using other timers at the same time as Constant Throughput Timer, since you will not achieve the desired throughput.

See also

- The Using Throughput Shaping Timer recipe
- http://jmeter.apache.org/usermanual/component_reference.html#timers

Using the JSR223 postprocessor

The JSR223 postprocessor allows you to use precompiled scripts within test plans. The fact that the scripts are compiled before they are actually used brings a significant performance boost compared to other scripting postprocessors. It also supports a variety of programming languages, including Java, Groovy, BeanShell, JEXL, and so on, which lets us harness those languages' features within our test plans. JSR223 components can be used as preprocessors, postprocessors, and samplers, giving us more control over how elements are extracted from responses and stored as JMeter variables.

How to do it…

In this recipe, we will see how to use a JSR223 postprocessor within our test plan. We have chosen Groovy (http://groovy.codehaus.org/) as our scripting language, but any of the other supported languages will do:

1. Download the standard set of plugins from http://jmeter-plugins.org/.
2. Install the plugins by doing the following:
   - Extract the ZIP archive to a directory of your choice.
   - Copy the lib folder in the extracted directory into the $JMETER_HOME directory.
3. Download the groovy-all JAR file from http://devbucket-afriq.s3.amazonaws.com/jmeter-cookbook/groovy-all-2.3.3.jar and add it to the $JMETER_HOME/lib directory.
4. Launch JMeter.
5. Add Thread Group by navigating to Test Plan | Add | Threads (Users) | Thread Group.
6. Add Dummy Sampler to Thread Group by navigating to Thread Group | Add | Sampler | jp@gc - Dummy Sampler.
7. In the Response Data text area, add the following content:

    <records>
      <car name='HSV Maloo' make='Holden' year='2006'>
        <country>Australia</country>
        <record type='speed'>Production Pickup Truck with speed of 271kph</record>
      </car>
      <car name='P50' make='Peel' year='1962'>
        <country>Isle of Man</country>
        <record type='size'>Smallest Street-Legal Car at 99cm wide and 59 kg in weight</record>
      </car>
      <car name='Royale' make='Bugatti' year='1931'>
        <country>France</country>
        <record type='price'>Most Valuable Car at $15 million</record>
      </car>
    </records>

8. Download the Groovy script file from http://git.io/8jCXMg to any location of your choice. Alternatively, you can get it from the code sample bundle accompanying the book (ch7_jsr223.groovy).
9. Add JSR223 PostProcessor as a child of Dummy Sampler by navigating to jp@gc - Dummy Sampler | Add | Post Processors | JSR223 PostProcessor.
10. Select Groovy as the language of choice in the Language drop-down box.
11. In the File Name textbox, put the absolute path to the Groovy script file, for example, /tmp/scripts/ch7/ch7_jsr223.groovy.
12. Add the View Results Tree listener to the test plan by navigating to Test Plan | Add | Listener | View Results Tree.
13. Add Debug Sampler to Thread Group by navigating to Thread Group | Add | Sampler | Debug Sampler.
14. Save and run the test.
15. Observe the Response Data tab of Debug Sampler and see how we now have the JMeter variables car_0, car_1, and car_2, all extracted from the response data and populated by our JSR223 postprocessor component.
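The bundled ch7_jsr223.groovy script is not reproduced in the article, but a Groovy script along the following lines would produce the car_0, car_1, and car_2 variables seen in the final step above. Treat it as a sketch under the assumption that the variable naming and formatting are our own choice, not the book's exact file:

    // Groovy JSR223 postprocessor sketch.
    // 'prev' is the previous SampleResult; 'vars' is the JMeterVariables object.
    def records = new XmlSlurper().parseText(prev.getResponseDataAsString())

    // Create one JMeter variable per <car> element, e.g. car_0 = "HSV Maloo (Holden, 2006)".
    records.car.eachWithIndex { car, i ->
        vars.put("car_" + i, "${car.@name} (${car.@make}, ${car.@year})")
    }

The next section explains where the prev and vars bindings used here come from.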
How it works…

JMeter exposes certain variables to the JSR223 component, allowing it to get hold of sample details, perform logic, and store the results as JMeter variables. The exposed bindings include log, Label, FileName, Parameters, args[], ctx, vars, props, prev, sampler, and OUT. Each of these gives access to important and useful information that can be used while postprocessing sampler responses. log gives access to the logger (an instance of an Apache Commons Logging Log; see http://bit.ly/1xt5dmd), which can be used to write statements to the logfile. The Label and FileName bindings give us access to the sampler label and the script file name respectively. The Parameters and args[] bindings give us access to parameters sent to the script. The ctx binding gives access to the current thread's JMeterContext (http://bit.ly/1lM31MC). vars gives access to JMeterVariables (http://bit.ly/1o5DDBr), letting us write values into JMeter variables and so expose them to the rest of the test plan. The props binding gives us access to JMeter properties. The sampler binding gives us access to the current sampler, while OUT allows us to write to standard output, that is, System.out. Finally, prev gives access to the previous SampleResult (http://bit.ly/1rKn8Cs), from which we can get useful information such as the response data, headers, assertion results, and so on.

In our script, we made use of the prev and vars bindings. With prev, we got hold of the XML response from the sampler. Using Groovy's XmlSlurper (http://bit.ly/1AoRMnb), we were able to effortlessly process the XML response and compose the interesting bits, storing them as JMeter variables via vars. Using this technique, we can accomplish tasks that would be cumbersome with the other postprocessor elements covered in other recipes, taking full advantage of the language features of whichever scripting language we choose. In our case we used Groovy, but any other supported scripting language you are comfortable with will do as well.

See also

- http://jmeter.apache.org/api
- http://jmeter.apache.org/usermanual/component_reference.html#BSF_PostProcessor
- http://jmeter.apache.org/api/org/apache/jmeter/threads/JMeterContext.html
- http://jmeter.apache.org/api/org/apache/jmeter/threads/JMeterVariables.html
- http://jmeter.apache.org/api/org/apache/jmeter/samplers/SampleResult.html

Analyzing Response Times Over Time

An important aspect of performance testing is the response time of the application under test, so it is often useful to see response times plotted over the duration of a test run. Out of the box, JMeter comes with the Response Time Graph listener for this purpose, but it is limited and lacks some features: the ability to focus on a particular sample when viewing chart results, control over the granularity of timeline values, selectively choosing which samples appear in the resulting chart, control over whether to use relative graphs, and so on. To address all this and more, the Response Times Over Time listener extension from the JMeter plugins project comes to the rescue; it shines in the areas where the Response Time Graph falls short.

How to do it…

In this recipe, we will see how to use the Response Times Over Time listener extension in our test plan and get the response times of our samples over time.
Perform the following steps:

1. Download the standard set of plugins from http://jmeter-plugins.org/.
2. Install the plugins by doing the following:
   - Extract the ZIP archive to a directory of your choice.
   - Copy the lib folder in the extracted directory into the $JMETER_HOME directory.
3. Launch JMeter.
4. Open any of your existing prerecorded scripts or record a new one. Alternatively, you can open the ch7_response_times_over_time.jmx script accompanying the book or download it from http://git.io/response_times_over_time.
5. Add the Response Times Over Time listener to the test plan by navigating to Test Plan | Add | Listener | jp@gc - Response Times Over Time.
6. Save and execute the test plan.
7. View the resulting chart by clicking on the Response Times Over Time component. Observe the elapsed time on the x axis and the response time in milliseconds on the y axis for all samples contained in the test plan.
8. Navigate to the Rows tab and exclude some of the samples from the chart by unchecking the selection boxes next to them.
9. Switch back to the Chart tab and observe that the chart now reflects your changes, allowing you to focus on the samples of interest.
10. Switch to the Settings tab and see all the available configuration options. Change some options and repeat the test execution.

How it works…

Just as its name implies, the Response Times Over Time listener extension displays the average response time in milliseconds for each sampler in the test plan. It comes with various configuration options that allow you to customize the resulting graph to your heart's content. More importantly, it allows you to focus on specific samples in your test plan, helping you pinpoint potential bottlenecks or problematic modules within the application under test. For graphs to be more meaningful, it helps to give samples sensible, descriptive names and to increase the granularity of the elapsed time in the Settings tab if you have long-running tests. After test execution, the data behind any chart can also be exported to a CSV file for further analysis or any other use you desire. Note that any listener that charts results has some impact on performance and shouldn't be used during high-volume load testing.

Analyzing transactions per second

Sometimes we are tasked with testing backend services, application programming interfaces (APIs), or other components that do not necessarily have a graphical user interface (GUI) attached to them, unlike a classic web application. In such cases, the measure of a module's responsiveness is how many transactions per second it can withstand before slowness is observed. Transactions Per Second (TPS) is useful information for stakeholders who provide services consumed by various third-party components or other services. Good examples include the Google search engine, which can be consumed by third parties, and the Twitter and Facebook APIs, which allow developers to integrate their applications with Twitter and Facebook respectively. The Transactions Per Second listener extension from the JMeter plugins project allows us to measure transactions per second; it plots a chart of the transactions per second over the elapsed duration of the test.
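As a rough rule of thumb for interpreting this kind of chart (my own back-of-the-envelope reasoning, not from the book): in a closed-loop test with no think time, achievable throughput is bounded by roughly the number of concurrent threads divided by the average response time. For example, 20 threads against a service averaging 250 ms per transaction can drive at most about 20 / 0.25 = 80 transactions per second; if the plotted TPS flattens out well below that while response times climb, the server, not the load generator, is the limiting factor.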
How to do it…

In this recipe, we will see how to use the Transactions Per Second listener extension in our test plan and get the transactions per second for a test API service:

Download the standard set of plugins from http://jmeter-plugins.org/.
Install the plugins by doing the following:
   Extract the ZIP archive to the location of your chosen directory
   Copy the lib folder in the extracted directory into the $JMETER_HOME directory
Launch JMeter.
Open the ch7_transaction_per_sec.jmx script accompanying the book or download it from http://git.io/trans_per_sec.
Add the Transactions Per Second listener to the test plan by navigating to Test Plan | Add | Listener | jp@gc - Transactions per Second.
Save and execute the test plan.
View the resulting chart in the tab by clicking on the Transactions Per Second component. Observe the time elapsed on the x axis and the transactions/sec on the y axis for all samples contained in the test plan.
Navigate to the Rows tab and exclude some of the samples from the chart by unchecking the selection boxes next to the samples.
Switch back to the Chart tab and observe that the chart now reflects your changes, allowing you to focus on the samples of interest.
Switch to the Settings tab and see all the available configuration options. Change some options and repeat the test execution.

How it works…

The Transactions Per Second listener extension displays the transactions per second for each sample in the test plan by counting the number of successfully completed transactions each second. It comes with various configuration options that allow you to customize the resulting graph and focus on specific samples of interest in your test plan, helping you pinpoint impending bottlenecks within the application under test. It is helpful to give your samples sensible, descriptive names to make better sense of the resulting graphs and data points. This is shown in the following screenshot:

Analyzing Transactions per Second

Summary

In this article, you learned how to build test plans using the steps mentioned in these recipes. Furthermore, you saw how to debug and analyze the results of a test plan after building it.

Resources for Article: Further resources on this subject: Functional Testing with JMeter [article] Performance Testing Fundamentals [article] Common performance issues [article]

Understanding the context of BDD

Packt
22 Oct 2014
6 min read
In this article by Sujoy Acharya, author of Mockito Essentials, you will learn about the BDD concepts and BDD examples. You will also learn about how BDD can help you minimize project failure risks. (For more resources related to this topic, see here.) This section of the article deals with the software development strategies, drawbacks, and conquering the shortcomings of traditional approaches. The following strategies are applied to deliver software products to customers: Top-down or waterfall approach Bottom-up approach We'll cover these two approaches in the following sections. The following key people/roles/stakeholders are involved in software development: Customers: They explore the concept and identify the high-level goal of the system, such as automating the expense claim process Analysts: They analyze the requirements, work with the customer to understand the system, and build the system requirement specifications Designers/architects: They visualize the system, design the baseline architecture, identify the components, interact and handle the nonfunctional requirements, such as scalability and availability Developers: They construct the system from the design and specification documents Testers: They design test cases and verify the implementation Operational folks: They install the software as per the customer's environment Maintenance team: They handle bugs and monitor the system's health Managers: They act as facilitators and keep track of the progress and schedule Exploring the top-down strategy In the top-down strategy, analysts analyze the requirements and hand over the use cases / functional specifications to the designers and architects for designing the system. The architects/designers design the baseline architecture, identify the system components and interactions, and then pass the design over to the developers for implementation. The testers then verify the implementation (might report bugs for fixing), and finally, the software is deployed to the customer's environment. The following diagram depicts the top-down flow from requirement engineering to maintenance: The biggest drawback of this approach is the cost of rework. For instance, if the development team finds that a requirement is not feasible, they consult the design or analysis team. Then the architects or analysts look at the issue and rework the analysis or design. This approach has a cascading effect; the cost of rework is very high. Customers rarely know what they want before they see the system in action. Building everything all at once is a quick way to cause your requirements to change. Even without the difference in cost of requirement changes, you'll have fewer changes if you write the requirements later in the process, when you have a partially working product that the customer can see and everybody has more information about how the product will work. Exploring the bottom-up strategy In the bottom-up strategy, the requirement is broken into small chunks and each chunk is designed, developed, and unit tested separately, and finally, the chunks are integrated. The individual base elements of the system are first specified in great detail. These elements are then linked together to form larger subsystems, which in turn are linked until a complete top-level system is formed. Each subsystem is developed in isolation from the other subsystems, so integration is very important in the bottom-up approach. If integration fails, the cost and effort of building the subsystems gets jeopardized. 
Suppose you are building a healthcare system with three subsystems, namely, patient management, receivable management, and the claims module. If the patient module cannot talk to the claims module, the system fails. The effort of building the patient management and claims management subsystems is just wasted. Agile development methodology would suggest building the functionality feature by feature across subsystems, that is, building a very basic patient management and claims management subsystem to make the functionality work initially, and then adding more to both simultaneously, to support each new feature that is required. Finding the gaps In real-life projects, the following is the percentage of feature usage: 60 percent of features are never used 30 percent of features are occasionally used 10 percent of features are frequently used However, in the top-down approach, the analyst pays attention and brainstorms to create system requirements for all the features. In the top-down approach, time is spent to build a system where 90 percent of features are either not used or occasionally used. Instead, we can identify the high-value features and start building the features instead of paying attention to the low priority features, by using the bottom-up approach. In the bottom-up approach, subsystems are built in isolation from each other, and this causes integration problems. If we prioritize the requirements and start with the highest priority feature, design the feature, build it, unit test it, integrate it, and then show a demo to the stakeholders (customers, analysts, product managers, and so on), we can easily identify the gaps and reduce the risk of rework. We can then pick the next feature and follow the steps (designing, coding, testing, and getting feedback from the customers), and finally integrate the feature with the existing system. This reduces the integration issues of the bottom-up approach. The following figure represents the approach. Each feature is analyzed, designed, coded, tested, and integrated separately. An example of a requirement could be login failure error messages appear red and in bold, while a feature could be incorrect logins are rejected. Typically, it should be a little larger and a useful standalone bit of functionality, rather than a specific single requirement for that functionality. Another problem associated with software development is communication; each stakeholder has a different vocabulary and this causes issues for common understanding. The following are the best practices to minimize software delivery risks: Focus on high-value, frequently used features. Build a common vocabulary for the stakeholders; a domain-specific language that anybody can understand. No more big-fat upfront designing. Evolve the design with the requirements, iteratively. Code to satisfy the current requirement. Don't code for a future requirement, which may or may not be delivered. Follow the YAGNI (You Aren't Going to Need It) principle. Build test the safety net for each requirement. Integrate the code with the system and rerun the regression test. Get feedback from the stakeholders and make immediate changes. BDD suggests the preceding best approaches. Summary This article covered and taught you about the BDD concepts and BDD examples. Resources for Article: Further resources on this subject: Important features of Mockito [article] Progressive Mockito [article] Getting Started with Mockito [article]

Creating a JSF composite component

Packt
22 Oct 2014
9 min read
This article by David Salter, author of the book, NetBeans IDE 8 Cookbook, explains how to create a JSF composite component in NetBeans. (For more resources related to this topic, see here.) JSF is a rich component-based framework, which provides many components that developers can use to enrich their applications. JSF 2 also allows composite components to be easily created, which can then be inserted into other JSF pages in a similar way to any other JSF components such as buttons and labels. In this article, we'll see how to create a custom component that displays an input label and asks for corresponding input. If the input is not validated by the JSF runtime, we'll show an error message. The component is going to look like this: The custom component is built up from three different standard JSF components. On the left, we have a <h:outputText/> component that displays the label. Next, we have a <h:inputText /> component. Finally, we have a <h:message /> component. Putting these three components together like this is a very useful pattern when designing input forms within JSF. Getting ready To create a JSF composite component, you will need to have a working installation of WildFly that has been configured within NetBeans. We will be using the Enterprise download bundle of NetBeans as this includes all of the tools we need without having to download any additional plugins. How to do it… First of all, we need to create a web application and then create a JSF composite component within it. Perform the following steps: Click on File and then New Project…. Select Java Web from the list of Categories and Web Application form the list of Projects. Click on Next. Enter the Project Name value as CompositeComp. Click on Next. Ensure that Add to Enterprise Application is set to <None>, Server is set to WildFly Application Server, Java EE Version is set to Java EE 7 Web, and Context Path is set to /CompositeComp. Click on Next. Click on the checkbox next to JavaServer Faces as we are using this framework. All of the default JSF configurations are correct, so click on the Finish button to create the project. Right-click on the CompositeComp project within the Projects explorer and click on New and then Other…. In the New File dialog, select JavaServer Faces from the list of Categories and JSF Composite Component from the list of File Types. Click on Next. On the New JSF Composite Component dialog, enter the File Name value as inputWithLabel and change the folder to resourcescookbook. Click on Finish to create the custom component. In JSF, custom components are created as Facelets files that are stored within the resources folder of the web application. Within the resources folder, multiple subfolders can exist, each representing a namespace of a custom component. Within each namespace folder, individual custom components are stored with filenames that match the composite component names. We have just created a composite component within the cookbook namespace called inputWithLabel. Within each composite component file, there are two sections: an interface and an implementation. The interface lists all of the attributes that are required by the composite component and the implementation provides the XHTML code to represent the component. Let's now define our component by specifying the interface and the implementation. Perform the following steps: The inputWithLabel.xhtml file should be open for editing. If not, double–click on it within the Projects explorer to open it. 
For our composite component, we need two attributes to be passed into the component. We need the text for the label and the expression language to bind the input box to. Change the interface section of the file to read:    <cc:attribute name="labelValue" />   <cc:attribute name="editValue" /></cc:interface> To render the component, we need to instantiate a <h:outputText /> tag to display the label, a <h:inputText /> tag to receive the input from the user, and a <h:message /> tag to display any errors that are entered for the input field. Change the implementation section of the file to read: <cc:implementation>   <style>   .outputText{width: 100px; }   .inputText{width: 100px; }   .errorText{width: 200px; color: red; }   </style>   <h:panelGrid id="panel" columns="3" columnClasses="outputText, inputText, errorText">       <h:outputText value="#{cc.attrs.labelValue}" />       <h:inputText value="#{cc.attrs.editValue}" id="inputText" />       <h:message for="inputText" />   </h:panelGrid></cc:implementation> Click on the lightbulb on the left-hand side of the editor window and accept the fix to add the h=http://><html       > We can now reference the composite component from within the Facelets page. Add the following code inside the <h:body> code on the page: <h:form id="inputForm">   <cookbook:inputWithLabel labelValue="Forename" editValue="#{personController.person.foreName}"/>   <cookbook:inputWithLabel labelValue="Last Name" editValue="#{personController.person.lastName}"/>   <h:commandButton type="submit" value="Submit" action="#{personController.submit}"/></h:form> This code instantiates two instances of our inputWithLabel composite control and binds them to personController. We haven't got one of those yet, so let's create one and a class to represent a person. Perform the following steps: Create a new Java class within the project. Enter Class Name as Person and Package as com.davidsalter.cookbook.compositecomp. Click on Finish. Add members to the class to represent foreName and lastName: private String foreName;private String lastName; Use the Encapsulate Fields refactoring to generate getters and setters for these members. To allow error messages to be displayed if the foreName and lastName values are inputted incorrectly, we will add some Bean Validation annotations to the attributes of the class. Annotate the foreName member of the class as follows: @NotNull@Size(min=1, max=25)private String foreName; Annotate the lastName member of the class as follows: @NotNull@Size(min=1, max=50)private String lastName; Use the Fix Imports tool to add the required imports for the Bean Validation annotations. Create a new Java class within the project. Enter Class Name as PersonController and Package as com.davidsalter.cookbook.compositecomp. Click on Finish. We need to make the PersonController class an @Named bean so that it can be referenced via expression language from within JSF pages. Annotate the PersonController class as follows: @Named@RequestScopedpublic class PersonController { We need to add a Person instance into PersonController that will be used to transfer data from the JSF page to the named bean. We will also need to add a method onto the bean that will redirect JSF to an output page after the names have been entered. 
Add the following to the PersonController class: private Person person = new Person();public Person getPerson() {   return person;}public void setPerson(Person person) {   this.person = person;}public String submit() {   return "results.xhtml";} The final task before completing our application is to add a results page so we can see what input the user entered. This output page will simply display the values of foreName and lastName that have been entered. Create a new JSF page called results that uses the Facelets syntax. Change the <h:body> tag of this page to read: <h:body>   You Entered:   <h:outputText value="#{personController.person.foreName}" />&nbsp;   <h:outputText value="#{personController.person.lastName}" /></h:body> The application is now complete. Deploy and run the application by right-clicking on the project within the Projects explorer and selecting Run. Note that two instances of the composite component have been created and displayed within the browser. Click on the Submit button without entering any information and note how the error messages are displayed: Enter some valid information and click on Submit, and note how the information entered is echoed back on a second page. How it works… Creating composite components was a new feature added to JSF 2. Creating JSF components was a very tedious job in JSF 1.x, and the designers of JSF 2 thought that the majority of custom components created in JSF could probably be built by adding different existing components together. As it is seen, we've added together three different existing JSF components and made a very useful composite component. It's useful to distinguish between custom components and composite components. Custom components are entirely new components that did not exist before. They are created entirely in Java code and build into frameworks such as PrimeFaces and RichFaces. Composite components are built from existing components and their graphical view is designed in the .xhtml files. There's more... When creating composite components, it may be necessary to specify attributes. The default option is that the attributes are not mandatory when creating a custom component. They can, however, be made mandatory by adding the required="true" attribute to their definition, as follows: <cc:attribute name="labelValue" required="true" /> If an attribute is specified as required, but is not present, a JSF error will be produced, as follows: /index.xhtml @11,88 <cookbook:inputWithLabel> The following attribute(s) are required, but no values have been supplied for them: labelValue. Sometimes, it can be useful to specify a default value for an attribute. This is achieved by adding the default="…" attribute to their definition: <cc:attribute name="labelValue" default="Please enter a value" /> Summary In this article, we have learned to create a JSF composite component using NetBeans. Resources for Article: Further resources on this subject: Creating a Lazarus Component [article] Top Geany features you need to know about [article] Getting to know NetBeans [article]

Introduction to S4 Classes

Packt
22 Oct 2014
36 min read
In this article, by Kelly Black, the author of the R Object-oriented Programming book, will examine S4 classes. The approach associated with S3 classes is more flexible, and the approach associated with S4 classes is a more formal and structured definition. This article is roughly divided into four parts: Class definition: This section gives you an overview of how a class is defined and how the data (slots) associated with the class are specified Class methods: This section gives you an overview of how methods that are associated with a class are defined Inheritance: This section gives you an overview of how child classes that build on the definition of a parent class can be defined Miscellaneous commands: This section explains four commands that can be used to explore a given object or class (For more resources related to this topic, see here.) Introducing the Ant class We will introduce the idea of S4 classes, which is a more formal way to implement classes in R. One of the odd quirks of S4 classes is that you first define the class along with its data, and then, you define the methods separately. As a result of this separation in the way a class is defined, we will first discuss the general idea of how to define a class and its data. We will then discuss how to add a method to an existing class. Next, we will discuss how inheritance is implemented. Finally, we will provide a few notes about other options that do not fit nicely in the categories mentioned earlier. The approach associated with an S4 class is less flexible and requires a bit more forethought in terms of how a class is defined. We will take a different approach and create a complete class from the beginning. In this case, we will build on an idea proposed by Cole and Cheshire. The authors proposed a cellular automata simulation to mimic how ants move within a colony. As part of a simulation, we will assume that we need an Ant class. We will depart from the paper and assume that the ants are not homogeneous. We will then assume that there are male (drones) and female ants, and the females can be either workers or soldiers. We will need an ant base class, which is discussed in the first two sections of this article as a means to demonstrate how to create an S4 class. In the third section, we will define a hierarchy of classes based on the original Ant class. This hierarchy includes male and female classes. The worker class will then inherit from the female class, and the soldier class will inherit from the worker class. Defining an S4 class We will define the base Ant class called Ant. The class is represented in the following figure. The class is used to represent the fundamental aspects that we need to track for an ant, and we focus on creating the class and data. The methods are constructed in a separate step and are examined in the next section. A class is created using the setClass command. When creating the class, we specify the data in a character vector using the slots argument. The slots argument is a vector of character objects and represents the names of the data elements. These elements are often referred to as the slots within the class. Some of the arguments that we will discuss here are optional, but it is a good practice to use them. In particular, we will specify a set of default values (the prototype) and a function to check whether the data is consistent (a validity function). Also, it is a good practice to keep all of the steps necessary to create a class within the same file. 
To that end, we assume that you will not be entering the commands from the command line. They are all found within a single file, so the formatting of the examples will reflect the lack of the R workspace markers. The first step is to define the class using the setClass command. This command defines a new class by name, and it also returns a generator that can be used to construct an object for the new class. The first argument is the name of the class followed by the data to be included in the class. We will also include the default initial values and the definition of the function used to ensure that the data is consistent. The validity function can be set separately using the setValidity command. The data types for the slots are character values that match the names of the R data types which will be returned by the class command: # Define the base Ant class. Ant <- setClass(    # Set the name of the class    "Ant",    # Name the data types (slots) that the class will track    slots = c(        Length="numeric",           # the length (size) of this ant.               Position="numeric",         # the position of this ant.                                    # (a 3 vector!)               pA="numeric",               # Probability that an ant will                                    # transition from active to                                     # inactive.        pI="numeric",               # Probability that an ant will                                    # transition from inactive to                                    # active.          ActivityLevel="numeric"     # The ant's current activity                            # level.        ),    # Set the default values for the slots. (optional)    prototype=list(        Length=4.0,        Position=c(0.0,0.0,0.0),        pA=0.05,        pI=0.1,        ActivityLevel=0.5        ),    # Make a function that can test to see if the data is consistent.    # (optional)    validity=function(object)    {        # Check to see if the activity level and length is        # non-negative.        # See the discussion on the @ notation in the text below.        if(object@ActivityLevel<0.0) {            return("Error: The activity level is negative")        } else if (object@Length<0.0) {            return("Error: The length is negative")        }        return(TRUE)  }    ) With this definition, there are two ways to create an Ant object: one is using the new command and the other is using the Ant generator, which is created after the successful execution of the setClass command. Note that in the following examples, the default values can be overridden when a new object is created: > ant1 <- new("Ant") > ant1 An object of class "Ant" Slot "Length": [1] 4 Slot "Position": [1] 0 0 0 Slot "pA": [1] 0.05 Slot "pI": [1] 0.1 Slot "ActivityLevel": [1] 0.5 We can specify the default values when creating a new object. > ant2 <- new("Ant",Length=4.5) > ant2 An object of class "Ant" Slot "Length": [1] 4.5 Slot "Position": [1] 0 0 0 Slot "pA": [1] 0.05 Slot "pI": [1] 0.1 Slot "ActivityLevel": [1] 0.5 The object can also be created using the generator that is defined when creating the class using the setClass command. 
> ant3 <- Ant(Length=5.0,Position=c(3.0,2.0,1.0)) > ant3 An object of class "Ant" Slot "Length": [1] 5 Slot "Position": [1] 3 2 1 Slot "pA": [1] 0.05 Slot "pI": [1] 0.1 Slot "ActivityLevel": [1] 0.5 > class(ant3) [1] "Ant" attr(,"package") [1] ".GlobalEnv" > getClass(ant3) An object of class "Ant" Slot "Length": [1] 5 Slot "Position": [1] 3 2 1 Slot "pA": [1] 0.05 Slot "pI": [1] 0.1 Slot "ActivityLevel": [1] 0.5 When the object is created and a validity function is defined, the validity function will determine whether the given initial values are consistent: > ant4 <- Ant(Length=-1.0,Position=c(3.0,2.0,1.0)) Error in validObject(.Object) : invalid class “Ant” object: Error: The length is negative > ant4 Error: object 'ant4' not found In the last steps, the attempted creation of ant4, an error message is displayed. The new variable, ant4, was not created. If you wish to test whether the object was created, you must be careful to ensure that the variable name used does not exist prior to the attempted creation of the new object. Also, the validity function is only executed when a request to create a new object is made. If you change the values of the data later, the validity function is not called. Before we move on to discuss methods, we need to figure out how to get access to the data within an object. The syntax is different from other data structures, and we use @ to indicate that we want to access an element from within the object. This can be used to get a copy of the value or to set the value of an element: > adomAnt <- Ant(Length=5.0,Position=c(-1.0,2.0,1.0)) > adomAnt@Length [1] 5 > adomAnt@Position [1] -1 2 1 > adomAnt@ActivityLevel = -5.0 > adomAnt@ActivityLevel [1] -5 Note that in the preceding example, we set a value for the activity level that is not allowed according to the validity function. Since it was set after the object was created, no check is performed. The validity function is only executed during the creation of the object or if the validObject function is called. One final note: it is generally a bad form to work directly with an element within an object, and a better practice is to create methods that obtain or change an individual element within an object. It is a best practice to be careful about the encapsulation of an object's slots. The R environment does not recognize the idea of private versus public data, and the onus is on the programmer to maintain discipline with respect to this important principle. Defining methods for an S4 class When a new class is defined, the data elements are defined, but the methods associated with the class are defined on a separate stage. Methods are implemented in a manner similar to the one used for S3 classes. A function is defined, and the way the function reacts depends on its arguments. If a method is used to change one of the data components of an object, then it must return a copy of the object, just as we saw with S3 classes. The creation of new methods is discussed in two steps. We will first discuss how to define a method for a class where the method does not yet exist. Next, we will discuss some predefined methods that are available and how to extend them to accommodate a new class. Defining new methods The first step to create a new method is to reserve the name. Some functions are included by default, such as the initialize, print or show commands, and we will later see how to extend them. To reserve a new name, you must first use the setGeneric command. 
At the very least, you need to give this command the name of the function as a character string. As in the previous section, we will use more options as an attempt to practice safe programming. The methods to be created are shown in preceding figure. There are a number of methods, but we will only define four here. All of the methods are accessors; they are used to either get or set values of the data components. We will only define the methods associated with the length slot in this text, and you can see the rest of the code in the examples available on the website. The other methods closely follow the code used for the length slot. There are two methods to set the activity level, and those codes are examined separately to provide an example of how a method can be overloaded. First, we will define the methods to get and set the length. We will first create the method to get the length, as it is a little more straightforward. The first step is to tell R that a new function will be defined, and the name is reserved using the setGeneric command. The method that is called when an Ant object is passed to the command is defined using the setMethod command: setGeneric(name="GetLength",            def=function(antie)            {                standardGeneric("GetLength")            }            ) setMethod(f="GetLength",          signature="Ant",          definition=function(antie)          {              return(antie@Length)          }          ) Now that the GetLength function is defined, it can be used to get the length component for an Ant object: > ant2 <- new("Ant",Length=4.5) > GetLength(ant2) [1] 4.5 The method to set the length is similar, but there is one difference. The method must return a copy of the object passed to it, and it requires an additional argument: setGeneric(name="SetLength",            def=function(antie,newLength)            {                standardGeneric("SetLength")            }            ) setMethod(f="SetLength",          signature="Ant",          definition=function(antie,newLength)          {              if(newLength>0.0) {                  antie@Length = newLength              } else {                  warning("Error - invalid length passed");              }              return(antie)           }          ) When setting the length, the new object must be set using the object that is passed back from the function: > ant2 <- new("Ant",Length=4.5) > ant2@Length [1] 4.5 > ant2 <- SetLength(ant2,6.25) > ant2@Length [1] 6.25 Polymorphism The definition of S4 classes allows methods to be overloaded. That is, multiple functions that have the same name can be defined, and the function that is executed is determined by the arguments' types. We will now examine this idea in the context of defining the methods used to set the activity level in the Ant class. Two or more functions can have the same name, but the types of the arguments passed to them differ. There are two methods to set the activity level. One takes a floating point number and sets the activity level based to the value passed to it. The other takes a logical value and sets the activity level to zero if the argument is FALSE; otherwise, it sets it to a default value. The idea is to use the signature option in the setMethod command. It is set to a vector of class names, and the order of the class names is used to determine which function should be called for a given set of arguments. 
An important thing to note, though, is that the prototype defined in the setGeneric command defines the names of the arguments, and the argument names in both methods must be exactly the same and in the same order: setGeneric(name="SetActivityLevel",            def=function(antie,activity)            {                standardGeneric("SetActivityLevel")            }          ) setMethod(f="SetActivityLevel",          signature=c("Ant","logical"),          definition=function(antie,activity)          {              if(activity) {                  antie@ActivityLevel = 0.1              } else {                  antie@ActivityLevel = 0.0              }              return(antie)          }          ) setMethod(f="SetActivityLevel",          signature=c("Ant","numeric"),          definition=function(antie,activity)          {              if(activity>=0.0) {                  antie@ActivityLevel = activity              } else {                  warning("The activity level cannot be negative")              }              return(antie)          }          ) Once the two methods are defined, R will use the class names of the arguments to determine which function to call in a given context: > ant2 <- SetActivityLevel(ant2,0.1) > ant2@ActivityLevel [1] 0.1 > ant2 <- SetActivityLevel(ant2,FALSE) > ant2@ActivityLevel [1] 0 There are two additional data types recognized by the signature option: ANY and missing. These can be used to match any data type or a missing value. Also note that we have left out the use of ellipses (…) for the arguments in the preceding examples. The … argument must be the last argument and is used to indicate that any remaining parameters are passed as they appear in the original call to the function. Ellipses can make the use of the overloaded functions in a more flexible way than indicated. More information can be found using the help(dotsMethods) command. Extending the existing methods There are a number of generic functions defined in a basic R session, and we will examine how to extend an existing function. For example, the show command is a generic function whose behavior depends on the class name of the object passed to it. Since the function name is already reserved, the setGeneric command is not used to reserve the function name. The show command is a standard example. The command takes an object and converts it to a character value to be displayed. The command defines how other commands print out and express an object. In the preceding example, a new class called coordinate is defined; this keeps track of two values, x and y, for a coordinate, and we will add one method to set the values of the coordinate: # Define the base coordinates class. Coordinate <- setClass(    # Set the name of the class    "Coordinate",    # Name the data types (slots) that the class will track    slots = c(        x="numeric", # the x position      y="numeric"   # the y position        ),    # Set the default values for the slots. (optional)    prototype=list(        x=0.0,        y=0.0        ),    # Make a function that can test to see if the data    # is consistent.    # (optional)    # This is not called if you have an initialize    # function defined!    
validity=function(object)    {        # Check to see if the coordinate is outside of a circle of        # radius 100        print("Checking the validity of the point")        if(object@x*object@x+object@y*object@y>100.0*100.0) {        return(paste("Error: The point is too far ",        "away from the origin."))       }        return(TRUE)    }    ) # Add a method to set the value of a coordinate setGeneric(name="SetPoint",            def=function(coord,x,y)            {                standardGeneric("SetPoint")            }            ) setMethod(f="SetPoint",          signature="Coordinate",          def=function(coord,x,y)          {              print("Setting the point")              coord@x = x              coord@y = y              return(coord)          }          ) We will now extend the show method so that it can properly react to a coordinate object. As it is reserved, we do not have to use the setGeneric command but can simply define it: setMethod(f="show",          signature="Coordinate",          def=function(object)          {              cat("The coordinate is X: ",object@x," Y: ",object@y,"n")          }          ) As noted previously, the signature option must match the original definition of a function that you wish to extend. You can use the getMethod('show') command to examine the signature for the function. With the new method in place, the show command is used to convert a coordinate object to a string when it is printed: > point <- Coordinate(x=1,y=5) [1] "Checking the validity of the point" > print(point) The coordinate is X: 1 Y: 5 > point The coordinate is X: 1 Y: 5 Another import predefined method is the initialize command. If the initialize command is created for a class, then it is called when a new object is created. That is, you can define an initialize function to act as a constructor. If an initialize function is defined for a class, the validator is not called. You have to manually call the validator using the validObject command. Also note that the prototype for the initialize command requires the name of the first argument to be an object, and the default values are given for the remaining arguments in case a new object is created without specifying any values for the slots: setMethod(f="initialize",          signature="Coordinate",          def=function(.Object,x=0.0,y=0.0)          {              print("Checking the point")              .Object = SetPoint(.Object,x,y)              validObject(.Object) # you must explicitly call              # the inspector              return(.Object)          }          ) Now, when you create a new object, the new initialize function is called immediately: > point <- Coordinate(x=2,y=3) [1] "Checking the point" [1] "Setting the point" [1] "Checking the validity of the point" > point The coordinate is X: 2 Y: 3 Using the initialize and validity functions together can result in surprising code paths. This is especially true when inheriting from one class and calling the initialize function of a parent class from the child class. It is important to test codes to ensure that the code is executing in the order that you expect. Personally, I try to use either validator or constructor, but not both. Inheritance The Ant class discussed in the first section of this article provided an example of how to define a class and then define the methods associated with the class. We will now extend the class by creating new classes that inherit from the base class. 
The original Ant class is shown in the preceding figure, and now, we will propose four classes that inherit from the base class. Two new classes that inherit from Ant are the Male and Female classes. The Worker class inherits from the Female class, while the Soldier class inherits from the Worker class. The relationships are shown in the following figure. The code for all of the new classes is included in our example codes available at our website, but we will only focus on two of the new classes in the text to keep our discussion more focused. Relationships between the classes that inherit from the base Ant class When a new class is created, it can inherit from an existing class by setting the contains parameter. This can be set to a vector of classes for multiple inheritance. However, we will focus on single inheritance here to avoid discussing the complications associated with determining how R finds a method when there are collisions. Assuming that the Ant base class given in the first section has already been defined in the current session, the child classes can be defined. The details for the two classes, Female and Worker, are discussed here. First, the FemaleAnt class is defined. It adds a new slot, Food, and inherits from the Ant class. Before defining the FemaleAnt class, we add a caveat about the Ant class. The base Ant class should have been a virtual class. We would not ordinarily create an object of the Ant class. We did not make it a virtual class in order to simplify our introduction. We are wiser now and wish to demonstrate how to define a virtual class. The FemaleAnt class will be a virtual class to demonstrate the idea. We will make it a virtual class by including the VIRTUAL character string in the contains parameter, and it will not be possible to create an object of the FemaleAnt class: # Define the female ant class. FemaleAnt <- setClass(    # Set the name of the class    "FemaleAnt",    # Name the data types (slots) that the class will track    slots = c(        Food ="numeric"     # The number of food units carried        ),    # Set the default values for the slots. (optional)    prototype=list(        Food=0        ),    # Make a function that can test to see if the data is consistent.    # (optional)    # This is not called if you have an initialize function defined!    validity=function(object)    {        print("Validity: FemaleAnt")        # Check to see if the number of offspring is non-negative.        if(object@Food<0) {            return("Error: The number of food units is negative")        }        return(TRUE)    },    # This class inherits from the Ant class    contains=c("Ant","VIRTUAL")    ) Now, we will define a WorkerAnt class that inherits from the FemaleAnt class: # Define the worker ant class. WorkerAnt <- setClass(    # Set the name of the class    "WorkerAnt",    # Name the data types (slots) that the class will track    slots = c(        Foraging ="logical",   # Whether or not the ant is actively                                # looking for food        Alarm = "logical"       # Whether or not the ant is actively                                # announcing an alarm.               ),    # Set the default values for the slots. (optional)    prototype=list(        Foraging = FALSE,        Alarm   = FALSE        ),    # Make a function that can test to see if the data is consistent.    # (optional)    # This is not called if you have an initialize function defined!    
validity=function(object)    {        print("Validity: WorkerAnt")        return(TRUE)    },    # This class inherits from the FemaleAnt class    contains="FemaleAnt"    ) When a new worker is created, it inherits from the FemaleAnt class: > worker <- WorkerAnt(Position=c(-1,3,5),Length=2.5) > worker An object of class "WorkerAnt" Slot "Foraging": [1] FALSE Slot "Alarm": [1] FALSE Slot "Food": [1] 0 Slot "Length": [1] 2.5 Slot "Position": [1] -1 3 5 Slot "pA": [1] 0.05 Slot "pI": [1] 0.1 Slot "ActivityLevel": [1] 0.5 > worker <- SetLength(worker,3.5) > GetLength(worker) [1] 3.5 We have not defined the relevant methods in the preceding examples. The code is available in our set of examples, and we will not discuss most of it to keep this discussion more focused. We will examine the initialize method, though. The reason to do so is to explore the callNextMethod command. The callNextMethod command is used to request that R searches for and executes a method of the same name that is a member of a parent class. We chose the initialize method because a common task is to build a chain of constructors that initialize the data associated for the class associated with each constructor. We have not yet created any of the initialize methods and start with the base Ant class: setMethod(f="initialize",          signature="Ant",          def=function(.Object,Length=4,Position=c(0.0,0.0,0.0))          {              print("Ant initialize")              .Object = SetLength(.Object,Length)              .Object = SetPosition(.Object,Position)              #validObject(.Object) # you must explicitly call the inspector              return(.Object)          }          ) The constructor takes three arguments: the object itself (.Object), the length, and the position of the ant, and default values are given in case none are provided when a new object is created. The validObject command is commented out. You should try uncommenting the line and create new objects to see whether the validator can in turn call the initialize method. Another important feature is that the initialize method returns a copy of the object. The initialize command is created for the FemaleAnt class, and the arguments to the initialize command should be respected when the request to callNextMethod for the next function is made: setMethod(f="initialize",          signature="FemaleAnt",          def=function(.Object,Length=4,Position=c(0.0,0.0,0.0))          {              print("FemaleAnt initialize ")              .Object <- callNextMethod(.Object,Length,Position)              #validObject(.Object) # you must explicitly call              #the inspector              return(.Object)          }          ) The callNextMethod command is used to call the initialize method associated with the Ant class. The arguments are arranged to match the definition of the Ant class, and it returns a new copy of the current object. Finally, the initialize function for the WorkerAnt class is created. 
It also makes use of callNextMethod to ensure that the method of the same name associated with the parent class is also called: setMethod(f="initialize",          signature="WorkerAnt",          def=function(.Object,Length=4,Position=c(0.0,0.0,0.0))          {              print("WorkerAnt initialize")             .Object <- callNextMethod(.Object,Length,Position)              #validObject(.Object) # you must explicitly call the inspector              return(.Object)          }          ) Now, when a new object of the WorkerAnt class is created, the initialize method associated with the WorkerAnt class is called, and each associated method for each parent class is called in turn: > worker <- WorkerAnt(Position=c(-1,3,5),Length=2.5) [1] "WorkerAnt initialize" [1] "FemaleAnt initialize " [1] "Ant initialize" Miscellaneous notes In the previous sections, we discussed how to create a new class as well as how to define a hierarchy of classes. We will now discuss four commands that are helpful when working with classes: the slotNames, getSlots, getClass, and slot commands. Each command is briefly discussed in turn, and it is assumed that the Ant, FemaleAnt, and WorkerAnt classes that are given in the previous section are defined in the current workspace. The first command, the slotnames command, is used to list the data components of an object of some class. It returns the names of each component as a vector of characters: > worker <- WorkerAnt(Position=c(1,2,3),Length=5.6) > slotNames(worker) [1] "Foraging"     "Alarm"         "Food"         "Length"       [5] "Position"     "pA"           "pI"           "ActivityLevel" The getSlots command is similar to the slotNames command. The difference is that the argument is a character variable which is the name of the class you want to investigate: > getSlots("WorkerAnt")      Foraging         Alarm         Food       Length     Position    "logical"     "logical"     "numeric"     "numeric"     "numeric"            pA           pI ActivityLevel    "numeric"     "numeric"     "numeric" The getClass command has two forms. If the argument is an object, the command will print out the details for the object. If the argument is a character string, then it will print out the details for the class whose name is the same as the argument: > worker <- WorkerAnt(Position=c(1,2,3),Length=5.6) > getClass(worker) An object of class "WorkerAnt" Slot "Foraging": [1] FALSE Slot "Alarm": [1] FALSE Slot "Food": [1] 0 Slot "Length": [1] 5.6 Slot "Position": [1] 1 2 3 Slot "pA": [1] 0.05 Slot "pI": [1] 0.1 Slot "ActivityLevel": [1] 0.5 > getClass("WorkerAnt") Class "WorkerAnt" [in ".GlobalEnv"] Slots:                                                                            Name:       Foraging         Alarm         Food       Length     Position Class:       logical      logical       numeric       numeric       numeric                                                Name:             pA           pI ActivityLevel Class:       numeric       numeric       numeric Extends: Class "FemaleAnt", directly Class "Ant", by class "FemaleAnt", distance 2 Known Subclasses: "SoldierAnt" Finally, we will examine the slot command. The slot command is used to retrieve the value of a slot for a given object based on the name of the slot: > worker <- WorkerAnt(Position=c(1,2,3),Length=5.6) > slot(worker,"Position") [1] 1 2 3 Summary We introduced the idea of an S4 class and provided several examples. The S4 class is constructed in at least two stages. 
The first stage is to define the name of the class and the associated data components. The methods associated with the class are then defined in a separate step. In addition to defining a class and its methods, the idea of inheritance was explored, with a partial example that built on the base class defined in the first section of the article. The use of the callNextMethod command to invoke the method of the same name in a parent class was also explored, and the example used the constructor (the initialize method) to demonstrate how to build a chain of constructors. Finally, four useful commands were explained, each offering a different way to get information about a class or about an object of a given class. For more information, you can refer to Mobile Cellular Automata Models of Ant Behavior: Movement Activity of Leptothorax allardycei, Blaine J. Cole and David Cheshire, The American Naturalist. Resources for Article: Further resources on this subject: Using R for Statistics, Research, and Graphics [Article] Learning Data Analytics with R and Hadoop [Article] First steps with R [Article]

Implementing Stacks using JavaScript

Packt
22 Oct 2014
10 min read
 In this article by Loiane Groner, author of the book Learning JavaScript Data Structures and Algorithms, we will discuss the stacks. (For more resources related to this topic, see here.) A stack is an ordered collection of items that follows the LIFO (short for Last In First Out) principle. The addition of new items or the removal of existing items takes place at the same end. The end of the stack is known as the top and the opposite is known as the base. The newest elements are near the top, and the oldest elements are near the base. We have several examples of stacks in real life, for example, a pile of books, as we can see in the following image, or a stack of trays from a cafeteria or food court: A stack is also used by compilers in programming languages and by computer memory to store variables and method calls. Creating a stack We are going to create our own class to represent a stack. Let's start from the basics and declare our class: function Stack() {   //properties and methods go here} First, we need a data structure that will store the elements of the stack. We can use an array to do this: Var items = []; Next, we need to declare the methods available for our stack: push(element(s)): This adds a new item (or several items) to the top of the stack. pop(): This removes the top item from the stack. It also returns the removed element. peek(): This returns the top element from the stack. The stack is not modified (it does not remove the element; it only returns the element for information purposes). isEmpty(): This returns true if the stack does not contain any elements and false if the size of the stack is bigger than 0. clear(): This removes all the elements of the stack. size(): This returns how many elements the stack contains. It is similar to the length property of an array. The first method we will implement is the push method. This method will be responsible for adding new elements to the stack with one very important detail: we can only add new items to the top of the stack, meaning at the end of the stack. The push method is represented as follows: this.push = function(element){   items.push(element);}; As we are using an array to store the elements of the stack, we can use the push method from the JavaScript array class. Next, we are going to implement the pop method. This method will be responsible for removing the items from the stack. As the stack uses the LIFO principle, the last item that we added is the one that is removed. For this reason, we can use the pop method from the JavaScript array class. The pop method is represented as follows: this.pop = function(){   return items.pop();}; With the push and pop methods being the only methods available for adding and removing items from the stack, the LIFO principle will apply to our own Stack class. Now, let's implement some additional helper methods for our class. If we would like to know what the last item added to our stack was, we can use the peek method. This method will return the item from the top of the stack: this.peek = function(){   return items[items.length-1];}; As we are using an array to store the items internally, we can obtain the last item from an array using length - 1 as follows: For example, in the previous diagram, we have a stack with three items; therefore, the length of the internal array is 3. The last position used in the internal array is 2. As a result, the length - 1 (3 - 1) is 2! 
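To make the difference between the pop and peek methods concrete, here is a short usage sketch; it assumes only the Stack class as defined so far (with the push, pop, and peek methods), and the variable name demo is just for illustration:

var demo = new Stack();
demo.push(5);
demo.push(8);
console.log(demo.peek()); // outputs 8; the element stays on the stack
console.log(demo.pop());  // outputs 8 and removes it from the stack
console.log(demo.peek()); // outputs 5, the new top of the stack

In other words, peek only inspects the element at the top of the stack, while pop returns that element and removes it.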
The next method is the isEmpty method, which returns true if the stack is empty (no item has been added) and false otherwise: this.isEmpty = function(){   return items.length == 0;}; Using the isEmpty method, we can simply verify whether the length of the internal array is 0. Similar to the length property from the array class, we can also implement length for our Stack class. For collections, we usually use the term "size" instead of "length". And again, as we are using an array to store the items internally, we can simply return its length: this.size = function(){   return items.length;}; Finally, we are going to implement the clear method. The clear method simply empties the stack, removing all its elements. The simplest way of implementing this method is as follows: this.clear = function(){   items = [];}; An alternative implementation would be calling the pop method until the stack is empty. And we are done! Our Stack class is implemented. Just to make our lives easier during the examples, to help us inspect the contents of our stack, let's implement a helper method called print that is going to output the content of the stack on the console: this.print = function(){   console.log(items.toString());}; And now we are really done! The complete Stack class Let's take a look at how our Stack class looks after its full implementation: function Stack() {    var items = [];    this.push = function(element){       items.push(element);   };    this.pop = function(){       return items.pop();   };    this.peek = function(){       return items[items.length-1];   };    this.isEmpty = function(){       return items.length == 0;   };    this.size = function(){       return items.length;   };    this.clear = function(){       items = [];   };    this.print = function(){       console.log(items.toString());   };} Using the Stack class Before we dive into some examples, we need to learn how to use the Stack class. The first thing we need to do is instantiate the Stack class we just created. Next, we can verify whether it is empty (the output is true because we have not added any elements to our stack yet): var stack = new Stack();console.log(stack.isEmpty()); //outputs true Next, let's add some elements to it (let's push the numbers 5 and 8; you can add any element type to the stack): stack.push(5);stack.push(8); If we call the peek method, the output will be the number 8 because it was the last element that was added to the stack: console.log(stack.peek()); // outputs 8 Let's also add another element: stack.push(11);console.log(stack.size()); // outputs 3console.log(stack.isEmpty()); //outputs false We added the element 11. If we call the size method, it will give the output as 3, because we have three elements in our stack (5, 8, and 11). Also, if we call the isEmpty method, the output will be false (we have three elements in our stack). Finally, let's add another element: stack.push(15); The following diagram shows all the push operations we have executed so far and the current status of our stack: Next, let's remove two elements from the stack by calling the pop method twice: stack.pop();stack.pop();console.log(stack.size()); // outputs 2stack.print(); // outputs [5, 8] Before we called the pop method twice, our stack had four elements in it. After the execution of the pop method two times, the stack now has only two elements: 5 and 8. 
The following diagram exemplifies the execution of the pop method:

Decimal to binary

Now that we know how to use the Stack class, let's use it to solve some Computer Science problems. You are probably already aware of the decimal base. However, binary representation is very important in Computer Science as everything in a computer is represented by binary digits (0 and 1). Without the ability to convert back and forth between decimal and binary numbers, it would be a little bit difficult to communicate with a computer.

To convert a decimal number to a binary representation, we can divide the number by 2 (binary is a base 2 number system) until the division result is 0. As an example, we will convert the number 10 into binary digits:

This conversion is one of the first things you learn in college (Computer Science classes). The following is our algorithm:

function divideBy2(decNumber){
  var remStack = new Stack(),
      rem,
      binaryString = '';
  while (decNumber > 0){ //{1}
    rem = Math.floor(decNumber % 2); //{2}
    remStack.push(rem); //{3}
    decNumber = Math.floor(decNumber / 2); //{4}
  }
  while (!remStack.isEmpty()){ //{5}
    binaryString += remStack.pop().toString();
  }
  return binaryString;
}

In this code, while the division result is not zero (line {1}), we get the remainder of the division (mod) and push it onto the stack (lines {2} and {3}), and finally, we update the number that will be divided by 2 (line {4}). An important observation: JavaScript has a numeric data type, but it does not distinguish integers from floating points. For this reason, we need to use the Math.floor function to obtain only the integer value from the division operations. And finally, we pop the elements from the stack until it is empty, concatenating the elements that were removed from the stack into a string (line {5}).

We can try the previous algorithm and output its result on the console using the following code:

console.log(divideBy2(233));
console.log(divideBy2(10));
console.log(divideBy2(1000));

We can easily modify the previous algorithm to make it work as a converter from decimal to any base. Instead of dividing the decimal number by 2, we can pass the desired base as an argument to the method and use it in the divisions, as shown in the following algorithm:

function baseConverter(decNumber, base){
  var remStack = new Stack(),
      rem,
      baseString = '',
      digits = '0123456789ABCDEF'; //{6}
  while (decNumber > 0){
    rem = Math.floor(decNumber % base);
    remStack.push(rem);
    decNumber = Math.floor(decNumber / base);
  }
  while (!remStack.isEmpty()){
    baseString += digits[remStack.pop()]; //{7}
  }
  return baseString;
}

There is one more thing we need to change. In the conversion from decimal to binary, the remainders will be 0 or 1; in the conversion from decimal to octal, the remainders will be from 0 to 7; but in the conversion from decimal to hexadecimal, the remainders can be 0 to 9 plus the letters A to F (values 10 to 15). For this reason, we need to map these values to the correct digits (lines {6} and {7}).

We can use the previous algorithm and output its result on the console as follows:

console.log(baseConverter(100345, 2));
console.log(baseConverter(100345, 8));
console.log(baseConverter(100345, 16));
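Beyond number conversion, stacks solve many other everyday problems. As an extra illustration (this example is not part of the original excerpt; it is a sketch that assumes the Stack class shown earlier is in scope), here is the classic balanced-brackets check built on the same push, pop, and isEmpty methods:

function parenthesesChecker(symbols){
  var stack = new Stack(),
      opens = '([{',
      closers = ')]}',
      balanced = true,
      index = 0,
      symbol, top;
  while (index < symbols.length && balanced){
    symbol = symbols.charAt(index);
    if (opens.indexOf(symbol) >= 0){
      // opening symbol: remember it for later
      stack.push(symbol);
    } else if (closers.indexOf(symbol) >= 0){
      // closing symbol: it must match the most recent opening symbol
      if (stack.isEmpty()){
        balanced = false;
      } else {
        top = stack.pop();
        if (opens.indexOf(top) !== closers.indexOf(symbol)){
          balanced = false;
        }
      }
    }
    index++;
  }
  // everything opened must also have been closed
  return balanced && stack.isEmpty();
}

console.log(parenthesesChecker('{([])}')); // outputs true
console.log(parenthesesChecker('{(}[)]')); // outputs false

The second call prints false because the closing } is popped against the most recently opened (, which does not match; this "most recent first" behavior is exactly what the LIFO principle gives us.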
We also covered a very famous example of how to use a stack. Resources for Article: Further resources on this subject: Organizing Backbone Applications - Structure, Optimize, and Deploy [article] Introduction to Modern OpenGL [article] Customizing the Backend Editing in TYPO3 Templates [article]

Handle Web Applications

Packt
20 Oct 2014
13 min read
In this article by Ivo Balbaert author of Dart Cookbook, we will cover the following recipes: Sanitizing HTML Using a browser's local storage Using an application cache to work offline Preventing an onSubmit event from reloading the page (For more resources related to this topic, see here.) Sanitizing HTML We've all heard of (or perhaps even experienced) cross-site scripting (XSS) attacks, where evil minded attackers try to inject client-side script or SQL statements into web pages. This could be done to gain access to session cookies or database data, or to get elevated access-privileges to sensitive page content. To verify an HTML document and produce a new HTML document that preserves only whatever tags are designated safe is called sanitizing the HTML. How to do it... Look at the web project sanitization. Run the following script and see how the text content and default sanitization works: See how the default sanitization works using the following code: var elem1 = new Element.html('<div class="foo">content</div>'); document.body.children.add(elem1); var elem2 = new Element.html('<script class="foo">evil content</script><p>ok?</p>'); document.body.children.add(elem2); The text content and ok? from elem1 and elem2 are displayed, but the console gives the message Removing disallowed element <SCRIPT>. So a script is removed before it can do harm. Sanitize using HtmlEscape, which is mainly used with user-generated content: import 'dart:convert' show HtmlEscape; In main(), use the following code: var unsafe = '<script class="foo">evil   content</script><p>ok?</p>'; var sanitizer = const HtmlEscape(); print(sanitizer.convert(unsafe)); This prints the following output to the console: &lt;script class=&quot;foo&quot;&gt;evil   content&lt;&#x2F;script&gt;&lt;p&gt;ok?&lt;&#x2F;p&gt; Sanitize using node validation. The following code forbids the use of a <p> tag in node1; only <a> tags are allowed: var html_string = '<p class="note">a note aside</p>'; var node1 = new Element.html(        html_string,        validator: new NodeValidatorBuilder()          ..allowElement('a', attributes: ['href'])      ); The console prints the following output: Removing disallowed element <p> Breaking on exception: Bad state: No elements A NullTreeSanitizer for no validation is used as follows: final allHtml = const NullTreeSanitizer(); class NullTreeSanitizer implements NodeTreeSanitizer {      const NullTreeSanitizer();      void sanitizeTree(Node node) {} } It can also be used as follows: var elem3 = new Element.html('<p>a text</p>'); elem3.setInnerHtml(html_string, treeSanitizer: allHtml); How it works... First, we have very good news: Dart automatically sanitizes all methods through which HTML elements are constructed, such as new Element.html(), Element.innerHtml(), and a few others. With them, you can build HTML hardcoded, but also through string interpolation, which entails more risks. The default sanitization removes all scriptable elements and attributes. If you want to escape all characters in a string so that they are transformed into HTML special characters (such as ;&#x2F for a /), use the class HTMLEscape from dart:convert as shown in the second step. The default behavior is to escape apostrophes, greater than/less than, quotes, and slashes. If your application is using untrusted HTML to put in variables, it is strongly advised to use a validation scheme, which only covers the syntax you expect users to feed into your app. 
This is possible because Element.html() has the following optional arguments: Element.html(String html, {NodeValidator validator, NodeTreeSanitizer treeSanitizer}) In step 3, only <a> was an allowed tag. By adding more allowElement rules in cascade, you can allow more tags. Using allowHtml5() permits all HTML5 tags. If you want to remove all control in some cases (perhaps you are dealing with known safe HTML and need to bypass sanitization for performance reasons), you can add the class NullTreeSanitizer to your code, which has no control at all and defines an object allHtml, as shown in step 4. Then, use setInnerHtml() with an optional named attribute treeSanitizer set to allHtml. Using a browser's local storage Local storage (also called the Web Storage API) is widely supported in modern browsers. It enables the application's data to be persisted locally (on the client side) as a map-like structure: a dictionary of key-value string pairs, in fact using JSON strings to store and retrieve data. It provides our application with an offline mode of functioning when the server is not available to store the data in a database. Local storage does not expire, but every application can only access its own data up to a certain limit depending on the browser. In addition, of course, different browsers can't access each other's stores. How to do it... Look at the following example, the local_storage.dart file: import 'dart:html';  Storage local = window.localStorage;  void main() { var job1 = new Job(1, "Web Developer", 6500, "Dart Unlimited") ; Perform the following steps to use the browser's local storage: Write to a local storage with the key Job:1 using the following code: local["Job:${job1.id}"] = job1.toJson; ButtonElement bel = querySelector('#readls'); bel.onClick.listen(readShowData); } A click on the button checks to see whether the key Job:1 can be found in the local storage, and, if so, reads the data in. This is then shown in the data <div>: readShowData(Event e) {    var key = 'Job:1';    if(local.containsKey(key)) { // read data from local storage:    String job = local[key];    querySelector('#data').appendText(job); } }   class Job { int id; String type; int salary; String company; Job(this.id, this.type, this.salary, this.company); String get toJson => '{ "type": "$type", "salary": "$salary", "company": "$company" } '; } The following screenshot depicts how data is stored in and retrieved from a local storage: How it works... You can store data with a certain key in the local storage from the Window class as follows using window.localStorage[key] = data; (both key and data are Strings). You can retrieve it with var data = window.localStorage[key];. In our code, we used the abbreviation Storage local = window.localStorage; because local is a map. You can check the existence of this piece of data in the local storage with containsKey(key); in Chrome (also in other browsers via Developer Tools). You can verify this by navigating to Extra | Tools | Resources | Local Storage (as shown in the previous screenshot), window.localStorage also has a length property; you can query whether it contains something with isEmpty, and you can loop through all stored values using the following code: for(var key in window.localStorage.keys) { String value = window.localStorage[key]; // more code } There's more... 
Local storage can be disabled (by user action, or via an installed plugin or extension), so we must alert the user when this needs to be enabled; we can do this by catching the exception that occurs in this case: try { window.localStorage[key] = data; } on Exception catch (ex) { window.alert("Data not stored: Local storage is disabled!"); } Local storage is a simple key-value store and does have good cross-browser coverage. However, it can only store strings and is a blocking (synchronous) API; this means that it can temporarily pause your web page from responding while it is doing its job storing or reading large amounts of data such as images. Moreover, it has a space limit of 5 MB (this varies with browsers); you can't detect when you are nearing this limit and you can't ask for more space. When the limit is reached, an error occurs so that the user can be informed. These properties make local storage only useful as a temporary data storage tool; this means that it is better than cookies, but not suited for a reliable, database kind of storage. Web storage also has another way of storing data called sessionStorage used in the same way, but this limits the persistence of the data to only the current browser session. So, data is lost when the browser is closed or another application is started in the same browser window. Using an application cache to work offline When, for some reason, our users don't have web access or the website is down for maintenance (or even broken), our web-based applications should also work offline. The browser cache is not robust enough to be able to do this, so HTML5 has given us the mechanism of ApplicationCache. This cache tells the browser which files should be made available offline. The effect is that the application loads and works correctly, even when the user is offline. The files to be held in the cache are specified in a manifest file, which has a .mf or .appcache extension. How to do it... Look at the appcache application; it has a manifest file called appcache.mf. The manifest file can be specified in every web page that has to be cached. This is done with the manifest attribute of the <html> tag: <html manifest="appcache.mf"> If a page has to be cached and doesn't have the manifest attribute, it must be specified in the CACHE section of the manifest file. The manifest file has the following (minimum) content: CACHE MANIFEST # 2012-09-28:v3  CACHE: Cached1.html appcache.css appcache.dart http://dart.googlecode.com/svn/branches/bleeding_edge/dart/client/dart.js  NETWORK: *  FALLBACK: / offline.html Run cached1.html. This displays the This page is cached, and works offline! text. Change the text to This page has been changed! and reload the browser. You don't see the changed text because the page is created from the application cache. When the manifest file is changed (change version v1 to v2), the cache becomes invalid and the new version of the page is loaded with the This page has been changed! text. The Dart script appcache.dart of the page should contain the following minimal code to access the cache: main() { new AppCache(window.applicationCache); }  class AppCache { ApplicationCache appCache;  AppCache(this.appCache) {    appCache.onUpdateReady.listen((e) => updateReady());    appCache.onError.listen(onCacheError); }  void updateReady() {    if (appCache.status == ApplicationCache.UPDATEREADY) {      // The browser downloaded a new app cache. Alert the user:      appCache.swapCache();      window.alert('A new version of this site is available. 
Please reload.');    } }  void onCacheError(Event e) {      print('Cache error: ${e}');      // Implement more complete error reporting to developers } } How it works... The CACHE section in the manifest file enumerates all the entries that have to be cached. The NETWORK: and * options mean that to use all other resources the user has to be online. FALLBACK specifies that offline.html will be displayed if the user is offline and a resource is inaccessible. A page is cached when either of the following is true: Its HTML tag has a manifest attribute pointing to the manifest file The page is specified in the CACHE section of the manifest file The browser is notified when the manifest file is changed, and the user will be forced to refresh their cached resources. Adding a timestamp and/or a version number such as # 2014-05-18:v1 works fine. Changing the date or the version invalidates the cache, and the updated pages are again loaded from the server. To access the browser's app cache from your code, use the window.applicationCache object. Make an object of the AppCache class, and alert the user when the application cache has become invalid (the status is UPDATEREADY) by defining an onUpdateReady listener. There's more... The other known states of the application cache are UNCACHED, IDLE, CHECKING, DOWNLOADING, and OBSOLETE. To log all these cache events, you could add the following listeners to the appCache constructor: appCache.onCached.listen(onCacheEvent); appCache.onChecking.listen(onCacheEvent); appCache.onDownloading.listen(onCacheEvent); appCache.onNoUpdate.listen(onCacheEvent); appCache.onObsolete.listen(onCacheEvent); appCache.onProgress.listen(onCacheEvent); Provide an onCacheEvent handler using the following code: void onCacheEvent(Event e) {    print('Cache event: ${e}'); } Preventing an onSubmit event from reloading the page The default action for a submit button on a web page that contains an HTML form is to post all the form data to the server on which the application runs. What if we don't want this to happen? How to do it... Experiment with the submit application by performing the following steps: Our web page submit.html contains the following code: <form id="form1" action="http://www.dartlang.org" method="POST"> <label>Job:<input type="text" name="Job" size="75"></input>    </label>    <input type="submit" value="Job Search">    </form> Comment out all the code in submit.dart. Run the app, enter a job name, and click on the Job Search submit button; the Dart site appears. When the following code is added to submit.dart, clicking on the Job Search submit button no longer opens the Dart site: import 'dart:html';  void main() { querySelector('#form1').onSubmit.listen(submit); }  submit(Event e) {      e.preventDefault(); // code to be executed when button is clicked  } How it works... In the first step, when the submit button is pressed, the browser sees that the method is POST. This method collects the data and names from the input fields and sends it to the URL specified in action to be executed, which only shows the Dart site in our case. To prevent the form from posting the data, make an event handler for the onSubmit event of the form. In this handler code, calling e.preventDefault(); as the first statement will cancel the default submit action. However, the rest of the submit event handler (and even the same handler of a parent control, should there be one) is still executed on the client side. 
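For readers working outside Dart, the same technique applies in plain JavaScript. The following sketch is not part of the original recipe; it assumes the same submit.html markup with id="form1" and an input named Job, and cancels the default submission in the same way:

document.querySelector('#form1').addEventListener('submit', function (e) {
  e.preventDefault(); // cancel the default POST to the action URL
  // handle the entered data on the client side instead
  var job = document.querySelector('input[name="Job"]').value;
  console.log('Searching for: ' + job);
});

As in the Dart version, the rest of the handler still runs on the client side; only the browser's default submit action is suppressed.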
Summary In this article we learned how to handle web applications, sanitize a HTML, use a browser's local storage, use application cache to work offline, and how to prevent an onSubmit event from reloading a page. Resources for Article: Further resources on this subject: Handling the DOM in Dart [Article] QR Codes, Geolocation, Google Maps API, and HTML5 Video [Article] HTML5 Game Development – A Ball-shooting Machine with Physics Engine [Article]

A typical sales cycle and territory management

Packt
08 Oct 2014
6 min read
In this article by Mohith Shrivastava, the author of Salesforce Essentials for Administrators, we will look into the typical sales cycle and the territory management feature of Salesforce. (For more resources related to this topic, see here.) A typical sales cycle starts from a campaign. An example of a campaign can be a conference or a seminar where marketing individuals explain the product offering of the company to their prospects. Salesforce provides a campaign object to store this data. A campaign may involve different processes, and the campaign management module of Salesforce is simple. A matured campaign management system will have features such as sending e-mails to campaign members in bulk, and tracking how many people really opened and viewed the e-mails, and how many of them responded to the e-mails. Some of these processes can be custom built in Salesforce, but out of the box, Salesforce has a campaign member object apart from the campaign where members are selected by marketing reps. Members can be leads or contacts of Salesforce. A campaign generates leads. Leads are the prospects that have shown interest in the products and offerings of the company. The lead management module provides a lead object to store all the leads in the system. These prospects are converted into accounts, contacts, and opportunities when the prospect qualifies as an account. Salesforce provides a Lead Convert button to convert these leads into accounts, contacts, and opportunities. Features such as Web-to-Lead provided by the platform are ideal for capturing leads in Salesforce. Accounts can be B2B (business to business) or B2C (business to consumer). B2C in Salesforce is represented as person accounts. This is a special feature that needs to be enabled by a request from Salesforce. It's a record type where person accounts fields are from contacts. Contacts are people, and they are stored in objects in the contact object. They have a relationship with accounts (a relationship can be both master-detail as well as lookup.) An opportunity generates revenue if its status is closed won. Salesforce provides an object known as opportunities to store a business opportunity. The sales reps typically work on these opportunities, and their job is to close these deals and generate revenue. Opportunities have a stage field and stages start from prospecting to closed won or closed lost. Opportunity management provided by Salesforce consists of objects such as opportunity line items, products, price books, and price book entries. Products in Salesforce are the objects that are used as a lookup to junction objects such as an opportunity line item. An opportunity line item is a junction between an opportunity and a line item. Price books are price listings for products in Salesforce. A product can have a standard or custom price book. Custom price books are helpful when your company is offering products at discounts or varied prices for different customers based on market segmentation. Salesforce also provides a quote management module that consists of a quote object and quote line items that sales reps can use to send quotes to customers. The Order management module is new to the Salesforce CRM, and Salesforce provides an object known as orders that can generate an order from the draft state to the active state on accounts and contracts. Most companies use an ERP such as a SAP system to do order management. 
However, now, Salesforce has introduced this new feature, so on closed opportunities from accounts, you can create orders. The following screenshot explains the sales process and the sales life cycle from campaign to opportunity management:   To read more, I would recommend that you go through the Salesforce documentation available at http://www.salesforce.com/ap/assets/pdf/cloudforce/SalesCloud-TheSalesCloud.pdf. Territory management This feature is very helpful for organizations that run sales processes by sales territories. Let's say you have an account and your organization has a private sharing model. The account has to be worked on by sales representatives of the eastern as well as western regions. Presently, the owner is the sales rep of the eastern region, and because of the private sharing model, the sales rep of the western region will not have access. We could have used sharing rules to provide access, but the challenge is also to do a forecasting of the revenue generated from opportunities for both reps, and this is where writing sharing rules simply won't help us. We need the territory management feature of Salesforce for this, where you can retain opportunities and transfer representatives across territories, draw reports based on territories, and share accounts across territories extending the private sharing model. The key feature of this module is that it works with customizable forecasting only. Basic configurations We will explore the basic configuration needed to set up territory management. This feature is not enabled in your instance by default. To enable it, you have to log a case with Salesforce and explain its need. The basic navigation path for the territories feature is Setup | Manage Users | Manage Territories. Under Manage Territories, we have the settings to set the default access level for accounts, contacts, opportunities, and cases. This implies that when a new territory is created, the access level will be based on the default settings configured. There is a checkbox named Forecast managers can manage territories. Once checked, forecast managers can add accounts to territories, manage account assignment rules, and manage users. Under Manage Territories | Settings, you can see two different buttons, which are as follows: Enable Territory Management: This button forecasts hierarchy, and data is copied to the territory hierarchy. Each forecast hierarchy role will have a territory automatically created. Enable Territory Management from Scratch: This is for new organizations. On clicking this button, the forecast data is wiped, and please note that this is irreversible. Based on the role of the user, a territory is automatically assigned to the user. On the Territory Details page, one can use Add Users to assign users to territories. Account assignment rules To write account assignment rules, navigate to Manage Territories | Hierarchy. Select a territory and click on Manage Rules in the list related to the account assignment rules. Enter the rule name and define the filter criteria based on the account field. You can apply these rules to child territories if you check the Apply to Child Territories checkbox. There is a lot more to explore on this topic, but that's beyond the scope of this book. To explore more, I would recommend that you read the documentation from Salesforce available at https://na9.salesforce.com/help/pdfs/en/salesforce_territories_implementation_guide.pdf. 
Summary In this article, we have looked at how we can use the territory management feature of Salesforce. We have also described a typical sales cycle. Resources for Article: Further resources on this subject: Introducing Salesforce Chatter [article] Salesforce CRM Functions [article] Configuration in Salesforce CRM [article]

Introduction to Oracle BPM

Packt
25 Sep 2014
23 min read
In this article, Vivek Acharya, the author of the book, Oracle BPM Suite 12c Modeling Patterns, has discussed the various patterns in Oracle. The spectrum of this book covers patterns and scenarios from strategic alignment (the goals and strategy model) to flow patterns. From conversation, collaboration, and correlation patterns to exception handling and management patterns; from human task patterns and business-IT collaboration to adaptive case management; and many more advance patterns and features have been covered in this book. This is an easy-to-follow yet comprehensive guide to demystify the strategies and best practices for developing BPM solutions on the Oracle BPM 12c platform. All patterns are complemented with code examples to help you better discover how patterns work. Real-life scenarios and examples touch upon many facets of BPM, where solutions are a comprehensive guide to various BPM modeling and implementation challenges. (For more resources related to this topic, see here.) In this section, we will cover dynamic task assignment patterns in detail, while a glimpse of the strategic alignment pattern is offered. Dynamic task assignment pattern Business processes need human interactions for approvals, exception management, and interaction with a running process, group collaboration, document reviews or case management, and so on. There are various requirements for enabling human interaction with a running BPMN process, which are accomplished using human tasks in Oracle BPM. Human tasks are implemented by human workflow services that are responsible for the routing of tasks, assignment of tasks to users, and so on. When a token arrives at the user task, control is passed from the BPMN process to Oracle Human Workflow, and the token remains with human tasks until it's completed. As callbacks are defined implicitly, once the workflow is complete, control is returned back to the user task and the token moves ahead to subsequent flow activities. When analyzing all the human task patterns, it's evident that it comprises various features, such as assignment patterns, routing patterns, participant list builder patterns, and so on. Oracle BPM offers these patterns as a template for developers, which can be extended. The participants are logically grouped using stages that define the assignment with a routing slip. The assignment can be grouped based on assignment modeling patterns. It could be sequential, parallel, or hybrid. Routing of the tasks to participants is governed by the routing pattern, which is a behavioral pattern. It defines whether one participant needs to act on a task, many participants need to act in sequence, all the participants need to act in parallel, or participants need not act at all. The participants of the task are built using participant list building patterns, such as approval groups, management chains, and so on. The assignment of the participant is performed by a task assignment mechanism, such as static, dynamic, or rule-based assignment. In this section, we will be talking about the task assignment pattern. The intent of the task assignment pattern is the assignment of human tasks to user(s), group(s), and/or role(s). Essentially, it's being motivated with the fact that you need to assign participants to tasks either statically, dynamically, or on derivations based on business rules. We can model and define task assignment at runtime and at design time. However, the dynamic task assignment pattern deals with the assignment of tasks at runtime. 
This means the derivation of task participants will be performed at runtime when the process is executing. This pattern is very valuable in business scenarios where task assignment cannot be defined at design time. There are various business requirements that need task assignments based on the input data of the process. As task assignment will be based on the input data of the process, task assignments will be evaluated at runtime; hence, these are termed as dynamic task assignments. The following pattern table gives the details of the dynamic task assignment pattern: Signature Dynamic Task Assignment Pattern Classification Human Task Pattern Intent To dynamically assign tasks to the user/group/roles. Motivation Evaluating task assignment at runtime. Applicability Dynamic task assignment is applicable where the distribution of work is required. It's required when you need multilevel task assignment or for the evaluation of users/roles/groups at runtime. Implementation Dynamic task assignment can be implemented using complex process models, business rules, organization units, organizational roles (parametric roles), external routing, and so on. Known issues What if the users that are derived based on the evaluation of dynamic conditions have their own specific preferences? Known solution Oracle BPM offers rules that can be defined in the BPM workspace. Defining the use case We will walk though an insurance claim process that has two tasks to be performed: the first task is verification, and the other is the validation of claimant information. The insurance claim needs to be verified by claim agents. Agents are assigned verification tasks based on the location and the business organization unit they belong to. An agent's skills in handling claim cases are also considered based on the case's sensitivity. If the case's sensitivity requires an expert agent, then an expert agent assigned to that organization unit and belonging to that specific location must be assigned. Think of this from the perspective of having the same task being assigned to different users/groups/roles based on different criteria. For example, if the input payload has Regular as the sensitivity and EastCoast as the organization unit, then the verification task needs to be assigned to an EastCoast-Regular agent. However, if the input payload has EastCoast-Expert as the sensitivity and EastCoast as the organization unit, then the verification task needs to be assigned to an EastCoast-Expert agent. Then, we might end up with multiple swimlanes to fulfill such a task assignment model. However, such models are cumbersome and complicated, and the same activities will be duplicated, as you can see in the following screenshot: The routing of tasks can be modeled in human tasks as well as in the BPMN process. It depends purely on the business requirements and on various modeling considerations. For instance, if you are looking for greater business visibility and if there is a requirement to use exceptional handling or something similar, then it's good to model tasks in the BPMN process itself. However, if you are looking for dynamism, abstraction, dynamic assignment, dynamic routing, rule-driven routing, and so on, then modeling task routing in human task assignment and routing is an enhanced modeling mechanism. We need an easy-to-model and dynamic technique for task assignment. 
There are various ways to achieve dynamic task assignment: Business rules Organization units Organizational roles (parametric roles) We can use business rules to define the condition(s), and then we can invoke various seeded List Builder functions to build the list of participants. This is one way of enabling dynamic task assignment using business rules. Within an organization, departments and divisions are represented by organizational units. You can define the hierarchy of the organization units, which corresponds to your organizational structure. User(s), group(s), role(s), and organizational roles can be assigned as members to organization units. When a task belonging to a process is associated with an organization unit, then the task is available for the members of that organization unit. So, using the scenario we discussed previously, we can define organization units and assign members to them. We can then associate the process with the organization unit, which will result in task assignment to only those members who belong to the organization that is associated with the process. Users have various properties (attributes) defined in LDAP or in the Oracle Internet Directory (OID); however, there are cases where we need to define additional properties for the users. Most common among them is the definition of roles and organization units as properties of the user. There are cases where we don't have the flexibility to extend an enterprise LDAP/OID for adding these properties. What could be the solution in this case? Extended user properties are the solution in such scenarios. Using an admin user, we can define extended properties for the users in the BPM workspace. Once the extended properties are defined, they can be associated with the users. Let's extend the scenario we were talking about previously. Perform the following tasks: Define the agent group in the myrealm weblogic, which contains users as its members. Define two organization units, one for EastCoast and another for West Coast. The users of the agent group will be assigned as members to organization units. Define extended properties. Associate extended properties with the users (these users are members of the agent group). Define organizational roles, and create a process with a user task that builds the participant list using the organizational role defined previously. The features of the use case are as follows: A validation task gets assigned to the insurance claim agent group (the ClaimAgents group) The assignment of the task to be used is dynamically evaluated based on the value of the organization unit being passed as input to the process Once the validation task is acted upon by the participants (users), the process token reaches the verification task The assignment of the verification task is based on the organization role (the parametric role) being defined Assigning users to groups In the course of achieving the use case scenario, we will execute the following tasks to define a group, organization units, extended properties, and parametric roles: Log in to the Oracle weblogic console and navigate to myrealm. Click on the Users & Groups tab and select the Groups tab. Click on the new group with the name ClaimAgents and save the changes. Click on Users; create the users (anju, rivi, and buny) and assign them to the ClaimAgents group as members. Click on the following users and assign them to the ClaimAgents group: jausten, jverne, mmitch, fkafka, achrist, cdickens, wshake, and rsteven. Click on Save. 
We have defined and assigned some new users and some of the already existing users to the ClaimAgents group. Now, we will define the organization units and assign users as members to the organization units. Perform the following steps: Log in to the Oracle BPM workspace to define the organization units. Go to Administration | Organization | Organization Units. Navigate to Create Organization Unit | Root Organization Unit in order to create a parent organization unit. Name it Finance. Go to Create Organization Unit | Child Organization Unit in order to create child organization units. Name it EastCoastFinOrg and WestCoastFinOrg: Assign the following users as members to the to the EastCoastFinOrg organization unit: Mmitch, fkafka, jverne, jausten, achrist, and anju Assign the following users as members to the WestCoastFinOrg organization unit: rivi, buny, cdickens, wshake, rsteven, and jstein Defining extended properties The users are now assigned as members to organization units. Now, it's time to define the extended properties. Perform the following steps: In the BPM workspace, navigate to Administration | Organization | Extended User Properties. Click on Add Property to define the extended properties, which are as follows: Property Name Value Sensitivity Expert, regular Location FL, CA, and TX Add Users should be clicked on to associate the properties of users in the Map Properties section on the same page, as shown previously. Use the following mapping to create the map of extended properties and users: User Name Sensitivity Location mmitch Regular FL fkafka Regular FL jverne Regular FL jausten Expert FL achrist Expert FL anju Expert FL rivi Expert CA buny Expert CA cdickens Regular CA wshake Regular CA rsteven Regular CA jstein Expert CA Defining the organization role We will define the organizational role (the parametric role) and use it for task assignment in the case/BPM process. The organizational role (parametric role) as task assignment pattern is used for dynamic task assignment because users/roles are assigned to the parametric role based on the evaluation of the condition at runtime. These conditions are evaluated at runtime for the determination of users/roles based on organization units and extended properties. Perform the following steps: Go to Administration | Organization | Parametric Roles. Click on Create Parametric Role to create a role and name it AgentRole. Click on Add Parameter to define the parameters for the parametric role and name them as Sensitivity of string type and Location of string type. You can find the Sensitivity and Location extended properties in the Add Condition drop-box. Select Grantees as Group; browse and select the ClaimAgents group from the LDAP. Select Sensitivity and click on the + sign to include the property in the condition. For the sensitivity, use the Equals operator and select the parametric role's input parameter from the drop-down list. This parameter will be listed as $Sensitivity. Similarly, select $Location and click on the + sign to include the property in the condition. For the location, use the Equals operator and select the parametric role's input parameter from the drop-down list. This parameter will be listed as $Location. This is shown in the following screenshot: Click on Save to apply the changes. As we can see in the preceding screenshot, the organization unit is also available as a property that can be included in the condition. We have configured the parametric role with a specific condition. 
The admin can log in and change the conditions as and when the business requires (changes at runtime, which bring in agility and dynamism). Implementing the BPMN process We can create a BPM process which has a user task that builds a participant's list using the parametric role, as follows: Create a new BPM project with the project name as DynamicTaskAssignment. Create the composite with a BPMN component. Create an asynchronous BPMN process and name it as VerificationProcess. Create a business object, VerificationProcessBO, based on the InsuranceClaim.xsd schema. The schema (XSD) can be found in the DynamicTaskAssignment project in this article. You can navigate to the schemas folder in the project to get the XSD. Define the process input argument as VerificationProcessIN, which should be based on the VerificationProcessBO business object. Similarly, define the process output argument as VerificationProcessOUT, based on VerificationProcessBO. Define two process data objects as VerificationProcessINPDO and VerificationProcessOUTPDO, which are based on the VerificationProcessBO business object. Click on the message start event's data association and perform the data association as shown in the following screenshot. As we can see, the process input is assigned to a PDO; however, the organization unit element from the process input is assigned to the predefined variable, organizationUnit. This is shown in the following screenshot: The validation task is implemented to demonstrate dynamic task assignment using multilevel organization units. We will check the working of this task when we perform the test. Perform the following steps: Drag-and-drop a user task between the message start event and the message end event and name it ValidationTask. Go to the Implementation tab in the user task property dialog and click on Add to create a human task; name it ValidationTask. Enter the title of the task with outcomes as Accept and Reject. Let the input parameter for ValidationTask be VerificationProcessINPDO. Click on OK to finish the task configuration. This will bring you back to the ValidationTask property dialog box. Perform the necessary data association. Click on ValidationTask.task to open the task metadata, and go to the Assignment section in the task metadata. Click on the Participant block. This will open the participant type dialog box. Select the Parallel Routing Pattern. Select the list building pattern as Lane Participant ( the current lane). For all other values, use the default settings. Click on Save All. Now, we will create a second task in the same process, which will be used to demonstrate dynamic task assignment using the organization role (the parametric role). Perform the following steps: Drag-and-drop a user task between Validation Task and the message end event in the verification process and name it as Verification Task. Go to the Implementation tab in the user task property dialog and click on Add to create a human task. Enter the title of the task as Verification Task, with the Accept and Reject outcomes. Let the input parameter for the Verification Task be VerificationProcessINPDO. Click on OK to finish the task configuration. This will bring you back to VerificationTask property dialog. Click on VerificationTask.task to open the task metadata and go to the Assignment section in the task metadata. Click on the Participant block. This will open the Participant Type dialog box. Select parallel routing pattern. Select list building pattern as Organization Role. 
This is shown in the following screenshot: Enter the name of the organizational role as AgentRole, which is the organizational role we have defined previously: Along with the organizational role, enter the input arguments for the organizational role. For the Sensitivity input argument of the parametric role, use the XPath expression to browse the input payload and select Sensitivity Element, as shown previously. Similarly, for the Location argument, select State Element from the input payload. Click on Save All. This article contains downloads for DynamicTaskAssignment, which we have already created to facilitate verification. If you have not created the project by following the steps mentioned previously, you can use the project delivered in the download. Deploy the project to the weblogic server. Log in to the BPM workspace as the admin user and go to Administrator | Organization units. Click on roles to assign the ClaimAgents group to the DynamicTaskAssignmentPrj.ClaimAgents role. Click on Save. Testing the dynamic task assignment pattern Log in to the EM console as an admin user to test the project; however, you can use any tool of your choice to test it. We can get the test data from the project itself. Navigate to DynamicTaskAssignment | SOA | Testsuites | TestData12c.xml to find the test data file. Use the TestData.xml file if you are going to execute the project in the 11g environment. The test data contains values where the sensitivity is set to Expert, organization unit is set to Finance/EastCoastFinOrg, and state is set to CA. The following are the test results of the validation task: Org Unit Input Value Validation Task Assignment Finance/EastCoastFinOrg mmitch, fkafka, jverne, jausten, achrist, anju, and jstein Finance jausten, jverne, mmitch, fkafka, achrist, cdickens, wshake, rsteven, rivi, buny, jstein, and anju Note that if you pass the organization unit as Finance, then all the users belonging to the finance organization's child organization will receive the task. However, if you pass the organization unit as Finance/EastCoastFinOrg (EastCoastFinOrg is a child organization in the finance parent organization), then only those users who are in the EastCoastFinOrg child organization will receive the task. The process flow will move from the validation task to the verification task only when 50 percent of the participants act on the validation task, as parallel routing pattern is defined with the voting pattern of 50 percent. The following are the test results for Verification Task: Input Verification Task Sensitivity: Expert buny, rivi, and jstein Location: CA N/A Based on the extended properties mapping, the verification task will get assigned to the users buny, rivi, and jstein. Addressing known issues What if the users are derived based on the evaluation of dynamic conditions? To address this, Oracle BPM offers rules that can be defined in the BPM workspace. We will extend the use case we have defined. When we execute the verification process, the verification task is assigned to the buny, rivi, and jstein users. How will you address a situation where the user, buny, wants the task to be reassigned or delegated to someone when he is in training and cannot act on the assigned tasks? Perform the following steps: Log in to the Oracle BPM workspace as the user (buny) for whom we want to set the preference. Go to Preferences | Rules, as shown in the following screenshot: Click on + to create a new rule. Enter the name of the rule as TrainingRule. 
Specify the rule condition for the use case user (buny). We have defined the rule condition that executes the rule when the specified dates are met and the rule is applicable to all the tasks: Specify the dates (Start Date and End Date) between which we want the rule to be applicable. (These are the dates on which the user, buny, will be in training). Define the rule condition either by selecting all the tasks or by specifying matching criteria for the task. Hence, when the task condition is evaluated to True, the rule gets executed. Reassign the task to the jcooper user when the rule gets evaluated to True. The rule action constitutes of task reassignment or delegation, or we can specify a rule that takes no action. The rule action is to change task assignment or to allow someone else to perform on behalf of the original assignee, as described in the following points: Select reassignment if you want the task to be assigned to another assignee, who can work on the task as if the task was assigned to him/her Select delegation if you want the assignee to whom the task is delegated to work on behalf of the original assignee Execute the DynamicTaskAssignment project to run the verification process. When the verification task gets executed, log in to the EM console and check the process trace. As we can see in the following screenshot, the task gets reassigned to the jcooper user. We can log in to the BPM workspace as the jcooper user and can also verify the task in the task list. This is shown in the following screenshot: There's more The dynamic task assigned pattern brings in dynamism and agility to the business process. In the preceding use case, we have passed organization unit as the process input parameter. However, with Oracle BPM 12c, we can define business parameters and use them to achieve greater flexibility. The business parameters allow business owners to change the business parameter's value at runtime without changing the process, which essentially allows you to change the process without the inclusion of IT. Basically, business parameters are already used in the process and they are driving the process flow. Changing the value of business parameters at runtime is like changing the process flow at execution. For the preceding use case, the insurance input schema (parameter) has an organization unit that is passed when invoking the process. However, what if there are no placeholders to pass the organization unit in the input parameter? We can define a business parameter in JDeveloper and assign the value to the organization unit. This is shown in the following screenshot: Perform the following steps to deploy the project. Open JDeveloper 12c and expand the project to navigate to Organization | Business Parameters. Define a business parameter with the name ORGUNIT and of the type String; enter a default value and save the changes. Go to the process message start event and navigate to the Implementation tab. Click on data association and assign business parameters to the predefined variable (organization unit). Save and deploy the project. Using the shown mechanism, a developer can enable business parameters; technically, the BPMN engine executes the following function to get the business parameter value: bpmn:getBusinessParameter('Business Parameter') Similarly, a process analyst can click on the BPM composer application and bring about changes in the process to define business parameters and changes in the process. 
Process asset manager (PAM) will take care of asset sharing and collaboration. Business owners can log in to the BPM workspace application and change the business parameters by navigating to the following path to edit the parameter values to drive/modify the process flow: Administration | Organization | Business Parameter Strategic alignment pattern BPMN needs a solution to align business goals, objectives, and strategies. It also needs a solution to allow business analysts and function/knowledge workers to create business architecture models that drive the IT development of a process, which remains aligned with the goals and objectives. Oracle BPM 12c offers business architecture, a methodology to perform high-level analysis of business processes. This methodology adopts a top-down approach for discovering organizational processes, defining goals and objectives, defining strategies and mapping them to goals and objectives, and reporting the BA components. All these details are elaborated exclusively with use cases and demo projects in the book. The following pattern table highlights facts around the strategic alignment pattern: Signature Strategic Alignment Pattern Classification Analysis and Discovery Pattern Intent To offer a broader business model (an organizational blueprint) that ensures the alignment of goals, objectives, and strategies with organizational initiatives. Motivation A BPMN solution should offer business analysts and functional users with a set of features to analyze, refine, define, optimize, and report business processes in the enterprise. Applicability Such a solution will only empower businesses to define models based on what they actually need, and reporting will help to evaluate the performances. This will then drive the IT development of the processes by translating requirements into BPMN processes and cases. Implementation Using BPM composer, we can define goals, objectives, strategies, and value chain models. We can refer to BPMN processes from the value chain models. Goals are broken down into objects, which are fulfilled by strategies. Strategies are implemented by value chains, which can be decomposed into value chains/business processes. Known issues The sharing of assets between IT developers, business architects, and process analysts. Known solution Oracle BPM 12c offers PAM, which is a comprehensive solution, and offers seamless asset sharing and collaboration between business and IT. This book covers PAM exclusively. Summary In this article, we have just introduced the alignment pattern. However, in the book, alignment pattern is covered in detail. It shows how IT development and process models can be aligned with organization goals. While performing alignments, we will learn enterprise maps, strategy models, and value chain models. We will discover how models are created and linked to an organization. Capturing the business context showcases the importance of documentation in the process model phase. Different document levels and their methods of definition are discussed along with their usage. Further, we learned how to create different reports based on the information we have documented in the process, such as RACI reports and so on. The process player demonstration showcased how process behavior can be emulated in a visual representation, which allows designers and analysts to test and revise a process without deploying it. 
This infuses a preventive approach and also enables organizations to quickly find the loopholes, making them more responsive to challenges. We also elaborated on how round trips and business-IT collaboration facilitates the storing, sharing, and collaboration of process assets and business architecture assets. While doing so, we witnessed PAM and subversion as well as learnt versioning, save/update/commit, difference and merge, and various other activities that empower developers and analysts to work in concert. In this book, we will learn various patterns in a similar format. Each pattern pairs the classic problem/solution format, which includes signature, intent, motivation, applicability, and implementation; the implementation is demonstrated via a use case scenario along with a BPMN application, in each chapter. It's a one-stop title to learn about patterns, their applicability and implementation, as well as BPMN features. Resources for Article: Further resources on this subject: Oracle GoldenGate- Advanced Administration Tasks - I [article] Oracle B2B Overview [article] Oracle ADF Essentials – Adding Business Logic [article]

Exploring the Usages of Delphi

Packt
24 Sep 2014
12 min read
This article, written by Daniele Teti, the author of Delphi Cookbook, explains the process of writing enumerable types. It also discusses the steps to customize FireMonkey controls. (For more resources related to this topic, see here.)

Writing enumerable types

When the for...in loop was introduced in Delphi 2005, the concept of enumerable types was also introduced into the Delphi language. As you know, there are some built-in enumerable types. However, you can create your own enumerable types using a very simple pattern. To make your container enumerable, implement a single method called GetEnumerator, which must return a reference to an object, interface, or record that implements the following three methods and one property (in the sample, the element to enumerate is TFoo):

   function GetCurrent: TFoo;
   function MoveNext: Boolean;
   property Current: TFoo read GetCurrent;

There are a lot of samples related to standard enumerable types, so in this recipe you'll look at some not-so-common utilizations.

Getting ready

In this recipe, you'll see a file enumerable function as it exists in other, mostly dynamic, languages. The goal is to enumerate all the rows in a text file without having to explicitly open, read, and close the file, as shown in the following code:

var
  row: String;
begin
  for row in EachRows('....myfile.txt') do
    WriteLn(row);
end;

Nice, isn't it? Let's start…

How to do it...

We have to create an enumerable function result. The function simply returns the actual enumerable type. This type is not freed automatically by the compiler, so you have to use a value type or an interfaced type. For the sake of simplicity, let's make it return a record type:

function EachRows(const AFileName: String): TFileEnumerable;
begin
  Result := TFileEnumerable.Create(AFileName);
end;

The TFileEnumerable type is defined as follows:

type
  TFileEnumerable = record
  private
    FFileName: string;
  public
    constructor Create(AFileName: String);
    function GetEnumerator: TEnumerator<String>;
  end;

. . .

constructor TFileEnumerable.Create(AFileName: String);
begin
  FFileName := AFileName;
end;

function TFileEnumerable.GetEnumerator: TEnumerator<String>;
begin
  Result := TFileEnumerator.Create(FFileName);
end;

No logic here; this record is required only because you need a type that has a GetEnumerator method defined. This method is called automatically by the compiler when the type is used on the right side of the for..in loop.

An interesting thing happens in the TFileEnumerator type, the actual enumerator, declared in the implementation section of the unit. Remember, this object is automatically freed by the compiler because it is the return value of the GetEnumerator call:

type
  TFileEnumerator = class(TEnumerator<String>)
  private
    FCurrent: String;
    FFile: TStreamReader;
  protected
    constructor Create(AFileName: String);
    destructor Destroy; override;
    function DoGetCurrent: String; override;
    function DoMoveNext: Boolean; override;
  end;

{ TFileEnumerator }

constructor TFileEnumerator.Create(AFileName: String);
begin
  inherited Create;
  FFile := TFile.OpenText(AFileName);
end;

destructor TFileEnumerator.Destroy;
begin
  FFile.Free;
  inherited;
end;

function TFileEnumerator.DoGetCurrent: String;
begin
  Result := FCurrent;
end;

function TFileEnumerator.DoMoveNext: Boolean;
begin
  Result := not FFile.EndOfStream;
  if Result then
    FCurrent := FFile.ReadLine;
end;

The enumerator inherits from TEnumerator<String> because each row of the file is represented as a string. This class also gives us a mechanism to implement the required methods.
The DoGetCurrent method (called internally by the TEnumerator<T>.GetCurrent method) returns the current line. The DoMoveNext method (called internally by the TEnumerator<T>.MoveNext method) returns True if there are more lines to read in the file and False otherwise. Remember that this method is called before the first call to the GetCurrent method. After the first call to the DoMoveNext method, FCurrent is properly set to the first row of the file. The compiler generates a piece of code similar to the following pseudo code:

it = typetoenumerate.GetEnumerator;
while it.MoveNext do
begin
  S := it.Current;
  //do something useful with string S
end
it.free;

There's more…

Enumerable types are really powerful and help you to write less, and less error-prone, code. There are some shortcuts to iterate over in-place data without even creating an actual container. If you have a bunch of integers, or if you want to create an ad hoc for loop over some values of a data type, you can use the new TArray<T> type as shown here:

for i in TArray<Integer>.Create(2, 4, 8, 16) do
  WriteLn(i); //write 2 4 8 16

TArray<T> is a generic type, so the same works also for strings:

for s in TArray<String>.Create('Hello', 'Delphi', 'World') do
  WriteLn(s);

It can also be used for Plain Old Delphi Objects (PODOs) or controls:

for btn in TArray<TButton>.Create(btn1, btn31, btn2) do
  btn.Enabled := false;

See also

http://docwiki.embarcadero.com/RADStudio/XE6/en/Declarations_and_Statements#Iteration_Over_Containers_Using_For_statements: This Embarcadero documentation provides a detailed introduction to enumerable types.

Giving a new appearance to the standard FireMonkey controls using styles

Since Version XE2, RAD Studio includes FireMonkey. FireMonkey is an amazing library. It is a really ambitious target for Embarcadero, but it's important for its long-term strategy. VCL is and will remain a Windows-only library, while FireMonkey has been designed to be completely OS and device independent. You can develop one application and compile it anywhere (if anywhere is contained in Windows, OS X, Android, and iOS; let's say that is a good part of anywhere).

Getting ready

A styled component doesn't know how it will be rendered on the screen; its style does. By changing the style, you can change the appearance of the component without changing its code. The relation between the component code and the style is similar to the relation between HTML and CSS: one is the content and the other is the presentation. In terms of FireMonkey, the component code contains the actual functionality the component has, but the appearance is completely handled by the associated style. All the TStyledControl child classes support styles.

Let's say you have to create an application to find a holiday house for a travel agency. Your customer wants a nice-looking application that their customers can use to search for their dream house. Your graphic design department (if present) decided to create a semitransparent look and feel, as shown in the following screenshot, and you have to create such an interface. How do you do that?

This is the UI we want

How to do it…

In this case, you require some step-by-step instructions, so here they are: Create a new FireMonkey desktop application (navigate to File | New | FireMonkey Desktop Application). Drop a TImage component on the form. Set its Align property to alClient, and use the MultiResBitmap property and its property editor to load a nice-looking picture. Set the WrapMode property to iwFit and resize the form to let the image cover the entire form.
Now, drop a TEdit component and a TListBox component over the TImage component. Name the TEdit component as EditSearch and the TListBox component as ListBoxHouses. Set the Scale property of the TEdit and TListBox components to the following values: Scale.X: 2 Scale.Y: 2 Your form should now look like this: The form with the standard components The actions to be performed by the users are very simple. They should write some search criteria in the Edit field and click on Return. Then, the listbox shows all the houses available for that criteria (with a "contains" search). In a real app, you require a database or a web service to query, but this is a sample so you'll use fake search criteria on fake data. Add the RandomUtilsU.pas file from the Commons folder of the project and add it to the uses clause of the main form. Create an OnKeyUp event handler for the TEdit component and write the following code inside it: procedure TForm1.EditSearchKeyUp(Sender: TObject;      var Key: Word; var KeyChar: Char; Shift: TShiftState); var I: Integer; House: string; SearchText: string; begin if Key <> vkReturn then    Exit;   // this is a fake search... ListBoxHouses.Clear; SearchText := EditSearch.Text.ToUpper; //now, gets 50 random houses and match the criteria for I := 1 to 50 do begin    House := GetRndHouse;    if House.ToUpper.Contains(SearchText) then      ListBoxHouses.Items.Add(House); end; if ListBoxHouses.Count > 0 then    ListBoxHouses.ItemIndex := 0 else    ListBoxHouses.Items.Add('<Sorry, no houses found>'); ListBoxHouses.SetFocus; end; Run the application and try it to familiarize yourself with the behavior. Now, you have a working application, but you still need to make it transparent. Let's start with the FireMonkey Style Designer (FSD). Just to be clear, at the time of writing, the FireMonkey Style Designer is far to be perfect. It works, but it is not a pleasure to work with it. However, it does its job. Right-click on the TEdit component. From the contextual menu, choose Edit Custom Style (general information about styles and the style editor can be found at http://docwiki.embarcadero.com/RADStudio/XE6/en/FireMonkey_Style_Designer and http://docwiki.embarcadero.com/RADStudio/XE6/en/Editing_a_FireMonkey_Style). Delphi opens a new tab that contains the FSD. However, to work with it, you need the Structure pane to be visible as well (navigate to View | Structure or Shift + Alt + F11). In the Structure pane, there are all the styles used by the TEdit control. You should see a Structure pane similar to the following screenshot: The Structure pane showing the default style for the TEdit control In the Structure pane, open the editsearchstyle1 node, select the background subnode, and go to the Object Inspector. In the Object Inspector window, remove the content of the SourceLookup property. The background part of the style is TActiveStyleObject. A TActiveStyleObject style is a style that is able to show a part of an image as default and another part of the same image when the component that uses it is active, checked, focused, mouse hovered, pressed, or selected. The image to use is in the SourceLookup property. Our TEdit component must be completely transparent in every state, so we removed the value of the SourceLookup property. Now the TEdit component is completely invisible. Click on Apply and Close and run the application. As you can confirm, the edit works but it is completely transparent. Close the application. 
When you opened the FSD for the first time, a TStyleBook component has been automatically dropped on the form and contains all your custom styles. Double-click on it and the style designer opens again. The edit, as you saw, is transparent, but it is not usable at all. You need to see at least where to click and write. Let's add a small bottom line to the edit style, just like a small underline. To perform the next step, you require the Tool Palette window and the Structure pane visible. Here is my preferred setup for this situation: The Structure pane and the Tool Palette window are visible at the same time using the docking mechanism; you can also use the floating windows if you wish Now, search for a TLine component in the Tool Palette window. Drag-and-drop the TLine component onto the editsearchstyle1 node in the Structure pane. Yes, you have to drop a component from the Tool Palette window directly onto the Structure pane. Now, select the TLine component in the Structure Pane (do not use the FSD to select the components, you have to use the Structure pane nodes). In the Object Inspector, set the following properties: Align: alContents HitTest: False LineType: ltTop RotationAngle: 180 Opacity: 0.6 Click on Apply and Close. Run the application. Now, the text is underlined by a small black line that makes it easy to identify that the application is transparent. Stop the application. Now, you've to work on the listbox; it is still 100 percent opaque. Right-click on the ListBoxHouses option and click on Edit Custom Style. In the Structure pane, there are some new styles related to the TListBox class. Select the listboxhousesstyle1 option, open it, and select its child style, background. In the Object Inspector, change the Opacity property of the background style to 0.6. Click on Apply and Close. That's it! Run the application, write Calif in the Edit field and press Return. You should see a nice-looking application with a semitransparent user interface showing your dream houses in California (just like it was shown in the screenshot in the Getting ready section of this recipe). Are you amazed by the power of FireMonkey styles? How it works... The trick used in this recipe is simple. If you require a transparent UI, just identify which part of the style of each component is responsible to draw the background of the component. Then, put the Opacity setting to a level less than 1 (0.6 or 0.7 could be enough for most cases). Why not simply change the Opacity property of the component? Because if you change the Opacity property of the component, the whole component will be drawn with that opacity. However, you need only the background to be transparent; the inner text must be completely opaque. This is the reason why you changed the style and not the component property. In the case of the TEdit component, you completely removed the painting when you removed the SourceLookup property from TActiveStyleObject that draws the background. As a thumb rule, if you have to change the appearance of a control, check its properties. If the required customization is not possible using only the properties, then change the style. There's more… If you are new to FireMonkey styles, probably most concepts in this recipe must have been difficult to grasp. 
If so, check the official documentation on the Embarcadero DocWiki at the following URL: http://docwiki.embarcadero.com/RADStudio/XE6/en/Customizing_FireMonkey_Applications_with_Styles Summary In this article, we discussed ways to write enumerable types in Delphi. We also discussed how we can use styles to make our FireMonkey controls look better. Resources for Article: Further resources on this subject: Adding Graphics to the Map [Article] Application Performance [Article] Coding for the Real-time Web [Article]
Galera Cluster Basics

Packt
23 Sep 2014
5 min read
This article, written by Pierre MAVRO, the author of MariaDB High Performance, provides a brief introduction to Galera Cluster. (For more resources related to this topic, see here.)

Galera Cluster is a synchronous multimaster solution created by Codership. It's a patch for MySQL and MariaDB with its own commands and configuration. On MariaDB, it has been officially promoted as the MariaDB Cluster. Galera Cluster provides certification-based replication. This means that each node certifies the replicated write set against the other write sets. You don't have to worry about data integrity, as it manages it automatically and very well. Galera Cluster is a young product, but it is ready for production. If you have already heard of MySQL Cluster, don't be confused; this is not the same thing at all. MySQL Cluster is a solution that has not been ported to MariaDB due to its complexity, code, and other reasons. MySQL Cluster provides availability and partitioning, while Galera Cluster provides consistency and availability. Galera Cluster is a simple yet powerful solution.

How Galera Cluster works

The following are some advantages of Galera Cluster:

True multimaster: It can read and write to any node at any time
Synchronous replication: There is no slave lag and no data is lost at node crash
Consistent data: All nodes have the same state (the same data exists between nodes at a point in time)
Multithreaded slave: This enables better performance with any workload
No need for an HA cluster for management: There are no master-slave failover operations (such as Pacemaker, PCR, and so on)
Hot standby: There is no downtime during failover
Transparent to applications: No specific drivers or application changes are required
No read and write splitting needed: There is no need to split the read and write requests
WAN: Galera Cluster supports WAN replication

Galera Cluster needs at least three nodes to work properly (because of the notion of quorum, election, and so on). You can also work with a two-node cluster, but you will need an arbiter (hence three nodes). The arbiter could be placed on another machine available in the same LAN as your Galera Cluster, if possible. The multimaster replication has been designed for InnoDB/XtraDB. It doesn't mean you can't perform replication with other storage engines! If you want to use other storage engines, you will be limited by the following:

They can only write on a single node at a time to maintain consistency.
Replication with other nodes may not be fully supported.
Conflict management won't be supported.

Applications that connect to Galera will only be able to write on a single node (IP/DNS) at the same time. As you can see in the preceding diagram, HTTP and App servers speak directly to their respective DBMS servers without wondering which node of the Galera Cluster they are targeting. Usually, without Galera Cluster, you can use cluster software such as Pacemaker/Corosync to get a VIP on a master node that can switch over in case a problem occurs. There is no need for PCR in that case; a simple VIP with a custom script that checks whether the server is in sync with the others is enough. Galera Cluster uses the following advanced mechanisms for replication: Transaction reordering: Transactions are reordered before commitment to other nodes. This increases the number of successful transaction certification pass tests. Write sets: This reduces the number of operations between nodes by grouping writes in a single write set to avoid too much node coordination.
Database state machine: Read-only transactions are processed on the local node. Write transactions are executed locally on shadow copies and then broadcasted as a read set to the other nodes for certification and commit. Group communication: High-level abstraction for communication between nodes to guarantee consistency (gcomm or spread). To get consistency and similar IDs between nodes, Galera Cluster uses GTID, similar to MariaDB 10 replication. However, it doesn't use the MariaDB GTID replication mechanism at all, as it has its own implementation for its own usage. Galera Cluster limitations Galera Cluster has limitations that prevent it from working correctly. Do not go live in production if you haven't checked that your application is in compliance with the limitations listed. The following are the limitations: Galera Cluster only fully supports InnoDB tables. TokuDB is planned but not yet available and MyISAM is partially supported. Galera Cluster uses primary keys on all your tables (mandatory) to avoid different query execution orders between all your nodes. If you do not do it on your own, Galera will create one. The delete operation is not supported on the tables without primary keys. Locking/unlocking tables and lock functions are not supported. They will be ignored if you try to use them. Galera Cluster disables query cache. XA transactions (global transactions) are not supported. Query logs can't be directed to a table, but can be directed to a file instead. Other less common limitations exist (please refer to the full list if you want to get them all: http://galeracluster.com/documentation-webpages/limitations.html) but in most cases, you shouldn't be annoyed with those ones. Summary This article introduced the benefits and drawbacks of Galera Cluster. It also discussed the features of Galera Cluster that makes it a good solution for write replications. Resources for Article: Further resources on this subject: Building a Web Application with PHP and MariaDB – Introduction to caching [Article] Using SHOW EXPLAIN with running queries [Article] Installing MariaDB on Windows and Mac OS X [Article]
Setting up of Software Infrastructure on the Cloud

Packt
23 Sep 2014
42 min read
In this article by Roberto Freato, author of Microsoft Azure Development Cookbook, we mix some of the recipes of of this book, to build a complete overview of what we need to set up a software infrastructure on the cloud. (For more resources related to this topic, see here.) Microsoft Azure is Microsoft’s Platform for Cloud Computing. It provides developers with elastic building blocks to build scalable applications. Those building blocks are services for web hosting, storage, computation, connectivity, and more, which are usable as stand-alone services or mixed together to build advanced scenarios. Building an application with Microsoft Azure could really mean choosing the appropriate services and mix them together to run our application. We start by creating a SQL Database. Creating a SQL Database server and database SQL Database is a multitenanted database system in which many distinct databases are hosted on many physical servers managed by Microsoft. SQL Database administrators have no control over the physical provisioning of a database to a particular physical server. Indeed, to maintain high availability, a primary and two secondary copies of each SQL Database are stored on separate physical servers, and users can't have any control over them. Consequently, SQL Database does not provide a way for the administrator to specify the physical layout of a database and its logs when creating a SQL Database. The administrator merely has to provide a name, maximum size, and service tier for the database. A SQL Database server is the administrative and security boundary for a collection of SQL Databases hosted in a single Azure region. All connections to a database hosted by the server go through the service endpoint provided by the SQL Database server. At the time of writing this book, an Azure subscription can create up to six SQL Database servers, each of which can host up to 150 databases (including the master database). These are soft limits that can be increased by arrangement with Microsoft Support. From a billing perspective, only the database unit is counted towards, as the server unit is just a container. However, to avoid a waste of unused resources, an empty server is automatically deleted after 90 days of non-hosting user databases. The SQL Database server is provisioned on the Azure Portal. The Region as well as the administrator login and password must be specified during the provisioning process. After the SQL Database server has been provisioned, the firewall rules used to restrict access to the databases associated with the SQL Database server can be modified on the Azure Portal, using Transact SQL or the SQL Database Service Management REST API. The result of the provisioning process is a SQL Database server identified by a fully -qualified DNS name such as SERVER_NAME.database.windows.net, where SERVER_NAME is an automatically generated (random and unique) string that differentiates this SQL Database server from any other. The provisioning process also creates the master database for the SQL Database server and adds a user and associated login for the administrator specified during the provisioning process. This user has the rights to create other databases associated with this SQL Database server as well as any logins needed to access them. Remember to distinguish between the SQL Database service and the famous SQL Server engine available on the Azure platform, but as a plain installation over VMs. 
In the latter case, you retain complete control of the instance that runs SQL Server, the installation details, and the effort of maintaining it over time. Also, remember that SQL Server virtual machines have different pricing from the standard VMs due to their license costs. An administrator can create a SQL Database either on the Azure Portal or using the CREATE DATABASE Transact SQL statement. At the time of writing this book, SQL Database runs in the following two different modes:

Version 1.0: This refers to the Web or Business Editions
Version 2.0: This refers to the Basic, Standard, or Premium service tiers with performance levels

The first version will be deprecated in a few months. Web Edition was designed for small databases under 5 GB and Business Edition for databases of 10 GB and larger (up to 150 GB). There is no difference between these editions other than the maximum size and the billing increment. The second version introduced service tiers (the equivalent of Editions) with an additional parameter (the performance level) that sets the amount of resources dedicated to a given database. The new service tiers (Basic, Standard, and Premium) introduced a lot of advanced features such as active/passive Geo-replication, point-in-time restore, and cross-region copy and restore. Different performance levels have different limits, such as the Database Throughput Unit (DTU) and the maximum DB size. An updated list of service tiers and performance levels can be found at http://msdn.microsoft.com/en-us/library/dn741336.aspx. Once a SQL Database has been created, the ALTER DATABASE Transact SQL statement can be used to alter either the edition or the maximum size of the database. The maximum size is important, as the database is made read only once it reaches that size (with The database has reached its size quota error message and error number 40544). In this recipe, we'll learn how to create a SQL Database server and a database using the Azure Portal and T-SQL.

Getting Ready

To perform the majority of the operations in this recipe, a plain internet browser is all that is needed. However, to connect directly to the server, we will use SQL Server Management Studio (also available in the Express version).

How to do it...

First, we are going to create a SQL Database server using the Azure Portal. We will do this using the following steps: On the Azure Portal, go to the SQL DATABASES section and then select the SERVERS tab. In the bottom menu, select Add. In the CREATE SERVER window, provide an administrator login and password. Select a Subscription and Region that will host the server. To enable access from other Azure services to the server, you can check the Allow Windows Azure Services to access the server checkbox; this is a special firewall rule that allows the 0.0.0.0 to 0.0.0.0 IP range. Confirm and wait a few seconds for the operation to complete. After that, using the Azure Portal, go to the SQL DATABASES section and then the SERVERS tab. Select the previously created server by clicking on its name. In the server page, go to the DATABASES tab. In the bottom menu, click on Add; then, after clicking on NEW SQL DATABASE, the CUSTOM CREATE window will open. Specify a name and select the Web Edition. Set the maximum database size to 5 GB and leave the COLLATION dropdown at its default. SQL Database fees are charged differently if you are using the Web/Business Edition rather than the Basic/Standard/Premium service tiers.
The most updated pricing scheme for SQL Database can be found at http://azure.microsoft.com/en-us/pricing/details/sql-database/ Verify the server on which you are creating the database (it is specified correctly in the SERVER dropdown) and confirm it. Alternatively, using Transact SQL, launch Microsoft SQL Server Management Studio and open the Connect to Server window. In the Server name field, specify the fully qualified name of the newly created SQL Database server in the following form: serverName.database.windows.net. Choose the SQL Server Authentication method. Specify the administrative username and password associated earlier. Click on the Options button and specify the Encrypt connection checkbox. This setting is particularly critical while accessing a remote SQL Database. Without encryption, a malicious user could extract all the information to log in to the database himself, from the network traffic. Specifying the Encrypt connection flag, we are telling the client to connect only if a valid certificate is found on the server side. Optionally check the Remember password checkbox and connect to the server. To connect remotely to the server, a firewall rule should be created. In the Object Explorer window, locate the server you connected to, navigate to Databases | System Databases folder, and then right-click on the master database and select New Query. 18. Copy and execute this query and wait for its completion:. CREATE DATABASE DATABASE_NAME ( MAXSIZE = 1 GB ) How it works... The first part is pretty straightforward. In steps 1 and 2, we go to the SQL Database section of the Azure portal, locating the tab to manage the servers. In step 3, we fill the online popup with the administrative login details, and in step 4, we select a Region to place the SQL Database server. As a server (with its database) is located in a Region, it is not possible to automatically migrate it to another Region. After the creation of the container resource (the server), we create the SQL Database by adding a new database to the newly created server, as stated from steps 6 to 9. In step 10, we can optionally change the default collation of the database and its maximum size. In the last part, we use the SQL Server Management Studio (SSMS) (step 12) to connect to the remote SQL Database instance. We notice that even without a database, there is a default database (the master one) we can connect to. After we set up the parameters in step 13, 14, and 15, we enable the encryption requirement for the connection. Remember to always set the encryption before connecting or listing the databases of a remote endpoint, as every single operation without encryption consists of plain credentials sent over the network. In step 17, we connect to the server if it grants access to our IP. Finally, in step 18, we open a contextual query window, and in step 19, we execute the creation query, specifying a maximum size for the database. Note that the Database Edition should be specified in the CREATE DATABASE query as well. By default, the Web Edition is used. To override this, the following query can be used: CREATE DATABASE MyDB ( Edition='Basic' ) There's more… We can also use the web-based Management Portal to perform various operations against the SQL Database, such as invoking Transact SQL commands, altering tables, viewing occupancy, and monitoring the performance. We will launch the Management Portal using the following steps: Obtain the name of the SQL Database server that contains the SQL Database. 
Go to https://serverName.database.windows.net. In the Database fields, enter the database name (leave it empty to connect to the master database). Fill the Username and Password fields with the login information and confirm. Increasing the size of a database We can use the ALTER DATABASE command to increase the size (or the Edition, with the Edition parameter) of a SQL Database by connecting to the master database and invoking the following Transact SQL command: ALTER DATABASE DATABASE_NAME MODIFY ( MAXSIZE = 5 GB ) We must use one of the allowable database sizes. Connecting to a SQL Database with Entity Framework The Azure SQL Database is a SQL Server-like fully managed relation database engine. In many other recipes, we showed you how to connect transparently to the SQL Database, as we did in the SQL Server, as the SQL Database has the same TDS protocol as its on-premise brethren. In addition, using the raw ADO.NET could lead to some of the following issues: Hardcoded SQL: In spite of the fact that a developer should always write good code and make no errors, there is the finite possibility to make mistake while writing stringified SQL, which will not be verified at design time and might lead to runtime issues. These kind of errors lead to runtime errors, as everything that stays in the quotation marks compiles. The solution is to reduce every line of code to a command that is compile time safe. Type safety: As ADO.NET components were designed to provide a common layer of abstraction to developers who connect against several different data sources, the interfaces provided are generic for the retrieval of values from the fields of a data row. A developer could make a mistake by casting a field to the wrong data type, and they will realize it only at run time. The solution is to reduce the mapping of table fields to the correct data type at compile time. Long repetitive actions: We can always write our own wrapper to reduce the code replication in the application, but using a high-level library, such as the ORM, can take off most of the repetitive work to open a connection, read data, and so on. Entity Framework hides the complexity of the data access layer and provides developers with an intermediate abstraction layer to let them operate on a collection of objects instead of rows of tables. The power of the ORM itself is enhanced by the usage of LINQ, a library of extension methods that, in synergy with the language capabilities (anonymous types, expression trees, lambda expressions, and so on), makes the DB access easier and less error prone than in the past. This recipe is an introduction to Entity Framework, the ORM of Microsoft, in conjunction with the Azure SQL Database. Getting Ready The database used in this recipe is the Northwind sample database of Microsoft. It can be downloaded from CodePlex at http://northwinddatabase.codeplex.com/. How to do it… We are going to connect to the SQL Database using Entity Framework and perform various operations on data. We will do this using the following steps: Add a new class named EFConnectionExample to the project. Add a new ADO.NET Entity Data Model named Northwind.edmx to the project; the Entity Data Model Wizard window will open. Choose Generate from database in the Choose Model Contents step. In the Choose Your Data Connection step, select the Northwind connection from the dropdown or create a new connection if it is not shown. Save the connection settings in the App.config file for later use and name the setting NorthwindEntities. 
If VS asks for the version of EF to use, select the most recent one. In the last step, choose the object to include in the model. Select the Tables, Views, Stored Procedures, and Functions checkboxes. Add the following method, retrieving every CompanyName, to the class: private IEnumerable<string> NamesOfCustomerCompanies() { using (var ctx = new NorthwindEntities()) { return ctx.Customers .Select(p => p.CompanyName).ToArray(); } } Add the following method, updating every customer located in Italy, to the class: private void UpdateItalians() { using (var ctx = new NorthwindEntities()) { ctx.Customers.Where(p => p.Country == "Italy") .ToList().ForEach(p => p.City = "Milan"); ctx.SaveChanges(); } } Add the following method, inserting a new order for the first Italian company alphabetically, to the class: private int FirstItalianPlaceOrder() { using (var ctx = new NorthwindEntities()) { var order = new Orders() { EmployeeID = 1, OrderDate = DateTime.UtcNow, ShipAddress = "My Address", ShipCity = "Milan", ShipCountry = "Italy", ShipName = "Good Ship", ShipPostalCode = "20100" }; ctx.Customers.Where(p => p.Country == "Italy") .OrderBy(p=>p.CompanyName) .First().Orders.Add(order); ctx.SaveChanges(); return order.OrderID; } } Add the following method, removing the previously inserted order, to the class: private void RemoveTheFunnyOrder(int orderId) { using (var ctx = new NorthwindEntities()) { var order = ctx.Orders .FirstOrDefault(p => p.OrderID == orderId); if (order != null) ctx.Orders.Remove(order); ctx.SaveChanges(); } } Add the following method, using the methods added earlier, to the class: public static void UseEFConnectionExample() { var example = new EFConnectionExample(); var customers=example.NamesOfCustomerCompanies(); foreach (var customer in customers) { Console.WriteLine(customer); } example.UpdateItalians(); var order=example.FirstItalianPlaceOrder(); example.RemoveTheFunnyOrder(order); } How it works… This recipe uses EF to connect and operate on a SQL Database. In step 1, we create a class that contains the recipe, and in step 2, we open the wizard for the creation of Entity Data Model (EDMX). We create the model, starting from an existing database in step 3 (it is also possible to write our own model and then persist it in an empty database), and then, we select the connection in step 4. In fact, there is no reference in the entire code to the Windows Azure SQL Database. The only reference should be in the App.config settings created in step 5; this can be changed to point to a SQL Server instance, leaving the code untouched. The last step of the EDMX creation consists of concrete mapping between the relational table and the object model, as shown in step 6. This method generates the code classes that map the table schema, using strong types and collections referred to as Navigation properties. It is also possible to start from the code, writing the classes that could represent the database schema. This method is known as Code-First. In step 7, we ask for every CompanyName of the Customers table. Every table in EF is represented by DbSet<Type>, where Type is the class of the entity. In steps 7 and 8, Customers is DbSet<Customers>, and we use a lambda expression to project (select) a property field and another one to create a filter (where) based on a property value. The SaveChanges method in step 8 persists to the database the changes detected in the disconnected object data model. This magic is one of the purposes of an ORM tool. 
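A practical note that the recipe does not cover: since SQL Database is a shared, remote service, a call such as SaveChanges() can occasionally fail with a transient error (a dropped connection or throttling). The following is a minimal retry sketch, not part of the original example; the helper name, retry count, and delay are arbitrary illustrations, and a production application should use a proper retry policy and inspect the specific error codes:

// Hedged sketch: retries a unit of work against a fresh NorthwindEntities
// context a few times before giving up. It catches the raw SqlException for
// brevity; Entity Framework may wrap it in its own exception types, so real
// code should also unwrap inner exceptions.
private static void ExecuteWithRetry(Action<NorthwindEntities> work, int maxAttempts = 3)
{
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            using (var ctx = new NorthwindEntities())
            {
                work(ctx);
            }
            return;
        }
        catch (System.Data.SqlClient.SqlException)
        {
            if (attempt >= maxAttempts) throw;
            System.Threading.Thread.Sleep(TimeSpan.FromSeconds(2 * attempt));
        }
    }
}

With such a helper, a call like ExecuteWithRetry(ctx => { /* queries and ctx.SaveChanges() here */ }); keeps the recipe's methods unchanged while adding some resilience.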
In step 9, we use the navigation property (relationship) between a Customers object and the Orders collection (table) to add a new order with sample data. We use the OrderBy extension method to order the results by the specified property, and finally, we save the newly created item. Even now, EF automatically keeps track of the newly added item. Additionally, after the SaveChanges method, EF populates the identity field of Order (OrderID) with the actual value created by the database engine. In step 10, we use the previously obtained OrderID to remove the corresponding order from the database. We use the FirstOrDefault() method to test the existence of the ID, and then, we remove the resulting object like we removed an object from a plain old collection. In step 11, we use the methods created to run the demo and show the results. Deploying a Website Creating a Website is an administrative task, which is performed in the Azure Portal in the same way we provision every other building block. The Website created is like a "deployment slot", or better, "web space", since the abstraction given to the user is exactly that. Azure Websites does not require additional knowledge compared to an old-school hosting provider, where FTP was the standard for the deployment process. Actually, FTP is just one of the supported deployment methods in Websites, since Web Deploy is probably the best choice for several scenarios. Web Deploy is a Microsoft technology used for copying files and provisioning additional content and configuration to integrate the deployment process. Web Deploy runs on HTTP and HTTPS with basic (username and password) authentication. This makes it a good choice in networks where FTP is forbidden or the firewall rules are strict. Some time ago, Microsoft introduced the concept of Publish Profile, an XML file containing all the available deployment endpoints of a particular website that, if given to Visual Studio or Web Matrix, could make the deployment easier. Every Azure Website comes with a publish profile with unique credentials, so one can distribute it to developers without giving them grants on the Azure Subscription. Web Matrix is a client tool of Microsoft, and it is useful to edit live sites directly from an intuitive GUI. It uses Web Deploy to provide access to the remote filesystem as to perform remote changes. In Websites, we can host several websites on the same server farm, making administration easier and isolating the environment from the neighborhood. Moreover, virtual directories can be defined from the Azure Portal, enabling complex scenarios or making migrations easier. In this recipe, we will cope with the deployment process, using FTP and Web Deploy with some variants. Getting ready This recipe assumes we have and FTP client installed on the local machine (for example, FileZilla) and, of course, a valid Azure Subscription. We also need Visual Studio 2013 with the latest Azure SDK installed (at the time of writing, SDK Version 2.3). How to do it… We are going to create a new Website, create a new ASP.NET project, deploy it through FTP and Web Deploy, and also use virtual directories. We do this as follows: Create a new Website in the Azure Portal, specifying the following details: The URL prefix (that is, TestWebSite) is set to [prefix].azurewebsites.net The Web Hosting Plan (create a new one) The Region/Location (select West Europe) Click on the newly created Website and go to the Dashboard tab. 
Click on Download the publish profile and save it on the local computer. Open Visual Studio and create a new ASP.NET web application named TestWebSite, with an empty template and web forms' references. Add a sample Default.aspx page to the project and paste into it the following HTML: <h1>Root Application</h1> Press F5 and test whether the web application is displayed correctly. Create a local publish target. Right-click on the project and select Publish. Select Custom and specify Local Folder. In the Publish method, select File System and provide a local folder where Visual Studio will save files. Then click on Publish to complete. Publish via FTP. Open FileZilla and then open the Publish profile (saved in step 3) with a text editor. Locate the FTP endpoint and specify the following: publishUrl as the Host field username as the Username field userPWD as the Password field Delete the hostingstart.html file that is already present on the remote space. When we create a new Azure Website, there is a single HTML file in the root folder by default, which is served to the clients as the default page. By leaving it in the Website, the file could be served after users' deployments as well if no valid default documents are found. Drag-and-drop all the contents of the local folder with the binaries to the remote folder, then run the website. Publish via Web Deploy. Right-click on the Project and select Publish. Go to the Publish Web wizard start and select Import, providing the previously downloaded Publish Profile file. When Visual Studio reads the Web Deploy settings, it populates the next window. Click on Confirm and Publish the web application. Create an additional virtual directory. Go to the Configure tab of the Website on the Azure Portal. At the bottom, in the virtual applications and directories, add the following: /app01 with the path siteapp01 Mark it as Application Open the Publish Profile file and duplicate the <publishProfile> tag with the method FTP, then edit the following: Add the suffix App01 to profileName Replace wwwroot with app01 in publishUrl Create a new ASP.NET web application called TestWebSiteApp01 and create a new Default.aspx page in it with the following code: <h1>App01 Application</h1> Right-click on the TestWebSiteApp01 project and Publish. Select Import and provide the edited Publish Profile file. In the first step of the Publish Web wizard (go back if necessary), select the App01 method and select Publish. Run the Website's virtual application by appending the /app01 suffix to the site URL. How it works... In step 1, we create the Website on the Azure Portal, specifying the minimal set of parameters. If the existing web hosting plan is selected, the Website will start in the specified tier. In the recipe, by specifying a new web hosting plan, the Website is created in the free tier with some limitations in configuration. The recipe uses the Azure Portal located at https://manage.windowsazure.com. However, the new Azure Portal will be at https://portal.azure.com. New features will be probably added only in the new Portal. In steps 2 and 3, we download the Publish Profile file, which is an XML containing the various endpoints to publish the Website. At the time of writing, Web Deploy and FTP are supported by default. In steps 4, 5, and 6, we create a new ASP.NET web application with a sample ASPX page and run it locally. In steps 7, 8, and 9, we publish the binaries of the Website, without source code files, into a local folder somewhere in the local machine. 
This unit of deployment (the folder) can be sent across the wire via FTP, as we do in steps 10 to 13 using the credentials and the hostname available in the Publish Profile file. In steps 14 to 16, we use the Publish Profile file directly from Visual Studio, which recognizes the different methods of deployment and suggests Web Deploy as the default one. If we perform the steps 10-13, with steps14-16 we overwrite the existing deployment. Actually, Web Deploy compares the target files with the ones to deploy, making the deployment incremental for those file that have been modified or added. This is extremely useful to avoid unnecessary transfers and to save bandwidth. In steps 17 and 18, we configure a new Virtual Application, specifying its name and location. We can use an FTP client to browse the root folder of a website endpoint, since there are several folders such as wwwroot, locks, diagnostics, and deployments. In step 19, we manually edit the Publish Profile file to support a second FTP endpoint, pointing to the new folder of the Virtual Application. Visual Studio will correctly understand this while parsing the file again in step 22, showing the new deployment option. Finally, we verify whether there are two applications: one on the root folder / and one on the /app01 alias. There's more… Suppose we need to edit the website on the fly, editing a CSS of JS file or editing the HTML somewhere. We can do this using Web Matrix, which is available from the Azure Portal itself through a ClickOnce installation: Go to the Dashboard tab of the Website and click on WebMatrix at the bottom. Follow the instructions to install the software (if not yet installed) and, when it opens, select Edit live site directly (the magic is done through the Publish Profile file and Web Deploy). In the left-side tree, edit the Default.aspx file, and then save and run the Website again. Azure Websites gallery Since Azure Websites is a PaaS service, with no lock-in or particular knowledge or framework required to run it, it can hosts several Open Source CMS in different languages. Azure provides a set of built-in web applications to choose while creating a new website. This is probably not the best choice for production environments; however, for testing or development purposes, it should be a faster option than starting from scratch. Wizards have been, for a while, the primary resources for developers to quickly start off projects and speed up the process of creating complex environments. However, the Websites gallery creates instances of well-known CMS with predefined configurations. Instead, production environments are manually crafted, customizing each aspect of the installation. To create a new Website using the gallery, proceed as follows: Create a new Website, specifying from gallery. Select the web application to deploy and follow the optional configuration steps. If we create some resources (like databases) while using the gallery, they will be linked to the site in the Linked Resources tab. 
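As a closing note on deployments, the FTP endpoint exposed in the Publish Profile file can also be driven from code, which can be handy for small scripted deployments. The following is a hedged sketch using the standard .NET FtpWebRequest class; the host, credentials, remote folder, and file names are placeholders that you would take from your own Publish Profile file and from the folders browsed in the earlier steps:

// Illustrative sketch: uploads a single local file to an Azure Website over FTP.
// Replace the values in braces with the publishUrl host, userName, and userPWD
// found in the Publish Profile file, and with the remote folder used earlier.
var request = (System.Net.FtpWebRequest)System.Net.WebRequest.Create(
    "ftp://{publishUrlHost}/{remoteFolder}/Default.aspx");
request.Method = System.Net.WebRequestMethods.Ftp.UploadFile;
request.Credentials = new System.Net.NetworkCredential("{userName}", "{userPWD}");
byte[] content = System.IO.File.ReadAllBytes(@"C:\publish\Default.aspx");
using (var stream = request.GetRequestStream())
{
    stream.Write(content, 0, content.Length);
}
using (var response = (System.Net.FtpWebResponse)request.GetResponse())
{
    Console.WriteLine(response.StatusDescription);
}

This is, of course, what tools such as FileZilla and Web Deploy already do for us; the snippet only shows that nothing in the deployment process is magic.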
Building a simple cache for applications Azure Cache is a managed service with (at the time of writing this book) the following three offerings: Basic: This service has a unit size of 128 MB, up to 1 GB with one named cache (the default one) Standard: This service has a unit size of 1 GB, up to 10 GB with 10 named caches and support for notifications Premium: This service has a unit size of 5 GB, up to 150 GB with ten named caches, support for notifications, and high availability Different offerings have different unit prices, and remember that when changing from one offering to another, all the cache data is lost. In all offerings, users can define the items' expiration. The Cache service listens to a specific TCP port. Accessing it from a .NET application is quite simple, with the Microsoft ApplicationServer Caching library available on NuGet. In the Microsoft.ApplicationServer.Caching namespace, the following are all the classes that are needed to operate: DataCacheFactory: This class is responsible for instantiating the Cache proxies to interpret the configuration settings. DataCache: This class is responsible for the read/write operation against the cache endpoint. DataCacheFactoryConfiguration: This is the model class of the configuration settings of a cache factory. Its usage is optional as cache can be configured in the App/Web.config file in a specific configuration section. Azure Cache is a key-value cache. We can insert and even get complex objects with arbitrary tree depth using string keys to locate them. The importance of the key is critical, as in a single named cache, only one object can exist for a given key. The architects and developers should have the proper strategy in place to deal with unique (and hierarchical) names. Getting ready This recipe assumes that we have a valid Azure Cache endpoint of the standard type. We need the standard type because we use multiple named caches, and in later recipes, we use notifications. We can create a Standard Cache endpoint of 1 GB via PowerShell. Perform the following steps to create the Standard Cache endpoint : Open the Azure PowerShell and type Add-AzureAccount. A popup window might appear. Type your credentials connected to a valid Azure subscription and continue. Optionally, select the proper Subscription, if not the default one. Type this command to create a new Cache endpoint, replacing myCache with the proper unique name: New-AzureManagedCache -Name myCache -Location "West Europe" -Sku Standard -Memory 1GB After waiting for some minutes until the endpoint is ready, go to the Azure Portal and look for the Manage Keys section to get one of the two Access Keys of the Cache endpoint. In the Configure section of the Cache endpoint, a cache named default is created by default. In addition, create two named caches with the following parameters: Expiry Policy: Absolute Time: 10 Notifications: Enabled Expiry Policy could be Absolute (the default expiration time or the one set by the user is absolute, regardless of how many times the item has been accessed), Sliding (each time the item has been accessed, the expiration timer resets), or Never (items do not expire). This Azure Cache endpoint is now available in the Management Portal, and it will be used in the entire article. How to do it… We are going to create a DataCache instance through a code-based configuration. We will perform simple operations with Add, Get, Put, and Append/Prepend, using a secondary-named cache to transfer all the contents of the primary one. 
We will do this by performing the following steps: Add a new class named BuildingSimpleCacheExample to the project. Install the Microsoft.WindowsAzure.Caching NuGet package. Add the following using statement to the top of the class file: using Microsoft.ApplicationServer.Caching; Add the following private members to the class: private DataCacheFactory factory = null; private DataCache cache = null; Add the following constructor to the class: public BuildingSimpleCacheExample(string ep, string token,string cacheName) { DataCacheFactoryConfiguration config = new DataCacheFactoryConfiguration(); config.AutoDiscoverProperty = new DataCacheAutoDiscoverProperty(true, ep); config.SecurityProperties = new DataCacheSecurity(token, true); factory = new DataCacheFactory(config); cache = factory.GetCache(cacheName); } Add the following method, creating a palindrome string into the cache: public void CreatePalindromeInCache() { var objKey = "StringArray"; cache.Put(objKey, ""); char letter = 'A'; for (int i = 0; i < 10; i++) { cache.Append(objKey, char.ConvertFromUtf32((letter+i))); cache.Prepend(objKey, char.ConvertFromUtf32((letter + i))); } Console.WriteLine(cache.Get(objKey)); } Add the following method, adding an item into the cache to analyze its subsequent retrievals: public void AddAndAnalyze() { var randomKey = DateTime.Now.Ticks.ToString(); var value="Cached string"; cache.Add(randomKey, value); DataCacheItem cacheItem = cache.GetCacheItem(randomKey); Console.WriteLine(string.Format( "Item stored in {0} region with {1} expiration", cacheItem.RegionName,cacheItem.Timeout)); cache.Put(randomKey, value, TimeSpan.FromSeconds(60)); cacheItem = cache.GetCacheItem(randomKey); Console.WriteLine(string.Format( "Item stored in {0} region with {1} expiration", cacheItem.RegionName, cacheItem.Timeout)); var version = cacheItem.Version; var obj = cache.GetIfNewer(randomKey, ref version); if (obj == null) { //No updates } } Add the following method, transferring the contents of the cache named initially into a second one: public void BackupToDestination(string destCacheName) { var destCache = factory.GetCache(destCacheName); var dump = cache.GetSystemRegions() .SelectMany(p => cache.GetObjectsInRegion(p)) .ToDictionary(p=>p.Key,p=>p.Value); foreach (var item in dump) { destCache.Put(item.Key, item.Value); } } Add the following method to clear the cache named first: public void ClearCache() { cache.Clear(); } Add the following method, using the methods added earlier, to the class: public static void RunExample() { var cacheName = "[named cache 1]"; var backupCache = "[named cache 2]"; string endpoint = "[cache endpoint]"; string token = "[cache token/key]"; BuildingSimpleCacheExample example = new BuildingSimpleCacheExample(endpoint, token, cacheName); example.CreatePalindromeInCache(); example.AddAndAnalyze(); example.BackupToDestination(backupCache); example.ClearCache(); } How it works... From steps 1 to 3, we set up the class. In step 4, we add private members to store the DataCacheFactory object used to create the DataCache object to access the Cache service. In the constructor that we add in step 5, we initialize the DataCacheFactory object using a configuration model class (DataCacheFactoryConfiguration). This strategy is for code-based initialization whenever settings cannot stay in the App.config/Web.config file. In step 6, we use the Put() method to write an empty string into the StringArray bucket. 
We then use the Append() and Prepend() methods, designed to concatenate strings to existing strings, to build a palindrome string in the memory cache. This sample does not make any sense in real-world scenarios, and we must pay attention to some of the following issues: Writing an empty string into the cache is somehow useless. Each Append() or Prepend() operation travels on TCP to the cache and goes back. Though it is very simple, it requires resources, and we should always try to consolidate calls. In step 7, we use the Add() method to add a string to the cache. The difference between the Add() and Put() methods is that the first method throws an exception if the item already exists, while the second one always overwrites the existing value (or writes it for the first time). GetCacheItem() returns a DataCacheItem object, which wraps the value together with other metadata properties, such as the following: CacheName: This is the named cache where the object is stored. Key: This is the key of the associated bucket. RegionName (user defined or system defined): This is the region of the cache where the object is stored. Size: This is the size of the object stored. Tags: These are the optional tags of the object, if it is located in a user-defined region. Timeout: This is the current timeout before the object would expire. Version: This is the version of the object. This is a DataCacheItemVersion object whose properties are not accessible due to their modifier. However, it is not important to access this property, as the Version object is used as a token against the Cache service to implement the optimistic concurrency. As for the timestamp value, its semantic can stay hidden from developers. The first Add() method does not specify a timeout for the object, leaving the default global expiration timeout, while the next Put() method does, as we can check in the next Get() method. We finally ask the cache about the object with the GetIfNewer() method, passing the latest version token we have. This conditional Get method returns null if the object we own is already the latest one. In step 8, we list all the keys of the first named cache, using the GetSystemRegions() method (to first list the system-defined regions), and for each region, we ask for their objects, copying them into the second named cache. In step 9, we clear all the contents of the first cache. In step 10, we call the methods added earlier, specifying the Cache endpoint to connect to and the token/password, along with the two named caches in use. Replace [named cache 1], [named cache 2], [cache endpoint], and [cache token/key] with actual values. There's more… Code-based configuration is useful when the settings stay in a different place as compared to the default config files for .NET. 
It is not a best practice to hardcode them, so this is the standard way to declare them in the App.config file: <configSections> <section name="dataCacheClients" type="Microsoft.ApplicationServer.Caching.DataCacheClientsSection, Microsoft.ApplicationServer.Caching.Core" allowLocation="true" allowDefinition="Everywhere" /> </configSections> The XML mentioned earlier declares a custom section, which should be as follows: <dataCacheClients> <dataCacheClient name="[name of cache]"> <autoDiscover isEnabled="true" identifier="[domain of cache]" /> <securityProperties mode="Message" sslEnabled="true"> <messageSecurity authorizationInfo="[token of endpoint]" /> </securityProperties> </dataCacheClient> </dataCacheClients> In the upcoming recipes, we will use this convention to set up the DataCache objects. ASP.NET Support With almost no effort, the Azure Cache can be used as Output Cache in ASP.NET to save the session state. To enable this, in addition to the configuration mentioned earlier, we need to include those declarations in the <system.web> section as follows: <sessionState mode="Custom" customProvider="AFCacheSessionStateProvider"> <providers> <add name="AFCacheSessionStateProvider" type="Microsoft.Web.DistributedCache.DistributedCacheSessionStateStoreProvider, Microsoft.Web.DistributedCache" cacheName="[named cache]" dataCacheClientName="[name of cache]" applicationName="AFCacheSessionState"/> </providers> </sessionState> <caching> <outputCache defaultProvider="AFCacheOutputCacheProvider"> <providers> <add name="AFCacheOutputCacheProvider" type="Microsoft.Web.DistributedCache.DistributedCacheOutputCacheProvider, Microsoft.Web.DistributedCache" cacheName="[named cache]" dataCacheClientName="[name of cache]" applicationName="AFCacheOutputCache" /> </providers> </outputCache> </caching> The difference between [name of cache] and [named cache] is as follows: The [name of cache] part is a friendly name of the cache client declared above an alias. The [named cache] part is the named cache created into the Azure Cache service. Connecting to the Azure Storage service In an Azure Cloud Service, the storage account name and access key are stored in the service configuration file. By convention, the account name and access key for data access are provided in a setting named DataConnectionString. The account name and access key needed for Azure Diagnostics must be provided in a setting named Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString. The DataConnectionString setting must be declared in the ConfigurationSettings section of the service definition file. However, unlike other settings, the connection string setting for Azure Diagnostics is implicitly defined when the Diagnostics module is specified in the Imports section of the service definition file. Consequently, it must not be specified in the ConfigurationSettings section. A best practice is to use different storage accounts for application data and diagnostic data. This reduces the possibility of application data access being throttled by competition for concurrent writes from the diagnostics monitor. What is Throttling? In shared services, where the same resources are shared between tenants, limiting the concurrent access to them is critical to provide service availability. If a client misuses the service or, better, generates a huge amount of traffic, other tenants pointing to the same shared resource could experience unavailability. 
Throttling (also known as Traffic Control plus Request Cutting) is one of the most widely adopted solutions to this issue.

Using separate storage accounts also provides a security boundary between application data and diagnostics data, as diagnostics data might be accessed by individuals who should have no access to application data.

In the Azure Storage library, access to the storage service is through one of the client classes. There is one client class for each of the Blob service, the Queue service, and the Table service: CloudBlobClient, CloudQueueClient, and CloudTableClient, respectively. Instances of these classes store the pertinent endpoint as well as the account name and access key.

The CloudBlobClient class provides methods to access containers, list their contents, and get references to containers and blobs. The CloudQueueClient class provides methods to list queues and get a reference to the CloudQueue instance that is used as an entry point to the Queue service functionality. The CloudTableClient class provides methods to manage tables and get the TableServiceContext instance that is used to access the WCF Data Services functionality while accessing the Table service. Note that the CloudBlobClient, CloudQueueClient, and CloudTableClient instances are not thread safe, so distinct instances should be used when accessing these services concurrently.

The client classes must be initialized with the account name and access key, as well as the appropriate storage service endpoint. The Microsoft.WindowsAzure namespace has several helper classes. The StorageCredentials class initializes an instance from an account name and access key or from a shared access signature.

In this recipe, we'll learn how to use the CloudBlobClient, CloudQueueClient, and CloudTableClient instances to connect to the storage service.

Getting ready
This recipe assumes that the application's configuration file contains the following:

   <appSettings>
     <add key="DataConnectionString" value="DefaultEndpointsProtocol=https;AccountName={ACCOUNT_NAME};AccountKey={ACCOUNT_KEY}"/>
     <add key="AccountName" value="{ACCOUNT_NAME}"/>
     <add key="AccountKey" value="{ACCOUNT_KEY}"/>
   </appSettings>

We must replace {ACCOUNT_NAME} and {ACCOUNT_KEY} with appropriate values for the storage account name and access key, respectively. We are not working in a Cloud Service but in a simple console application; storage services, like many other building blocks of Azure, can also be used on their own from on-premises environments.

How to do it...
We are going to connect to the Table service, the Blob service, and the Queue service, and perform a simple operation on each. We will do this using the following steps:

1. Add a new class named ConnectingToStorageExample to the project.
2. Add the following using statements to the top of the class file:

   using Microsoft.WindowsAzure.Storage;
   using Microsoft.WindowsAzure.Storage.Blob;
   using Microsoft.WindowsAzure.Storage.Queue;
   using Microsoft.WindowsAzure.Storage.Table;
   using Microsoft.WindowsAzure.Storage.Auth;
   using System.Configuration;

   The System.Configuration assembly should be added via the Add Reference action on the project, as it is not included in most of the Visual Studio project templates.
3. Add the following method, connecting to the Blob service, to the class:

   private static void UseCloudStorageAccountExtensions()
   {
       CloudStorageAccount cloudStorageAccount =
           CloudStorageAccount.Parse(
               ConfigurationManager.AppSettings["DataConnectionString"]);

       CloudBlobClient cloudBlobClient =
           cloudStorageAccount.CreateCloudBlobClient();

       CloudBlobContainer cloudBlobContainer =
           cloudBlobClient.GetContainerReference("{NAME}");
       cloudBlobContainer.CreateIfNotExists();
   }

4. Add the following method, connecting to the Table service, to the class:

   private static void UseCredentials()
   {
       string accountName = ConfigurationManager.AppSettings["AccountName"];
       string accountKey = ConfigurationManager.AppSettings["AccountKey"];
       StorageCredentials storageCredentials =
           new StorageCredentials(accountName, accountKey);

       CloudStorageAccount cloudStorageAccount =
           new CloudStorageAccount(storageCredentials, true);

       CloudTableClient tableClient =
           new CloudTableClient(cloudStorageAccount.TableEndpoint, storageCredentials);

       CloudTable table = tableClient.GetTableReference("{NAME}");
       table.CreateIfNotExists();
   }

5. Add the following method, connecting to the Queue service, to the class:

   private static void UseCredentialsWithUri()
   {
       string accountName = ConfigurationManager.AppSettings["AccountName"];
       string accountKey = ConfigurationManager.AppSettings["AccountKey"];
       StorageCredentials storageCredentials =
           new StorageCredentials(accountName, accountKey);

       StorageUri baseUri = new StorageUri(new Uri(string.Format(
           "https://{0}.queue.core.windows.net/", accountName)));
       CloudQueueClient cloudQueueClient =
           new CloudQueueClient(baseUri, storageCredentials);

       CloudQueue cloudQueue = cloudQueueClient.GetQueueReference("{NAME}");
       cloudQueue.CreateIfNotExists();
   }

6. Add the following method, which uses the other methods, to the class:

   public static void UseConnectionToStorageExample()
   {
       UseCloudStorageAccountExtensions();
       UseCredentials();
       UseCredentialsWithUri();
   }

How it works...
In steps 1 and 2, we set up the class. In step 3, we implement the standard way to access the storage service using the Storage Client library. We use the static CloudStorageAccount.Parse() method to create a CloudStorageAccount instance from the value of the connection string stored in the configuration file. We then use this instance with the CreateCloudBlobClient() extension method of the CloudStorageAccount class to get the CloudBlobClient instance that we use to connect to the Blob service. We can also use this technique with the Table service and the Queue service, using the relevant extension methods, CreateCloudTableClient() and CreateCloudQueueClient(), respectively. We complete this example by using the CloudBlobClient instance to get a CloudBlobContainer reference to a container and then create it if it does not exist. We need to replace {NAME} with the name for a container.

In step 4, we create a StorageCredentials instance directly from the account name and access key. We then use this to construct a CloudStorageAccount instance, specifying that any connection should use HTTPS. Using this technique, we need to provide the Table service endpoint explicitly when creating the CloudTableClient instance. We then use this to create the table. We need to replace {NAME} with the name of a table. We can use the same technique with the Blob service and the Queue service using the relevant CloudBlobClient or CloudQueueClient constructor.
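As mentioned earlier, a StorageCredentials instance can also be initialized from a shared access signature instead of the account name and key. The following is only a sketch; the SAS token value, account name, and container name are placeholder assumptions:

   private static void UseSharedAccessSignature()
   {
       // A SAS token issued for a specific container (placeholder value).
       string sasToken = "?sv=2014-02-14&sr=c&sig={SIGNATURE}&se=2015-01-01&sp=rl";
       StorageCredentials sasCredentials = new StorageCredentials(sasToken);

       // With a SAS there is no account key, so the service endpoint is provided explicitly.
       CloudBlobClient cloudBlobClient = new CloudBlobClient(
           new Uri("https://{ACCOUNT_NAME}.blob.core.windows.net/"),
           sasCredentials);
       CloudBlobContainer container =
           cloudBlobClient.GetContainerReference("{NAME}");

       // Operations are now authorized by the SAS token rather than the account key.
       bool exists = container.Exists();
   }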
In step 5, we use a similar technique, except that we avoid the intermediate step of using a CloudStorageAccount instance and explicitly provide the endpoint for the Queue service. We use the CloudQueueClient instance created in this step to create the queue. We need to replace {NAME} with the name of a queue. Note that we hardcoded the endpoint for the Queue service: though this last method is officially supported, it is not a best practice to bind our code to hardcoded endpoint URIs, so it is preferable to use one of the previous methods, which hide the complexity of the URI generation at the library level.

In step 6, we add a method that invokes the methods added in the earlier steps.

There's more…
With the general availability of .NET Framework 4.5, many libraries of the CLR have gained asynchronous methods following the Async/Await pattern. The latest versions of the Azure Storage library also provide these overloads, which are useful when developing mobile applications and fast web APIs, and in general whenever we need to integrate the task-based execution model into our applications. Almost every long-running method of the library has a corresponding ...Async() counterpart, which is called as follows:

   await cloudQueue.CreateIfNotExistsAsync();

In the rest of the book, we will continue to use the standard, synchronous pattern.

Adding messages to a Storage queue
The CloudQueue class in the Azure Storage library provides both synchronous and asynchronous methods to add a message to a queue. A message comprises up to 64 KB of data (48 KB if encoded in Base64). By default, the Storage library Base64 encodes message content to ensure that the request payload containing the message is valid XML. This encoding adds overhead that reduces the actual maximum size of a message.

A queue message is not intended to transport a big payload, since the purpose of a queue is messaging, not storage. If required, a user can store the payload in a blob and use a queue message to point to it, letting the receiver fetch the message and then the blob from its remote location, as sketched below.
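A minimal sketch of this blob-pointer pattern follows; the container and queue names are placeholders, the payload is assumed to be text, and the CloudStorageAccount is obtained as shown in the previous recipe:

   // Requires: System, System.IO, System.Text, Microsoft.WindowsAzure.Storage,
   // Microsoft.WindowsAzure.Storage.Blob, and Microsoft.WindowsAzure.Storage.Queue.
   public static void EnqueueLargePayload(CloudStorageAccount account, string payload)
   {
       // Store the large payload in a blob...
       CloudBlobClient blobClient = account.CreateCloudBlobClient();
       CloudBlobContainer container = blobClient.GetContainerReference("{CONTAINER_NAME}");
       container.CreateIfNotExists();
       CloudBlockBlob blob = container.GetBlockBlobReference(Guid.NewGuid().ToString());
       blob.UploadFromStream(new MemoryStream(Encoding.UTF8.GetBytes(payload)));

       // ...and enqueue only a small message that points to it.
       CloudQueueClient queueClient = account.CreateCloudQueueClient();
       CloudQueue queue = queueClient.GetQueueReference("{QUEUE_NAME}");
       queue.CreateIfNotExists();
       queue.AddMessage(new CloudQueueMessage(blob.Uri.ToString()));
   }

The receiver then reads the message, downloads the referenced blob, and deletes both when done.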
Each message added to a queue has a time-to-live property, after which it is deleted automatically. The maximum and default time-to-live value is 7 days.

In this recipe, we'll learn how to add messages to a queue.

Getting ready
This recipe assumes the following code is in the application configuration file:

   <appSettings>
     <add key="DataConnectionString" value="DefaultEndpointsProtocol=https;AccountName={ACCOUNT_NAME};AccountKey={ACCOUNT_KEY}"/>
   </appSettings>

We must replace {ACCOUNT_NAME} and {ACCOUNT_KEY} with appropriate values of the account name and access key.

How to do it...
We are going to create a queue and add some messages to it. We do this as follows:

1. Add a new class named AddMessagesOnStorageExample to the project.
2. Install the WindowsAzure.Storage NuGet package and add the following assembly reference to the project:

   System.Configuration

3. Add the following using statements to the top of the class file:

   using Microsoft.WindowsAzure.Storage;
   using Microsoft.WindowsAzure.Storage.Queue;
   using System.Configuration;

4. Add the following private member to the class:

   private CloudQueue cloudQueue;

5. Add the following constructor to the class:

   public AddMessagesOnStorageExample(String queueName)
   {
       CloudStorageAccount cloudStorageAccount =
           CloudStorageAccount.Parse(
               ConfigurationManager.AppSettings["DataConnectionString"]);
       CloudQueueClient cloudQueueClient =
           cloudStorageAccount.CreateCloudQueueClient();
       cloudQueue = cloudQueueClient.GetQueueReference(queueName);
       cloudQueue.CreateIfNotExists();
   }

6. Add the following method, adding three messages, to the class:

   public void AddMessages()
   {
       String content1 = "Do something";
       CloudQueueMessage message1 = new CloudQueueMessage(content1);
       cloudQueue.AddMessage(message1);

       String content2 = "Do something that expires in 1 day";
       CloudQueueMessage message2 = new CloudQueueMessage(content2);
       cloudQueue.AddMessage(message2, TimeSpan.FromDays(1.0));

       String content3 = "Do something that expires in 2 hours," +
           " starting in 1 hour from now";
       CloudQueueMessage message3 = new CloudQueueMessage(content3);
       cloudQueue.AddMessage(message3, TimeSpan.FromHours(2), TimeSpan.FromHours(1));
   }

7. Add the following method, which uses the AddMessages() method, to the class:

   public static void UseAddMessagesExample()
   {
       String queueName = "{QUEUE_NAME}";
       AddMessagesOnStorageExample example =
           new AddMessagesOnStorageExample(queueName);
       example.AddMessages();
   }

How it works...
In steps 1 through 3, we set up the class. In step 4, we add a private member to store the CloudQueue object used to interact with the Queue service. We initialize this in the constructor we add in step 5, where we also create the queue.

In step 6, we add a method that adds three messages to a queue. We create three CloudQueueMessage objects. We add the first message to the queue with the default time-to-live of seven days, the second is added specifying an expiration of 1 day, and the third becomes visible one hour after it enters the queue, with an absolute expiration of 2 hours. Note that a client (library) exception is thrown if we specify a visibility delay higher than the absolute time-to-live of the message; this is enforced on the client side instead of making a (failing) server call.

In step 7, we add a method that invokes the methods we added earlier. We need to replace {QUEUE_NAME} with an appropriate name for a queue.

There's more…
To clear the queue of the messages we added in this recipe, we can call the Clear() method of the CloudQueue class as follows:

   public void ClearQueue()
   {
       cloudQueue.Clear();
   }

Summary
In this article, we walked through recipes that build up an overview of the software infrastructure we need to set up on the cloud.

Resources for Article:
Further resources on this subject:
- Backups in the VMware View Infrastructure [Article]
- vCloud Networks [Article]
- Setting Up a Test Infrastructure [Article]
Windows Phone 8 Applications

Packt
23 Sep 2014
17 min read
In this article by Abhishek Sur, author of Visual Studio 2013 and .NET 4.5 Expert Cookbook, we will build your first Windows Phone 8 application following the MVVM pattern. We will work with Launchers and Choosers in a Windows Phone, relational databases and persistent storage, and notifications in a Windows Phone. (For more resources related to this topic, see here.)

Introduction
Windows Phone is the newest smart device on the market that hosts the Windows operating system from Microsoft. The new operating system, recently introduced to the market, differs significantly from the previous Windows mobile operating system. Microsoft has shifted gears to produce a consumer-oriented phone rather than an enterprise mobile environment. The operating system is stylish and focused on the consumer. It was built keeping a few principles in mind:

- Simple and light, with focus on completing primary tasks quickly
- Distinct typography (Segoe WP) for all its UI
- Smart and predefined animation for the UI
- Focus on content, not chrome (the whole screen is available to the application for use)
- Honesty in design

Unlike the previous Windows Phone operating system, Windows Phone 8 is built on the same core that Windows on the PC now runs on. The shared core means that the Windows Core System includes the same Windows OS components, including the NT kernel, the NT filesystem, and the networking stack. Above the core, there is a Mobile Core specific to mobile devices, which includes components such as Multimedia, Core CLR, and IE Trident, as shown in the following screenshot:

The preceding screenshot depicts the Windows Phone architecture. The Windows Core System is shared between desktop and mobile devices. The Mobile Core is specific to mobile devices and runs Windows Phone Shell, all the apps, and platform services such as the background downloader/uploader and the scheduler. It is important to note that even though both Windows 8 and Windows Phone 8 share the same core and most of the APIs, the implementation of the APIs differs. The Windows 8 APIs are referred to as WinRT, while the Windows Phone 8 APIs are referred to as the Windows Phone Runtime (WinPRT).

Building your first Windows Phone 8 application following the MVVM pattern
Windows Phone applications are generally created using either HTML5 or Silverlight. Most people still use the Silverlight approach, as it offers the full flavor of backend languages such as C#, while the JavaScript library is still in its infancy. With Silverlight or XAML, the architecture that always comes to the developer's mind is MVVM. Like all XAML-based development, Windows Phone 8 Silverlight apps inherently support the MVVM pattern, and hence people tend to adopt it more often when developing Windows Phone apps. In this recipe, we are going to take a quick look at how you can use the MVVM pattern to implement an application.

Getting ready
Before starting to develop an application, you first need to set up your machine with the appropriate SDK, which lets you develop a Windows Phone application and also gives you an emulator to debug the application without a device. The SDK for Windows Phone 8 apps can be downloaded from the Windows Phone Dev Center at http://dev.windowsphone.com.
The Windows Phone SDK includes the following:

- Microsoft Visual Studio 2012 Express for Windows Phone
- Microsoft Blend 2012 Express for Windows Phone
- The Windows Phone Device Emulator
- Project templates, reference assemblies, and headers/libraries
- A Windows 8 PC to run Visual Studio 2012 for Windows Phone

After everything has been set up for application development, you can open Visual Studio and create a Windows Phone app. When you create the project, it will first ask for the target platform; choose Windows Phone 8 as the default and select OK. You then need to name and create the project.

How to do it...
Now that the template is created, let's follow these steps to demonstrate how we can start creating an application:

By default, the project template that is loaded will display a split view, with the Visual Studio designer on the left-hand side and the XAML markup on the right-hand side. The MainPage.xaml file should already be loaded, with a lot of initial adjustments to support Windows Phone form factors. Microsoft makes sure that the developer gets the best layout to start with, so the important thing you need to look at is defining the content inside the ContentPanel property, which represents the workspace area of the page. The Visual Studio template for Windows Phone 8 already gives you a lot of hints on how to start writing your first app. The comments indicate where to start and how the project template behaves on code edits in XAML.

Now let's define some XAML for the page. We will create a small page and use MVVM to connect to the data. For simplicity, we use dummy data to show on screen. Let's create a login screen for the application to start with. Add a new page, call it Login.xaml, and add the following code to the ContentPanel defined inside the page:

   <Grid x:Name="ContentPanel" Grid.Row="1" Margin="12,0,12,0" VerticalAlignment="Center">
     <Grid.RowDefinitions>
       <RowDefinition Height="Auto"/>
       <RowDefinition Height="Auto" />
       <RowDefinition Height="Auto" />
     </Grid.RowDefinitions>
     <Grid.ColumnDefinitions>
       <ColumnDefinition />
       <ColumnDefinition />
     </Grid.ColumnDefinitions>
     <TextBlock Text="UserId" Grid.Row="0" Grid.Column="0" HorizontalAlignment="Right" VerticalAlignment="Center"/>
     <TextBox Text="{Binding UserId, Mode=TwoWay}" Grid.Row="0" Grid.Column="1" InputScope="Text"/>
     <TextBlock Text="Password" Grid.Row="1" Grid.Column="0" HorizontalAlignment="Right" VerticalAlignment="Center"/>
     <PasswordBox x:Name="txtPassword" Grid.Row="1" Grid.Column="1" PasswordChanged="txtPassword_PasswordChanged"/>
     <Button Command="{Binding LoginCommand}" Content="Login" Grid.Row="2" Grid.Column="0" />
     <Button Command="{Binding ClearCommand}" Content="Clear" Grid.Row="2" Grid.Column="1" />
   </Grid>

In the preceding UI design, we added a TextBox and a PasswordBox inside the ContentPanel. Each TextBox has an InputScope property, which you can set to specify the behavior of the input. We define it as Text, which specifies that the TextBox can hold any textual data. The PasswordBox takes any input from the user, but shows asterisks (*) instead of the actual data. The actual data is stored in an encrypted format inside the control and can only be recovered using its Password property.

We are going to follow the MVVM pattern to design the application. We create a folder named Model in the solution and put a LoginDataContext class in it. This class is used to generate and validate the login for the UI.
Implement the INotifyPropertyChanged interface on the class; this allows the properties to participate in binding with the corresponding DependencyProperty of each control, thereby interacting to and fro with the UI. We create properties for UserId, Password, and Status, as shown in the following code:

   private string userid;
   public string UserId
   {
       get { return userid; }
       set
       {
           userid = value;
           this.OnPropertyChanged("UserId");
       }
   }

   private string password;
   public string Password
   {
       get { return password; }
       set
       {
           password = value;
           this.OnPropertyChanged("Password");
       }
   }

   public bool Status { get; set; }

You can see in the preceding code that each property setter raises the PropertyChanged event through OnPropertyChanged. This ensures that updates to the properties are reflected in the UI controls. Next, add the following command properties:

   public ICommand LoginCommand
   {
       get
       {
           return new RelayCommand((e) =>
           {
               this.Status = this.UserId == "Abhishek" && this.Password == "winphone";
               if (this.Status)
               {
                   var rootframe = App.Current.RootVisual as PhoneApplicationFrame;
                   rootframe.Navigate(new Uri(string.Format(
                       "/FirstPhoneApp;component/MainPage.xaml?name={0}", this.UserId),
                       UriKind.Relative));
               }
           });
       }
   }

   public ICommand ClearCommand
   {
       get
       {
           return new RelayCommand((e) =>
           {
               this.UserId = this.Password = string.Empty;
           });
       }
   }

With these, we define two more properties of type ICommand. The UI Button control implements the command pattern and uses the ICommand interface to invoke a command. The RelayCommand used in the code is an implementation of the ICommand interface that invokes the supplied action (a minimal sketch of such a class follows these steps).

Now let's bind the Text property of the TextBox in XAML to the UserId property, and make it a TwoWay binding. The binder automatically subscribes to the PropertyChanged event. When the UserId property is set and the PropertyChanged event is raised, the UI automatically receives the change notification, which updates the text in the UI. Similarly, we add two buttons, name them Login and Clear, and bind them to the LoginCommand and ClearCommand properties, as shown in the following code:

   <Button Command="{Binding LoginCommand}" Content="Login" Grid.Row="2" Grid.Column="0" />
   <Button Command="{Binding ClearCommand}" Content="Clear" Grid.Row="2" Grid.Column="1" />

In the preceding XAML, we defined the two buttons and specified a command for each of them. We create another page so that when the login is successful, we can navigate from the Login page to somewhere else. Let's make use of the existing MainPage.xaml file as follows:

   <StackPanel x:Name="TitlePanel" Grid.Row="0" Margin="12,17,0,28">
     <TextBlock Text="MY APPLICATION" x:Name="txtApplicationDescription" Style="{StaticResource PhoneTextNormalStyle}" Margin="12,0"/>
     <TextBlock Text="Enter Details" Margin="9,-7,0,0" Style="{StaticResource PhoneTextTitle1Style}"/>
   </StackPanel>

We add the preceding XAML to show the message that is passed from the Login screen. We create another class, name it MainDataContext, and define a property in it that will hold the data to be displayed on the screen. We then go to Login.xaml.cs, created as the code-behind of Login.xaml, create an instance of LoginDataContext, and assign it to the DataContext of the page. We assign this right after the InitializeComponent() call in the page constructor, as shown in the following code:

   this.DataContext = new LoginDataContext();

Now, go to Properties in the Solution Explorer pane, open the WMAppManifest file, and specify Login.xaml as the navigation page.
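The RelayCommand type referenced in these steps is not part of the SDK, and its implementation is not shown in this article; the following is only a minimal sketch of what such a class could look like:

   // Requires: using System; using System.Windows.Input;
   public class RelayCommand : ICommand
   {
       private readonly Action<object> execute;

       public RelayCommand(Action<object> execute)
       {
           this.execute = execute;
       }

       // The command source checks this event for changes; this simple sketch
       // never changes its answer, so the event is left unused.
       public event EventHandler CanExecuteChanged;

       public bool CanExecute(object parameter)
       {
           return true;
       }

       public void Execute(object parameter)
       {
           this.execute(parameter);
       }
   }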
Once the start page is set, if you run the application in any of the emulators available with Visual Studio, you will see what is shown in the following screenshot:

You can enter data in the UserId and Password fields and click on Login, but nothing happens. Put a breakpoint on LoginCommand and press Login again with the credentials, and you will see that the Password property is never set and evaluates to null. Note that PasswordBox in XAML does not support binding to its Password property. To deal with this, we define a PasswordChanged event handler on the PasswordBox and specify the following code:

   private void txtPassword_PasswordChanged(object sender, RoutedEventArgs e)
   {
       this.logindataContext.Password = txtPassword.Password;
   }

The preceding code ensures that the password reaches the ViewModel properly. Finally, clicking on Login, you will see that Status is set to true. However, our idea is to move from the Login screen to MainPage.xaml. To do this, we change the LoginCommand property to navigate the page, as shown in the following code:

   if (this.Status)
   {
       var rootframe = App.Current.RootVisual as PhoneApplicationFrame;
       rootframe.Navigate(new Uri(string.Format(
           "/FirstPhoneApp;component/MainPage.xaml?name={0}", this.UserId),
           UriKind.Relative));
   }

Each Windows Phone app contains an application frame (PhoneApplicationFrame) that is used to show the UI. The application frame exposes the Navigate method to move from one page to another; it uses NavigationService to redirect the page to the URI provided. Here in the code, after authentication, we pass UserId as a query string to MainPage.

We design the MainPage.xaml file to include a Pivot control. A Pivot control is just like a traditional tab control, but looks much better in a phone environment. Let's add the following code:

   <phone:Pivot>
     <phone:PivotItem Header="Main">
       <StackPanel Orientation="Vertical">
         <TextBlock Text="Choose your avatar" />
         <Image x:Name="imgSelection" Source="{Binding AvatarImage}"/>
         <Button x:Name="btnChoosePhoto" ClickMode="Release" Content="Choose Photo" Command="{Binding ChoosePhoto}" />
       </StackPanel>
     </phone:PivotItem>
     <phone:PivotItem Header="Task">
       <StackPanel>
         <phone:LongListSelector ItemsSource="{Binding LongList}" />
       </StackPanel>
     </phone:PivotItem>
   </phone:Pivot>

The phone prefix refers to a namespace that has been added automatically in the page header, where the Pivot class lives. In the previously defined Pivot, there are two PivotItems with the headers Main and Task. When Main is selected, it allows you to choose a photo from the MediaLibrary, and the image is displayed in the Image control. The ChoosePhoto command defined inside MainDataContext sets the image as its source, as shown in the following code:

   public ICommand ChoosePhoto
   {
       get
       {
           return new RelayCommand((e) =>
           {
               PhotoChooserTask pTask = new PhotoChooserTask();
               pTask.Completed += pTask_Completed;
               pTask.Show();
           });
       }
   }

   void pTask_Completed(object sender, PhotoResult e)
   {
       if (e.TaskResult == TaskResult.OK)
       {
           var bitmap = new BitmapImage();
           bitmap.SetSource(e.ChosenPhoto);
           this.AvatarImage = bitmap;
       }
   }

In the preceding code, the RelayCommand that is invoked when the button is clicked uses PhotoChooserTask to select an image from the MediaLibrary, and that image is shown through the AvatarImage property bound to the image source. On the other hand, the other PivotItem shows a LongListSelector whose ItemsSource is bound to a long list of strings, as shown in the following code:

   public List<string> LongList
   {
       get
       {
           this.longList = this.longList ?? this.LoadList();
           return this.longList;
       }
   }
The long list can be anything that needs to be shown in the list control.

How it works...
Windows Phone, being an XAML-based technology, uses Silverlight to generate the UI, and its controls support the Model-View-ViewModel (MVVM) pattern. Each of the controls present in the Windows Phone environment exposes a number of DependencyProperties. A DependencyProperty is a special type of property that supports data binding. When bound to another CLR object, these properties try to find the INotifyPropertyChanged interface and subscribe to the PropertyChanged event. When the data is modified in the controls, the actual bound object gets modified automatically by the dependency property system, and vice versa.

Similar to the normal DependencyProperties, there is a Command property that allows you to call a method. The Command property accepts an ICommand implementation, which wraps the action that is executed when the command is invoked. The RelayCommand here is an implementation of the ICommand interface, which can be bound to the Command property of a Button.

There's more...
Now let's talk about some other options, or possibly some pieces of general information that are relevant to this task.

Using ApplicationBar on the app
Just like any modern smartphone, Windows Phone provides a standard way of communicating with any application. Each application can have a standard set of icons at the bottom of the screen, which enable the user to perform some actions on the application. The ApplicationBar is present at the bottom of applications across the operating system, and hence people tend to expect commands to be placed on the ApplicationBar rather than on the application itself, as shown in the following screenshot. The ApplicationBar takes 72 pixels of height, which cannot be modified by code. When an application is open, the application bar is shown at the bottom of the screen.

The preceding screenshot shows how the ApplicationBar is laid out with two buttons, login and clear. Each ApplicationBar can also have a number of menu items for additional commands. The menu can be opened by tapping the … button on the right-hand side of the ApplicationBar.

A page in Windows Phone allows you to define one application bar. There is a property called ApplicationBar on PhoneApplicationPage that lets you define the ApplicationBar of that particular page, as shown in the following code:

   <phone:PhoneApplicationPage.ApplicationBar>
     <shell:ApplicationBar>
       <shell:ApplicationBarIconButton Click="ApplicationBarIconButton_Click" Text="Login" IconUri="/Assets/next.png"/>
       <shell:ApplicationBarIconButton Click="ApplicationBarIconButtonSave_Click" Text="clear" IconUri="/Assets/delete.png"/>
       <shell:ApplicationBar.MenuItems>
         <shell:ApplicationBarMenuItem Click="about_Click" Text="about" />
       </shell:ApplicationBar.MenuItems>
     </shell:ApplicationBar>
   </phone:PhoneApplicationPage.ApplicationBar>

In the preceding code, we defined two ApplicationBarIconButtons. Each of them defines a command item placed on the ApplicationBar. The ApplicationBar.MenuItems property allows us to add menu items to the application. There can be a maximum of four application bar buttons and four menus per page.

The ApplicationBar buttons also use a special type of icon. A number of these icons are installed with the SDK and can be used in the application. They can be found at DriveName\Program Files\Microsoft SDKs\Windows Phone\v8.0\Icons, with separate folders for the dark and light themes. It should be noted that ApplicationBar buttons do not allow command bindings; a common workaround is to forward their Click events to the ViewModel commands, as sketched below.
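The following is only a sketch of that workaround, reusing the handler names from the preceding XAML and assuming the page's DataContext is the LoginDataContext shown earlier:

   private void ApplicationBarIconButton_Click(object sender, EventArgs e)
   {
       // Forward the app bar click to the same command the Login button uses.
       var viewModel = (LoginDataContext)this.DataContext;
       if (viewModel.LoginCommand.CanExecute(null))
       {
           viewModel.LoginCommand.Execute(null);
       }
   }

   private void ApplicationBarIconButtonSave_Click(object sender, EventArgs e)
   {
       // Forward the clear button to the ClearCommand defined on the ViewModel.
       var viewModel = (LoginDataContext)this.DataContext;
       if (viewModel.ClearCommand.CanExecute(null))
       {
           viewModel.ClearCommand.Execute(null);
       }
   }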
Tombstoning
When dealing with Windows Phone applications, there are some special things to consider. When a user navigates out of the application, the application is transferred to a dormant state, where all the pages and their state are still in memory but execution is completely stopped. When the user navigates back to the application, the state of the application is resumed and the application is activated again. Sometimes, the app may also get tombstoned after the user navigates away from it. In this case, the app is not preserved in memory, but some information about the app is stored. Once the user comes back to the app, the application needs to be restored and resumed in such a way that the user gets the same state that he or she left. In the following figure, you can see the entire process:

There are four states defined. The first one is the Not Running state, where the process does not exist in memory. The Activated state is when the app is tapped by the user. When the user moves out of the app, it goes from Suspending to Suspended. From there it can be reactivated, or it will be terminated automatically after a certain time.

Let's look at the Login screen, where the login page might sometimes get tombstoned while the user is entering the user ID and password. To store the user state data before tombstoning, we use the PhoneApplicationPage. The idea is to serialize the whole data model once the user navigates away from the page and to retrieve the page state again when the user navigates back. Let's annotate the UserId and Password properties of LoginDataContext with DataMember, and LoginDataContext itself with DataContract, as shown in the following code:

   [DataContract]
   public class LoginDataContext : PropertyBase
   {
       private string userid;
       [DataMember]
       public string UserId
       {
           get { return userid; }
           set
           {
               userid = value;
               this.OnPropertyChanged("UserId");
           }
       }

       private string password;
       [DataMember]
       public string Password
       {
           get { return password; }
           set
           {
               password = value;
               this.OnPropertyChanged("Password");
           }
       }
   }

The DataMember attribute indicates that the properties can be serialized. As the user types, these properties get filled with data, so that when the user navigates away, the model always has the latest data.

In the Login page, we define a field called _isNewPageInstance initialized to false, and set it to true in the constructor. This indicates that _isNewPageInstance is true only when the page has just been instantiated. Now, when the user navigates away from the page, OnNavigatedFrom gets called. If the user navigates away from the page, we save the ViewModel into State, as shown in the following code:

   protected override void OnNavigatedFrom(NavigationEventArgs e)
   {
       base.OnNavigatedFrom(e);
       if (e.NavigationMode != System.Windows.Navigation.NavigationMode.Back)
       {
           // Save the ViewModel variable in the page's State dictionary.
           State["ViewModel"] = logindataContext;
       }
   }
State[""ViewModel""] = logindataContext; } } Once DataModel is saved in the State object, it is persistent and can be retrieved later on when the application is resumed as follows: protected override void OnNavigatedTo(NavigationEventArgs e) { base.OnNavigatedTo(e); if (_isNewPageInstance) { if (this.logindataContext == null) { if (State.Count > 0) { this.logindataContext = (LoginDataContext)State[""ViewModel""]; } else { this.logindataContext = new LoginDataContext(); } } DataContext = this.logindataContext; } _isNewPageInstance = false; } When the application is resumed from tombstoning, it calls OnNavigatedTo and retrieves DataModel back from the state. Summary In this article, we learned device application development with the Windows Phone environment. It provided us with simple solutions to some of the common problems when developing a Windows Phone application. Resources for Article: Further resources on this subject: Layout with Ext.NET [article] ASP.NET: Creating Rich Content [article] ASP.NET: Using jQuery UI Widgets [article]