How-To Tutorials - Web Development

1797 Articles

Making specs more concise (Intermediate)

Packt
13 Sep 2013
6 min read
So far, we've written specifications that work in the spirit of unit testing, but we're not yet taking advantage of any of the important features of RSpec to make writing tests more fluid. The specs illustrated so far closely resemble unit testing patterns and have multiple assertions in each spec.

How to do it...
Refactor our specs in spec/lib/location_spec.rb to make them more concise:

require "spec_helper"

describe Location do
  describe "#initialize" do
    subject { Location.new(:latitude => 38.911268, :longitude => -77.444243) }

    its (:latitude) { should == 38.911268 }
    its (:longitude) { should == -77.444243 }
  end
end

While running the spec, you see a clean output because we've separated multiple assertions into their own specifications:

Location
  #initialize
    latitude
      should == 38.911268
    longitude
      should == -77.444243

Finished in 0.00058 seconds
2 examples, 0 failures

The preceding output requires either that the .rspec file contain the --format doc line, or that the --format doc argument be passed when executing rspec on the command line. The default output format prints dots (.) for passing tests, asterisks (*) for pending tests, E for errors, and F for failures.

It is time to add something meatier. As part of our project, we'll want to determine whether a Location is within a certain mile radius of another point. In spec/lib/location_spec.rb, we'll write some tests, starting with a new block called context. The first spec we want to write is the happy path test. Then, we'll write tests to drive out other states. I am going to re-use our Location instance for multiple examples, so I'll refactor that into another new construct, a let block:

require "spec_helper"

describe Location do
  let(:latitude) { 38.911268 }
  let(:longitude) { -77.444243 }
  let(:air_space) { Location.new(:latitude => 38.911268, :longitude => -77.444243) }

  describe "#initialize" do
    subject { air_space }

    its (:latitude) { should == latitude }
    its (:longitude) { should == longitude }
  end
end

Because we've just refactored, we'll execute rspec and see the specs pass. Now, let's spec out a Location#near? method by writing the code we wish we had:

  describe "#near?" do
    context "when within the specified radius" do
      subject { air_space.near?(latitude, longitude, 1) }

      it { should be_true }
    end
  end
end

Running rspec now results in failure because there's no Location#near? method defined. The following is the naive implementation that passes the test (in lib/location.rb):

def near?(latitude, longitude, mile_radius)
  true
end

Now, we can drive a failure case, which will force a real implementation, in spec/lib/location_spec.rb within the describe "#near?" block:

context "when outside the specified radius" do
  subject { air_space.near?(latitude * 10, longitude * 10, 1) }

  it { should be_false }
end

Running the specs now results in the expected failure.
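For reference, the specs above assume a Location class roughly along the following lines. The article never shows the initializer itself, so this is only a minimal sketch of what lib/location.rb might contain at this point:

class Location
  attr_reader :latitude, :longitude

  # Accepts an options hash, e.g. Location.new(:latitude => 38.911268, :longitude => -77.444243)
  def initialize(options = {})
    @latitude  = options[:latitude]
    @longitude = options[:longitude]
  end
end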
The following is a passing implementation of the haversine formula in lib/location.rb that satisfies both cases:

R = 3_959 # Earth's radius in miles, approx

def near?(lat, long, mile_radius)
  to_radians = Proc.new { |d| d * Math::PI / 180 }
  dist_lat = to_radians.call(lat - self.latitude)
  dist_long = to_radians.call(long - self.longitude)
  lat1 = to_radians.call(self.latitude)
  lat2 = to_radians.call(lat)
  a = Math.sin(dist_lat/2) * Math.sin(dist_lat/2) +
      Math.sin(dist_long/2) * Math.sin(dist_long/2) *
      Math.cos(lat1) * Math.cos(lat2)
  c = 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1-a))
  (R * c) <= mile_radius
end

Refactor both of the previous tests to be more expressive by utilizing predicate matchers:

describe "#near?" do
  context "when within the specified radius" do
    subject { air_space }

    it { should be_near(latitude, longitude, 1) }
  end

  context "when outside the specified radius" do
    subject { air_space }

    it { should_not be_near(latitude * 10, longitude * 10, 1) }
  end
end

Now that we have a passing spec for #near?, we can alleviate a problem with our implementation: the #near? method is too complicated, and it could be a pain to maintain in future. Refactor for ease of maintenance while ensuring that the specs still pass:

R = 3_959 # Earth's radius in miles, approx

def near?(lat, long, mile_radius)
  loc = Location.new(:latitude => lat, :longitude => long)
  R * haversine_distance(loc) <= mile_radius
end

private

def to_radians(degrees)
  degrees * Math::PI / 180
end

def haversine_distance(loc)
  dist_lat = to_radians(loc.latitude - self.latitude)
  dist_long = to_radians(loc.longitude - self.longitude)
  lat1 = to_radians(self.latitude)
  lat2 = to_radians(loc.latitude)
  a = Math.sin(dist_lat/2) * Math.sin(dist_lat/2) +
      Math.sin(dist_long/2) * Math.sin(dist_long/2) *
      Math.cos(lat1) * Math.cos(lat2)
  2 * Math.atan2(Math.sqrt(a), Math.sqrt(1-a))
end

Finally, run rspec again and see that the tests continue to pass. A successful refactor!

How it works...
The subject block takes the return value of the block (a new instance of Location in the previous example) and binds it to a locally scoped variable named subject. Subsequent it and its blocks can refer to that subject variable. Furthermore, the its blocks implicitly operate on the subject variable to produce more concise tests. Here is an example illustrating how subject is used to produce easier-to-read tests:

describe "Example" do
  subject { { :key1 => "value1", :key2 => "value2" } }

  it "should have a size of 2" do
    subject.size.should == 2
  end
end

We can use subject from within the it block, and it will refer to the anonymous hash returned by the subject block. In the preceding test, we could have been more concise with an its block:

its (:size) { should == 2 }

We're not limited to just sending symbols to an its block; we can use strings too:

its ('size') { should == 2 }

When there is an attribute of subject you want to assert but the value cannot easily be turned into a valid Ruby symbol, you'll need to use a string. This string is not evaluated as Ruby code; it's only evaluated against the subject under test as a method of that class. Hashes, in particular, let you pass an anonymous array containing the key whose value you want to assert:

its ([:key1]) { should == "value1" }

There's more...
In the previous code examples, another block known as the context block was presented. The context block is a grouping mechanism for associating tests. For example, you may have a conditional branch in your code that changes the output of a method. Here, you may use two context blocks, one for each of the two values. In our example, we're separating the happy path (when a given point is within the specified mile radius) from the alternative (when a given point is outside the specified mile radius). context is a useful construct that allows you to declare let and other blocks within it, and those blocks apply only within the scope of the containing context.

Summary
This article demonstrated idiomatic RSpec code that makes good use of the RSpec Domain Specific Language (DSL).

Resources for Article:
Further resources on this subject:
Quick start - your first Sinatra application [Article]
Behavior-driven Development with Selenium WebDriver [Article]
External Tools and the Puppet Ecosystem [Article]

Introducing the Windows Store

Packt
12 Sep 2013
17 min read
Developing a Windows Store app is not just about design, coding, and markup. A very essential part of the process that leads to a successful app is done on the Windows Store Dashboard. It is the place where you submit the app, pave its way to the market, and monitor how it is doing there. Also, it is the place where you can get all the information about your existing apps and where you can plan your next app.

The submission process is broken down into seven phases. If you haven't already opened a Windows Store developer account, now is the time to do so because you will need it to access your Dashboard. Before you sign up, make sure you have a credit card. The Windows Store requires a credit card to open a developer account even if you have a registration code that entitles you to a free registration.

Once signed in, locate your app listed on the home page under the Apps in progress section and click on Edit. This will direct you to the Release Summary page and the app will be titled AppName: Release 1. The release number will auto-increment each time you submit a new release for the same app. The Release Summary page lists the steps that will get your app ready for Windows Store certification. On this page, you can enter all the info about your Windows Store app and upload its packages for certification. At the moment you will notice that the two buttons at the bottom of the page, labeled Review release info and Submit app for certification, are disabled and will remain so until all the previous steps have been marked Complete. The submission progress can always be saved to be resumed later, so it is not necessarily a one-time mission. We'll go over these steps one by one:

App name: This is the first step and it includes reserving a unique name for the app.

Selling details: This step includes selecting the following:
- The app price tier option sets the price of your app (for example, free or 1.99 USD).
- The free trial period option is the number of days the customer can use the app before they start paying to use it. This option is enabled only if the app price tier is not set to Free.
- The Market option selects where you would like the app to be listed in the Windows Store. Bear in mind that if your app isn't free, your developer account must have a valid tax profile for each country/region you select.
- The release date option specifies the earliest date when the app will be listed in the Windows Store. The default option is to release as soon as the app passes certification.
- The App category and subcategory option indicates where your app will be listed in the Store, which in turn lists the apps under Categories.
- The Hardware requirements option specifies the minimum requirements for the DirectX feature level and the system RAM.
- The Accessibility option is a checkbox that, when checked, indicates that the app has been tested to meet accessibility guidelines.

Services: In this step, you can add services to your app such as Windows Azure Mobile Services and Live Services. You can also provide products and features that the customer can buy from within the app, called In-app offers.

Age rating and rating certificates: In this step, you can set an age rating for the app from the available Windows Store age ratings. Also, you can upload country/region-specific rating certificates in case your app is a game.

Cryptography: In this step, you specify if your app calls, supports, contains, or uses cryptography or encryption.
The following are some examples of how an app might apply cryptography or encryption:
- Use of a digital signature such as authentication or integrity checking
- Encryption of any data or files that your app uses or accesses
- Key management, certificate management, or anything that interacts with a public key infrastructure
- Using a secure communication channel such as NTLM, Kerberos, Secure Sockets Layer (SSL), or Transport Layer Security (TLS)
- Encrypting passwords or other forms of information security
- Copy protection or digital rights management (DRM)
- Antivirus protection

Packages: In this step, you can upload your app to the Store by uploading the .appxupload file that was created in Visual Studio during the package-creation process. We will shortly see how to create an app package. The latest upload will show on the Release Summary page in the packages box and should be labeled as Validation Complete.

Description: In this step you add a brief description (mandatory) of what the app does for your customers. The description has a 10,000-character limit and will be displayed in the details page of the app's listing in the Windows Store. Besides the description, this step contains the following features:
- App features: This feature is optional. It allows you to list up to 20 of the app's key features.
- Screenshots: This feature is mandatory and requires you to provide at least one .png file image; the first can be a graphic that represents your app, but all the other images must be screenshots with a caption taken directly from the app.
- Notes: This feature is optional. Enter any other info that you think your customer needs to know; for example, changes in an update.
- Recommended hardware: This feature is optional. List the hardware configurations that the app will need to run.
- Keywords: This feature is optional. Enter keywords related to the app to help its listing appear in search results.
- Copyright and trademark info: This feature is mandatory. Enter the copyright and trademark info that will be displayed to customers in the app's listing page.
- Additional license terms: This feature is optional. Enter any changes to the Standard App License Terms that the customers accept when they acquire this app.
- Promotional images: This feature is optional. Add images that the editors use to feature apps in the Store.
- Website: This feature is optional. Enter the URL of the web page that describes the app, if any.
- Support contact info: This feature is mandatory. Enter the support contact e-mail address or URL of the web page where your customers can reach out for help.
- Privacy policy: This feature is optional. Enter the URL of the web page that contains the privacy policy.

Notes to testers: This is the last step and it includes adding notes about this specific release for those who will review your app from the Windows Store team. The info will help the testers understand and use this app in order to complete their testing quickly and certify the app for the Windows Store.

Each step will remain disabled until the preceding one is completed, and the steps that are in progress are labeled with the approximate time (in minutes) it will take you to finish them. Whenever the work in a single step is done, it will be marked Complete on the summary page, as shown in the following screenshot.

Submitting the app for certification
After all the steps are marked Complete, you can submit the app for certification.
Once you click on Submit for certification, you will receive an e-mail notification that the Windows Store has received your app for certification. The dashboard will submit the app and you will be directed to the Certification status page. There, you can view the progress of the app during the certification process, which includes the following steps:

Pre-processing: This step checks that you have entered all the required details that are needed to publish the app.
Security tests: This step tests your app against viruses and malware.
Technical compliance: This step uses the Windows App Certification Kit to check whether the app complies with the technical policies. The same assessment can be run locally using Visual Studio, which we will see shortly, before you upload your package.
Content compliance: This step is done by testers from the Store team who check whether the content available in the app complies with the content policies set by Microsoft.
Release: This step involves releasing the app; it shouldn't take much time unless the publish date you've specified in Selling details is in the future, in which case the app will remain in this stage until that date arrives.
Signing and publishing: This is the final step in the certification process. At this stage, the packages you submitted will be signed with a trusted certificate that matches the technical details of your developer account, thus guaranteeing to potential customers and viewers that the app is certified by the Windows Store.

The following screenshot shows the certification process on the Windows Store Dashboard.

There is no need to wait on that page; you can click on the Go to dashboard button and you will be redirected to the My apps page. In the box containing the app you just submitted, you will notice that the Edit and Delete links are gone and instead there is only the Status link, which will take you to the Certification status page. Additionally, a Notifications section will appear on this page and will list status notifications about the app you just submitted, for example:

BookTestApp: Release 1 submitted for certification. 6/4/2013

When the certification process is completed, you will be notified via e-mail of the result. Also, a notification will be added to the dashboard main page showing the result of the certification, either failed or succeeded, with a link to the certification report. In case the app fails, the certification report will show you which parts need revisiting. Moreover, there are some resources to help you identify and fix the problems and errors that might arise during the certification process; these resources can be found at the Windows Dev Center page for Windows Store apps at the following location:

http://msdn.microsoft.com/en-us/library/windows/apps/jj657968.aspx

Also, you can always check your dashboard to see the status of your app during certification. After the certification process is completed successfully, the app package will be published to the Store with all the relevant data that will be visible in your app listing page. This page can be accessed by millions of Windows 8 users who will in turn be able to find, install, and use your app. Once the app has been published to the Store and it's up and running, you can start collecting telemetry data on how it is doing in the Store; these metrics include information on how many times the app has been launched, how long it has been running, and whether it is crashing or encountering a JavaScript exception.
Once you enable telemetry data collection, the Store will retrieve this info for your apps, analyze it, and summarize it in very informative reports on your dashboard. Now that we have covered almost everything you need to know about the process of submitting your app to the Windows Store, let us see what needs to be done in Visual Studio.

The Store within Visual Studio
The Windows Store can be accessed from within Visual Studio using the Store menu. Not all the things that we did on the dashboard can be done here, but a few very important functionalities, such as app package creation, are provided by this menu. The Store menu can be located under the Project item in the menu bar in Visual Studio 2012 Ultimate, or, if you are using Visual Studio 2012 Express, you can find it directly in the menu bar; it will appear only if you're working on a Windows Store project or solution. We will get to see the commands provided by the Store menu in detail, and the following screenshot shows how the menu looks:

The command options in the Store menu are as follows:

Open Developer Account...: This option will open a web page that directs you to the Windows Dev Center for Windows Store apps, where you can obtain a developer account for the Store.
Reserve App Name...: This option will direct you to your Windows Store Dashboard, specifically to the Submit an app page, where you can start with the first step, reserving an app name.
Acquire Developer License...: This option will open up a dialog window that prompts you to sign in with your Microsoft Account; after you sign in, it will retrieve your developer license or renew it if you already have one.
Edit App Manifest: This option will open a tab with Manifest Designer, so you can edit the settings in the app's manifest file.
Associate App with the Store...: This option will open a wizard-like window in Visual Studio containing the steps needed to associate an app with the Store. The first step will prompt you to sign in; afterwards, the wizard will retrieve the apps registered with the Microsoft Account you used to sign in. Select an app and the wizard will automatically download the following values to the app's manifest file for the current project on the local computer:
- Package's display name
- Package's name
- Publisher ID
- Publisher's display name
Capture Screenshot...: This option will build the current app project and launch it in the simulator instead of the Start screen. Once the simulator opens, you can use the Copy screenshot button on the simulator sidebar. This button takes a screenshot of the running app and saves the image as a .png file.
Create App Package...: This option will open a window containing the Create App Packages wizard that we will see shortly.
Upload App Package...: This option will open a browser that directs you to the Release Summary page in the Windows Store Dashboard, if your Store account is all set and your app is registered. Otherwise, it will just take you to the sign-in page. On the Release Summary page, you can select Packages and from there upload your app package.

Creating an App Package
One of the most important utilities in the Store menu is app package creation, which will build and create a package for the app that we can upload to the Store at a later stage. This package contains all the app-specific and developer-specific details that the Store requires.
Moreover, developers do not have to worry about any of the intricacies of the whole package-creation process, which is abstracted for us and available via a wizard-like window. In the Create App Packages wizard, we can create an app package for the Windows Store directly, or create one to be used for testing or local distribution. This wizard will prompt you to specify metadata for the app package. The following screenshot shows the first two steps involved in this process.

In the first step, the wizard will ask you if you want to build packages to upload to the Windows Store; choose Yes if you want to build a package for the Store, or choose No if you want a package for testing and local use. Taking the first scenario into consideration, click on Sign In to proceed and complete the sign-in process using your Microsoft Account. After a successful sign-in, the wizard will prompt you to select the app name (step 2 of the preceding screenshot), either by clicking on the apps listed in the wizard or by choosing the Reserve Name link that will direct you to the Windows Store Dashboard to complete the process and reserve a new app name. The following screenshot shows step 3 and step 4.

Step 3 contains the Select and Configure Packages section, in which we select the Output location that points to where the package files will be created. Also, in this section we can enter a version number for this package or choose to make it auto-increment each time we package the app. Additionally, we can select the build configuration we want for the package from the Neutral, ARM, x64, and x86 options; by default, the current active project platform will be selected and a package will be produced for each configuration type selected. The last option in this section is the Include public symbol files option. Selecting this option will generate the public symbol files (.pdb) and add them to the package, which will help the Store later in analyzing your app and will be used to map crashes of your app. Finally, click on Create and wait while the packaging is being processed. Once completed, the Package Creation Completed section appears (step 4) and shows Output location as a link that will direct you to the package files. Also, there is a button to directly launch the Windows App Certification Kit, which will validate the app package against the Store requirements and generate a report of the validation. The following screenshot shows the window containing the Windows App Certification Kit process.

Alternatively, there is a second scenario for creating an app package, which is aimed more at testing. It is identical to the process we just saw, except that you have to choose No on the first page of the wizard and there is no need to sign in with the Microsoft Account. This option will end the wizard when the package creation has completed and display the link to the output folder, but you will not be able to launch the Windows App Certification Kit. The packages created with this option can only be used on a computer that has a developer license installed. This scenario will be used more often, since the package for the Store should ideally be tested locally first. After creating the app package for testing or local distribution, you can install it on a local machine or device. Let's install the package locally.
Start the Create App Packages wizard; choose No in the first step, complete the wizard, and find the files of the app package just created in the output folder that you specified for the package location. Name this as PackageName_Test. This folder will contain an .appx file, a security certificate, a Windows PowerShell script, and other files. The Windows PowerShell script generated with the app package will be used to install the package for testing. Navigate to the output folder and install the app package: locate and select the script file named Add-AppDevPackage, then right-click and choose Run with PowerShell, as shown in the following screenshot.

Run the script and it will perform the following steps:
- It displays information about the Execution Policy Change and prompts you about changing the execution policy. Enter Y to continue.
- It checks whether you have a developer license; in case there isn't one, it will prompt you to get one.
- It checks and verifies whether the app package and the required certificates are present; if any item is missing, you will be notified to install them before the developer package is installed.
- It checks for and installs any dependency packages, such as the WinJS library.
- It displays the message Success: Your package was successfully installed. Press Enter to continue, and the window will close.

The aforementioned steps are shown in the following screenshot.

Once the script has completed successfully, you can look for your app on the Start screen and start it. Note that for users who are on a network and don't have permission to access the directory where the Add-AppDevPackage PowerShell script file is located, an error message might appear. This issue can be solved by simply copying the contents of the output folder to the local machine before running the script. Also, for any security-related issues, you might want to consult the Windows Dev Center for solutions.

Summary
In this article, we saw the ins and outs of the Windows Store Dashboard and we covered the steps of the app submission process leading to the publishing of the app in the Store. We also learned about the Store menu in Visual Studio and the options it provides to interact with the dashboard. Moreover, we learned how to create app packages and how to deploy the app locally for testing.

Resources for Article:
Further resources on this subject:
WPF 4.5 Application and Windows [Article]
HTML5 Canvas [Article]
Responsive Design with Media Queries [Article]

One-page Application Development

Packt
12 Sep 2013
10 min read
Model-View-Controller or MVC
Model-View-Controller (MVC) is a heavily used design pattern in programming. A design pattern is essentially a reusable solution that solves common problems in programming. For example, the Namespace and Immediately-Invoked Function Expressions are patterns that are used throughout this article. MVC is another pattern to help solve the issue of separating the presentation and data layers. It helps us keep our markup and styling outside of the JavaScript, keeping our code organized, clean, and manageable, all essential requirements for creating one-page applications. So let's briefly discuss the several parts of MVC, starting with models.

Models
A model is a description of an object, containing the attributes and methods that relate to it. Think of what makes up a song, for example the track's title, artist, album, year, duration, and more. In its essence, a model is a blueprint of your data.

Views
The view is a physical representation of the model. It essentially displays the appropriate attributes of the model to the user, using the markup and styles on the page. Accordingly, we use templates to populate our views with the data provided.

Controllers
Controllers are the mediators between the model and the view. The controller accepts actions and communicates information between the model and the view if necessary. For example, a user can edit properties on a model; when this is done the controller tells the view to update according to the user's updated information.

Relationships
The relationship established in an MVC application is critical to sticking with the design pattern. In MVC, theoretically, the model and view never speak with each other. Instead the controller does all the work; it describes an action, and when that action is called either the model, the view, or both update accordingly. This type of relationship is established in the following diagram:

This diagram explains a traditional MVC structure, especially that the communication between the controller and model is two-way; the controller can send data to/from the model and vice versa for the view. However, the view and model never communicate, and there's a good reason for that. We want to make sure our logic is contained appropriately; therefore, if we wanted to delegate events properly for user actions, then that code would go into the view. However, if we wanted to have utility methods, such as a getName method that combines a user's first name and last name appropriately, that code would be contained within a user model. Lastly, any sort of action that pertains to retrieving and displaying data would be contained in the controller.

Theoretically, this pattern helps us keep our code organized, clean, and efficient. In many cases this pattern can be directly applied, especially in many backend languages like Ruby, PHP, and Java. However, when we start applying this strictly to the frontend, we are confronted with many structural challenges. At the same time, we need this structure to create solid one-page applications. The following sections will introduce you to the libraries we will use to solve these issues and more.

Introduction to Underscore.js
One of the libraries we will be utilizing in our sample application will be Underscore.js. Underscore has become extremely popular in the last couple of years due to the many utility methods it provides developers without extending built-in JavaScript objects, such as String, Array, or Object.
While it provides many useful methods, the suite has also been optimized and tested across many of the most popular web browsers, including Internet Explorer. For these reasons, the community has widely adopted this library and continually supported it.

Implementation
Underscore is extremely easy to implement in our applications. In order to get Underscore going, all we need to do is include it on our page like so:

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
    <title></title>
    <meta name="description" content="">
    <meta name="viewport" content="width=device-width">
</head>
<body>
    <script src="//ajax.googleapis.com/ajax/libs/jquery/1.9.0/jquery.min.js"></script>
    <script src="//cdnjs.cloudflare.com/ajax/libs/underscore.js/1.4.3/underscore-min.js"></script>
</body>
</html>

Once we include Underscore on our page, we have access to the library at the global scope using the _ object. We can then access any of the utility methods provided by the library by doing _.methodName. You can review all of the methods provided by Underscore online (http://underscorejs.org/), where all methods are documented and contain samples of their implementation. For now, let's briefly review some of the methods we'll be using in our application.

_.extend
The extend method in Underscore is very similar to the extend method we have been using from Zepto (http://zeptojs.com/#$.extend). If we look at the documentation provided on Underscore's website (http://underscorejs.org/#extend), we can see that it takes multiple objects, with the first parameter being the destination object that gets returned once all objects are combined:

"Copy all of the properties in the source objects over to the destination object, and return the destination object. It's in-order, so the last source will override properties of the same name in previous arguments."

As an example, we can take a Song object and create an instance of it while also overriding its default attributes. This can be seen in the following example:

<script>
    function Song() {
        this.track = "Track Title";
        this.duration = 215;
        this.album = "Track Album";
    };

    var Sample = _.extend(new Song(), {
        'track': 'Sample Title',
        'duration': 0,
        'album': 'Sample Album'
    });
</script>

If we log out the Sample object, we'll notice that it has inherited from the Song constructor and overridden the default attributes track, duration, and album. Although we can improve the performance of inheritance using traditional JavaScript, using an extend method helps us focus on delivery. We'll look at how we can utilize this method to create a base architecture within our sample application later on in the article.

_.each
The each method is extremely helpful when we want to iterate over an Array or Object. In fact this is another method that we can find in Zepto and other popular libraries like jQuery. Although each library's implementation and performance is a little different, we'll be using Underscore's _.each method so that we can stick within our application's architecture without introducing new dependencies. As per Underscore's documentation (http://underscorejs.org/#each), the use of _.each is similar to other implementations:

"Iterates over a list of elements, yielding each in turn to an iterator function. The iterator is bound to the context object, if one is passed. Each invocation of iterator is called with three arguments: (element, index, list). If list is a JavaScript object, iterator's arguments will be (value, key, list). Delegates to the native forEach function if it exists."

Let's take a look at an example of using _.each with the code we created in the previous section. We'll loop through the instance of Sample and log out the object's properties, including track, duration, and album. Because Underscore's implementation allows us to loop through an Object just as easily as an Array, we can use this method to iterate over our Sample object's properties:

<script>
    function Song() {
        this.track = "Track Title";
        this.duration = 215;
        this.album = "Track Album";
    };

    var Sample = _.extend(new Song(), {
        'track': 'Sample Title',
        'duration': 0,
        'album': 'Sample Album'
    });

    _.each(Sample, function(value, key, list){
        console.log(key + ": " + value);
    });
</script>

The output from our log should look something like this:

track: Sample Title
duration: 0
album: Sample Album

As you can see, it's extremely easy to use Underscore's each method with arrays and objects. In our sample application, we'll use this method to loop through an array of objects to populate our page, but for now let's review one last important method we'll be using from Underscore's library.

_.template
Underscore has made it extremely easy for us to integrate templating into our applications. Out of the box, Underscore comes with a simple templating engine that can be customized for our purposes. In fact, it can also precompile your templates for easy debugging. Because Underscore's templating can interpolate variables, we can utilize it to dynamically change the page as we wish. The documentation provided by Underscore (http://underscorejs.org/#template) helps explain the different options we have when using templates:

"Compiles JavaScript templates into functions that can be evaluated for rendering. Useful for rendering complicated bits of HTML from JSON data sources. Template functions can both interpolate variables, using <%= … %>, as well as execute arbitrary JavaScript code, with <% … %>. If you wish to interpolate a value, and have it be HTML-escaped, use <%- … %>. When you evaluate a template function, pass in a data object that has properties corresponding to the template's free variables. If you're writing a one-off, you can pass the data object as the second parameter to template in order to render immediately instead of returning a template function."

Templating on the frontend can be difficult to understand at first; after all, we were used to querying a backend using AJAX and retrieving markup that would then be rendered on the page. Today, best practices dictate that we use RESTful APIs that send and retrieve data. So, theoretically, you should be working with data that is properly formed and can be interpolated. But where do our templates live, if not on the backend? Easily, in our markup:

<script type="tmpl/sample" id="sample-song">
    <section>
        <header>
            <h1><%= track %></h1>
            <strong><%= album %></strong>
        </header>
    </section>
</script>

Because the preceding script tag has a type the browser doesn't recognize as JavaScript, the browser won't try to execute its contents. And because we can still target the script tag using its ID, we can pick up its contents and then interpolate them with data using Underscore's template method:

<script>
    function Song() {
        this.track = "Track Title";
        this.duration = 215;
        this.album = "Track Album";
    };

    var Sample = _.extend(new Song(), {
        'track': 'Sample Title',
        'duration': 0,
        'album': 'Sample Album'
    });

    var template = _.template(Zepto('#sample-song').html(), Sample);
    Zepto(document.body).prepend(template);
</script>

The result of running the page would be the following markup:

<body>
    <section>
        <header>
            <h1>Sample Title</h1>
            <strong>Sample Album</strong>
        </header>
    </section>
    <!-- scripts and template go here -->
</body>

As you can see, the content from within the template is prepended to the body and the data interpolated, displaying the properties we wish to display; in this case the title and album name of the song. If this is a bit difficult to understand, don't worry about it too much; I myself had a lot of trouble trying to pick up the concept when the industry started moving into one-page applications that run off raw data (JSON). For now, these are the methods we'll be using consistently within the sample application to be built in this article. It is encouraged that you experiment with the Underscore.js library to discover some of the more advanced features that make your life easier, such as _.map, _.reduce, _.indexOf, _.debounce, and _.clone (a brief sketch of the first two follows below). However, let's move on to Backbone.js and how this library will be used to create our application.
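As a quick taste of two of those advanced methods, the following is a short, hedged sketch against the documented Underscore API; the song data here is purely illustrative and not taken from the article:

<script>
    // A small list of illustrative song objects (not from the article).
    var songs = [
        { track: 'First Track',  duration: 215 },
        { track: 'Second Track', duration: 188 },
        { track: 'Third Track',  duration: 242 }
    ];

    // _.map transforms each element, returning a new array of track titles.
    var titles = _.map(songs, function(song) {
        return song.track;
    });

    // _.reduce folds the list down to a single value: the total duration in seconds.
    var totalDuration = _.reduce(songs, function(memo, song) {
        return memo + song.duration;
    }, 0);

    console.log(titles);        // ["First Track", "Second Track", "Third Track"]
    console.log(totalDuration); // 645
</script>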

So, what is Ext JS?

Packt
12 Sep 2013
8 min read
JavaScript is a classless, prototype-oriented language, but Ext JS follows a class-based approach to make the code extensible and scalable over time. Class names can be grouped into packages with namespaces using the object property dot-notation (.). Namespaces allow developers to write structured and maintainable code, use libraries without the risk of overwriting functions, avoid cluttering the global namespace, and provide an ability to encapsulate the code.

The strength of the framework lies in its component design. The bundled, basic default components can be easily extended as per your needs, and the extended components can be re-used. A new component can also be created by combining one or more default components. The framework includes many default components such as windows, panels, toolbars, drop-down menus, menu bars, dialog boxes, grids, trees, and much more, each with their own configuration properties (configs), component properties, methods, events, and CSS classes. The configs are user-configurable at runtime while instantiating, whereas component properties are references to objects used internally by the class. Component properties belong to the prototype of the class and affect all the instances of the class. The properties of the individual components determine the look and feel. The methods help in achieving a certain action. User interaction triggers the equivalent Ext JS events apart from triggering the DOM events.

A cross-browser web application with a header, a footer, a left column section with links, a content area with a CSS grid/table (with add, edit, and delete actions for each row of the grid), and a form with a few text fields and a submit button can be created with ease using Ext JS's layout mechanism, a few default components, and the CSS theme. For the preceding application, the border layout can be used with the north region for the header, the south region for the footer, the west region for the left column links, and the center region for the content. The content area can have a horizontal layout, with the grid and form panel components with text fields and buttons. Creating the preceding application from scratch without using the framework will take a lot more time than it would take by using it. Moreover, this is just one screen, and as the development progresses with more and more features, incorporating new layouts and creating new components will be a tedious process. All the components, or a group of components with their layout, can be made into a custom component and re-used with different data (that is, the grid data can be modified with new data and re-used on a different page).

Developers need not worry about cross-platform compatibility issues, since the framework takes care of this, and they can concentrate on the core logic. The helper functions of the Ext.DomQuery class can be used for querying the DOM. Error handling can be done by using the Ext.Error class, which is a wrapper for the native JavaScript Error object. A simple web page with a minimal UI too can make use of this framework in many ways. Native JavaScript offers utility classes such as Array, Number, Date, Object, Function, and String, but is limited in what can be done with it across different browsers. Ext JS provides its own version of these classes that works in all the browsers along with offering extra functionality. Any Ext JS component can be added to an existing web page by creating an instance of it.
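The next paragraph describes doing exactly that with a tab component; as a hedged illustration of the pattern, a sketch using the standard Ext JS 4 API might look like the following (the div id 'tabs-container' is a placeholder chosen for this example, not taken from the article):

Ext.onReady(function() {
    // Render a simple tab panel into an existing <div id="tabs-container"> element.
    Ext.create('Ext.tab.Panel', {
        renderTo: 'tabs-container', // id of the existing div (illustrative)
        width: 400,
        height: 200,
        items: [
            { title: 'Tab 1', html: 'First tab content' },
            { title: 'Tab 2', html: 'Second tab content' }
        ]
    });
});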
For example, a tab feature can be added to an existing web page by creating a new Ext JS tab component and adding it to an existing div container, by referring the div element's id attribute to the renderTo config property of the tab. The backend communication with your server-side code can be done by using the simplified cross-browser Ext.Ajax class methods.

Ext JS 4 supports all major web browsers, from Internet Explorer 6 to the latest version of Google Chrome. The recommended browsers for development and debugging are Google Chrome 10+, Apple Safari 5+, and Mozilla Firefox 4+. Both commercial and open source licenses are available for Ext JS.

Installation and environment setup
In five easy steps, you can be ready with Ext JS and start the development.

Step 1 – What do you need?
You need the following components for the installation and environment setup:
- Web browser: Any of the leading browsers mentioned in the previous section. For this book, we will consider Mozilla Firefox with the Firebug debugger plugin installed.
- Web server: To start with, a local web server is not required, but it will be required if communication with a server is needed to make AJAX calls.
- Ext JS 4 SDK: Download the Ext JS bundle from http://www.sencha.com/products/extjs/download/. Click on the Download button on the left side of the page.

Step 2 – Installing the browser and debugger
Any supported browser mentioned in the previous section can be used for the tutorial. For simplicity and debugging options, we will use the latest Firefox with the Firebug debugger plugin. Download the latest Firefox from http://www.mozilla.org/en-US/firefox/fx/#desktop and Firebug from https://getfirebug.com/. Other browser debugging options are as follows:
- Google Chrome: Chrome Developer Tools (Tools | Developer tools)
- Safari: Go to Settings | Preferences | Advanced, select Show Develop menu in menu bar; navigate to Develop | Show Web Inspector
- Internet Explorer: Go to Tools | Developer Tools

Step 3 – Installing the web server
Install the web server and unpack Ext JS. The URLs that provide information for installing the Apache web server on various operating systems are as follows:
- The instructions for installing Apache on Windows can be found at http://httpd.apache.org/docs/current/platform/windows.html
- The instructions for installing Apache on Linux can be found at http://httpd.apache.org/docs/current/install.html
- Mac OS X comes with a built-in Apache installation, which you can enable by navigating to System Preferences | Sharing and selecting the Web Sharing checkbox

Install Apache or any other web server on your system. Browse to http://yourwebserver.com or http://localhost, and check that the installation is successful. The http://yourwebserver.com link will show something similar to the following screenshot, which confirms that Apache is installed successfully.

Step 4 – Unpacking Ext JS
In this tutorial, we will use Apache for Windows. Unpack the Ext JS bundle into the web server's root directory (htdocs). Rename the Ext JS folder with long version numbers to extjs4 for simplicity. The root directory varies depending upon your operating system and web server. The Apache root directory paths for various operating systems are as follows:
- Windows: C:\Program Files\Apache Software Foundation\Apache2.2\htdocs
- Linux: /var/www/
- Mac OS X: /Library/WebServer/Documents/

The downloaded Ext JS bundle is packed with examples along with the required sources.
Browse to http://yourwebserver.com/extjs4, and make sure that it loads the Ext JS index page. This page provides access to all the examples to play around with the API. The API Docs link at the bottom-right of the page lists the API information, with a search text field at the top-right side of the page. As we progress through the tutorial, please refer to the API as and when required.

Step 5 – Testing the Ext JS library
A basic Ext JS application page will have a link tag with an Ext JS CSS file (ext-all.css), a script tag for the Ext JS library, and scripts related to your own application. In this example, we don't have any application-specific JavaScript. Create an HTML file named check.html with the code that follows beneath the httpd folder. Ext.onReady is a method which is executed when all the scripts are fully loaded. Ext.Msg.alert is a message box that shows a message to the user; the first parameter is the title and the second parameter is the message:

<html>
<head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
    <title>Ext JS Starter Setup Test</title>
    <link rel="stylesheet" type="text/css" href="../extjs4/resources/css/ext-all.css"></link>
    <script type="text/javascript" src="../extjs4/ext-all-dev.js"></script>
    <script type="text/javascript">
        Ext.onReady(function() {
            Ext.Msg.alert("Ext JS 4 Starter", "Welcome to Ext 4 Starter!");
        });
    </script>
</head>
<body>
</body>
</html>

The following screenshot shows check.html in action.

And that's it
By now, you should have a working installation of Ext JS, and you should be able to play around and discover more about it.

Summary
Thus we have discussed how to set up a working environment for Ext JS.

Resources for Article:
Further resources on this subject:
Tips & Tricks for Ext JS 3.x [Article]
Ext JS 4: Working with the Grid Component [Article]
Building a Ext JS Theme into Oracle APEX [Article]

IBM Cognos Insight

Packt
12 Sep 2013
9 min read
An example case for IBM Cognos Insight
Consider an example of a situation where an organization from the retail industry heavily depends on spreadsheets as its source of data collection, analysis, and decision making. These spreadsheets contain data that is used to analyze customers' buying patterns across the various products sold by multiple channels in order to boost sales across the company. The analysis hopes to reveal customers' buying patterns demographically, streamline sales channels, improve supply chain management, give an insight into forecast spending, and redirect budgets to advertising, marketing, and human capital management, as required.

As this analysis is going to involve multiple departments and resources working with spreadsheets, one of the challenges will be to have everyone speak in similar terms and numbers. Collaboration across departments is important for a successful analysis. Typically in such situations, multiple spreadsheets are created across resource pools and segregated either by time, product, or region (due to the technical limitations of spreadsheets), and often the analysis requires the consolidation of these spreadsheets to be able to make an educated decision. After the number-crunching, a consolidated spreadsheet showing high-level summaries is sent out to executives, while the details remain on other tabs within the same spreadsheet or in altogether separate spreadsheet files. This manual procedure has a high probability of errors.

A similar data analysis process in IBM Cognos Insight would result in faster decision making by keeping the details and the summaries in a highly compressed Online Analytical Processing (OLAP) in-memory cube. Using the intuitive drag-and-drop functionality or the smart-metadata import wizard, the spreadsheet data now appears instantaneously (due to the in-memory analysis) in a graphical and pivot table format. Similar categorical data values, such as customer, time, product, sales channel, and retail location, are stored as dimension structures. All the numerical values bearing factual data, such as revenue, product cost, and so on, defined as measures, are stored in the OLAP cube along with the dimensions. Two or more of these dimensions and measures together form a cube view that can be sliced and diced and viewed at a summarized or a detailed level. Within each dimension, elements such as customer name, store location, revenue amount generated, and so on are created. These can be used in calculations and trend analysis. These dimensions can be pulled out on the analysis canvas as explorer points that can be used for data filtering and sorting. Calculations, business rules, and differentiator metrics can be added to the cube view to enhance the analysis.

After enhancements to the IBM Cognos Insight workspace have been saved, these workspaces or files can be e-mailed and distributed as offline analyses. Also, users have the option to publish the workspace to the IBM Cognos Business Intelligence web portal, Cognos Connection, or to IBM Cognos Express, both of which are targeted at larger audiences, where this information can be shared with broader workgroups. Security layers can be included to protect sensitive data, if required. The publish-and-distribute option within IBM Cognos Insight is used for advanced analytics features and write-back functionality in larger deployments, where users can modify plans online or offline and sync up to the enterprise environment on an as-and-when basis. As an example, the analyst can create what-if scenarios for business purposes, such as simulating the introduction of a new promotional price for a set of smart phones during high foot-traffic times to drive up sales, or simulating an extension of store hours during the summer months to analyze the effect on overall store revenue.

The following diagram shows the step-by-step process of dropping a spreadsheet into IBM Cognos Insight and viewing the dashboard and scorecard style reports instantaneously, which can then be shared on the IBM Cognos BI web portal or published to an IBM TM1 environment. The preceding screenshot demonstrates the steps from raw data in spreadsheets being imported into IBM Cognos Insight to reveal a dashboard-style report instantaneously. Adding calculations to this workspace creates scorecard-type graphical variances, thus giving an overall picture through rich graphics.

Using analytics successfully
Over the past few years, there have been huge improvements in the technology and processes of gathering data. Using Business Analytics and applications such as IBM Cognos Insight, we can now analyze and accurately measure anything and everything. This leads to the question: are we using analytics successfully? The following high-level recommendations should be used as guidance for organizations that are either attempting a Business Analytics implementation for the first time or are already involved with Business Analytics, both working towards a successful implementation:

- The first step is to prioritize the targets that will produce intelligent analytics from the available trustworthy data. Choosing this target wisely and thoughtfully has an impact on the success rate of the implementation. Usually, these are high-value targets that need problem solving and/or quick wins to justify the need and/or investment towards a Business Analytics solution. Avoid areas with a potential for probable budget cuts and/or corporate cultural and political battles, which are considered to be the major factors leading to an implementation pitfall. Improve your chances by asking the question: where will we achieve maximum business value?
- Selecting the appropriate product to deliver the technology is key for success: a product that is suitable for all skill levels and that can be supported by the organization's infrastructure. IBM Cognos Insight is one such product, where the learning curve is minimal thanks to its ease of use and vast features. The analysis produced by using IBM Cognos Insight can then be shared by publishing to an enterprise-level solution such as IBM Cognos BI, IBM Cognos Express, or IBM TM1. This product reduces dependencies on IT departments in terms of personnel and IT resources due to the small learning curve, easy setup, intuitive look and feel, and vast features. The sharing and collaborating capabilities eliminate the need for multiple silos of spreadsheets, one of the reasons why organizations want to move towards a more structured and regulated Enterprise Analytics approach.
- Lastly, organize a governing body such as an Analytics Competency Center (ACC) or Analytics Center of Excellence (ACE) that has the primary responsibility to do the following:
  - Provide the leadership and build the team
  - Plan and manage the Business Analytics vision and strategy (the BA roadmap)
  - Act as a governing body maintaining standardization at the enterprise level
  - Develop, test, and deliver Business Analytics solutions
  - Document all the processes and procedures, both functional and technical
  - Train and support end users of Business Analytics
  - Find ways to increase the Return on Investment (ROI)
  - Integrate Business Analytics into newer technologies such as mobile and cloud computing

The goal of a mature, enterprise-wide analytics solution is that any employee within the organization, be it an analyst, an executive, or a member of the management team, can have their business-related questions answered in real time or near real time. These answers should also help predict the unknown and prepare better for unforeseen circumstances. With the success of a Business Analytics solution and a realized ROI, a question that should be asked is: are the solutions robust and flexible enough to expand regionally or globally? Also, can they sustain a merger or acquisition with minimal consolidation effort? If the Business Analytics solution provides confidence in all of the above, the final question should be: can the Business Analytics solution be provided as a service to the organization's suppliers and customers?

In 2012, a global study was conducted jointly by IBM's Institute for Business Value (IBV) and MIT Sloan Management Review. This study, which included 1700 CEOs globally, reinforced the fact that one of the top objectives within their organizations was sharing and collaboration. IBM Cognos Insight, the desktop analysis application, provides collaborative features that allow users to launch development efforts via IBM's Cognos Business Intelligence, Cognos Express, and Performance Management enterprise platforms.

Let us consider a fictitious company called PointScore. Having completed its marketing, sales, and price strategy analysis, PointScore is now ready to demonstrate its research and analysis efforts to its client. Using IBM Cognos Insight, PointScore has three available options. All of these will leverage the Cognos suite of products that its client has been using and is familiar with. Each of these options can be used to share the information with a larger audience within the organization.

Though technical, this article is written for a non-technical audience as well. IBM Cognos Insight is a product that has its roots embedded in Business Intelligence and its foundation built upon Performance Management solutions. This article provides readers with Business Analytics techniques and discusses the technical aspects of the product, describing its features and benefits. The goal of writing this article was to make you feel confident about the product, and it is meant to expand your creativity so that you can build better analyses and workspaces using Cognos Insight. The article focuses on the strengths of the product, which is to share and collaborate on development efforts in an existing IBM Cognos BI, Cognos Express, or TM1 environment. This sharing is possible because of the tight integration among all the products under the IBM Business Analytics umbrella.
Summary

After reading this article, you should be able to tackle Business Analytics implementations. It should also help you leverage the sharing capabilities to reach the end goal of spreading the value of Business Analytics throughout your organization.

Resources for Article:

Further resources on this subject: How to Set Up IBM Lotus Domino Server [Article] Tips and Tricks on IBM FileNet P8 Content Manager [Article] Reporting Planning Data in IBM Cognos 8: Publish and BI Integration [Article]

Video conversion into the required HTML5 Video playback

Packt
12 Sep 2013
5 min read
(For more resources related to this topic, see here.)

If you are having playback problems and assuming that Windows Media Player will play any media file, be aware that it does not support all formats out of the box. This article will show you how to fix this and get your files playing.

Transcoding audio files (must know)

We start this section by preparing the files we are going to use later on; it is likely you already have some music tracks, but not in the right format. We will fix that in this task by using a shareware program called Switch Audio File Converter, which is available from http://www.nch.com.au/switch for approximately USD40.

Getting ready

For this task, you need to download a copy of the Switch Sound Converter application; it is available from http://www.nch.com.au/switch/index.html. Note that a license is required for encoding AMR files or using MP3 files in certain instances; these can be purchased at the same time as the main license.

How to do it...

The first thing to do is install the software, so let's go ahead and run switchsetup.exe; note that for the purposes of this demo, you should not select any of the additional related programs when requested.
Double-click the application to open it, then click on Add File and browse to, and select, the file you want to convert.
Click on Output Format and change it to .ogg; the required converter will be downloaded automatically as soon as you click on Convert. The file is saved by default into the Music folder underneath your profile.

How it works...

Switch Sound File Converter has been designed to make the conversion process as simple as possible; this includes downloading any extra components that are required for encoding or decoding audio files. You can alter the encoding settings, although you should find that for general use this is not necessary.

There's more...

There are lots of converters available that you can try; I picked this one as it is quick and easy to use, and doesn't have a large footprint (unlike some others). If you prefer, you can also use free or online alternatives to accomplish the same task; two examples are fre:ac (http://www.freac.org) and Online-Convert.com (http://www.online-convert.com). Note though that some online services will take note of details such as your IP address and what it is you are converting, and may store copies for a period of time. 
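Once you have both an .ogg and an .mp3 version of a track, you can already test them in the browser. The following is a minimal sketch (the file names are placeholders, not files created by the tool) that uses the standard HTMLAudioElement API to pick whichever of the two formats the current browser supports:

// Pick a playable source for the current browser and start playback.
var audio = new Audio();
var source = "";

if (audio.canPlayType("audio/ogg") !== "") {
    source = "music/track.ogg";   // the Ogg Vorbis version
} else if (audio.canPlayType("audio/mpeg") !== "") {
    source = "music/track.mp3";   // the MP3 version
}

if (source !== "") {
    audio.src = source;
    audio.play();
} else {
    alert("This browser cannot play either format.");
}

canPlayType() returns an empty string when a format is not supported, and "maybe" or "probably" when it is, which is why the check above simply looks for a non-empty string.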
Installing playback support: codecs (Must know)

Now that we have converted our audio files ready for playback, it's time to ensure that we can actually play them back on our PCs as well as in our browsers. Most of the latest browsers will play at least one of the formats we've created in the previous task, but it is likely that you won't be able to play them outside of the browser. Let's take a look at how we can fix this by updating the codecs installed on your PC. For those of you not familiar with codecs, they are designed to help encode assets when the audio file is created and decode them as part of playback. Software and hardware makers decide the makeup of each codec based on which containers and technologies they should support; a number of factors such as file size, quality, and bandwidth all play a part in their decisions. Let's take a look at how we can update our PCs to allow for proper playback of HTML5 audio and video.

Getting ready

There are lots of individuals and companies who have produced different codecs, with differing results. We will take a look at one package that seems to work very well for Windows, which is the K-Lite Codec Pack. You need to download a copy of the pack, which is available from http://fileforum.betanews.com/detail/KLite-Codec-Pack-Basic/1094057842/1 (use the blue Download link on the right side of the page). This will download the basic version, which is more than sufficient for our needs at this stage.

How to do it...

Download, then run K-Lite_Codec_Pack_860_Basic.exe and click on Next.
On the Installation Mode screen, select the Simple option.
On the File Associations page, select Windows Media Player.
On the File associations screen for Windows Media Player, click on Select all audio.
On the Thumbnails screen, click on Next.
On the Speaker configuration screen, click on Next, then Install. The software will confirm when the codecs have been installed.

How it works...

In order to play back HTML5 format audio in Windows Media Player, you need to ensure you have the correct support in place; Windows Media Player doesn't understand the encoding format of HTML5 audio by default. We can overcome this by installing additional codecs that tell Windows how to encode or decode a particular file format; K-Lite's package aims to remove the pain of this process.

There's more...

The package we've looked at in this task is only available for Windows; if you are a Mac user, you will need to use an alternative method. There are lots of options available online. One such option is X Lossless Decoder, available from http://www.macupdate.com/app/mac/23430/x-lossless-decoder, which includes support for both .ogg and .mp4 formats.

Summary

We've taken a look at recipes that show you how to transcode media into HTML5-friendly formats and install playback support. This is only just the start of what you can achieve; there is a whole world out there to explore.

Resources for Article:

Further resources on this subject: Basic use of Local Storage [Article] Customize your LinkedIn profile headline [Article] Blocking versus Non blocking scripts [Article]

Features of RaphaelJS

Packt
12 Sep 2013
16 min read
(For more resources related to this topic, see here.)

Creating a Raphael element

Creating a Raphael element is very easy. To make it better, there are predefined methods to create basic geometrical shapes.

Basic shapes

There are three basic shapes in RaphaelJS, namely circle, ellipse, and rectangle.

Rectangle

We can create a rectangle using the rect() method. This method takes four required parameters and a fifth optional parameter, border-radius. The border-radius parameter will make the rectangle rounded (rounded corners) by the number of pixels specified. The syntax for this method is:

paper.rect(X,Y,Width,Height,border-radius(optional));

A normal rectangle can be created using the following code snippet:

// creating a raphael paper in 'paperDiv'
var paper = Raphael("paperDiv", 650, 400);
// creating a rectangle with the rect() method. The four required parameters are X, Y, Width & Height
var rect = paper.rect(35, 25, 170, 100).attr({
  "fill": "#17A9C6",   // background color
  "stroke": "#2A6570", // border color of the rectangle
  "stroke-width": 2    // the width of the border
});

The output for the preceding code snippet is shown in the following screenshot: Plain rectangle

Rounded rectangle

The following code will create a basic rectangle with rounded corners:

// creating a raphael paper in 'paperDiv'
var paper = Raphael("paperDiv", 650, 400);
// The fifth parameter will make the rectangle rounded by the number of pixels specified – a rectangle with rounded corners
var rect = paper.rect(35, 25, 170, 100, 20).attr({
  "fill": "#17A9C6",   // background color of the rectangle
  "stroke": "#2A6570", // border color of the rectangle
  "stroke-width": 2    // width of the border
});
// in the preceding code, 20 (highlighted) is the border-radius of the rectangle

The output for the preceding code snippet is a rectangle with rounded corners, as shown in the following screenshot: Rectangle with rounded corners

We can create other basic shapes in the same way. Let's create an ellipse with our magic wand.

Ellipse

An ellipse is created using the ellipse() method and it takes four required parameters, namely x, y, horizontal radius, and vertical radius. The horizontal radius will be the width of the ellipse divided by two and the vertical radius will be the height of the ellipse divided by two. The syntax for creating an ellipse is:

paper.ellipse(X,Y,rX,rY); // rX is the horizontal radius & rY is the vertical radius of the ellipse

Let's consider the following example for creating an ellipse:

// creating a raphael paper in 'paperDiv'
var paper = Raphael("paperDiv", 650, 400);
// The ellipse() method takes four required parameters: X, Y, horizontal radius & vertical radius
var ellipse = paper.ellipse(195, 125, 170, 100).attr({
  "fill": "#17A9C6",   // background color of the ellipse
  "stroke": "#2A6570", // ellipse's border color
  "stroke-width": 2    // border width
});

The preceding code will create an ellipse of width 170 x 2 and height 100 x 2. An ellipse created using the ellipse() method is shown in the following screenshot: An Ellipse

Complex shapes

It's pretty easy to create basic shapes, but what about complex shapes such as stars, octagons, or any other shape that isn't a circle, rectangle, or ellipse? It's time for the next step of Raphael wizardry. Complex shapes are created using the path() method, which has only one parameter called pathString. Though the path string may look like a long genetic sequence of alphanumeric characters, it's actually very simple to read, understand, and draw with. 
Before we get into path drawing, it's essential that we know how it's interpreted and the simple logic behind those complex shapes. Imagine that you are drawing on a piece of paper with a pencil. To draw something, you place the pencil at a point on the paper and begin to draw a line or a curve, then move the pencil to another point on the paper and start drawing a line or curve again. After several such cycles, you will have a masterpiece—at least, you will call it a masterpiece.

Raphael uses a similar method to draw, and it does so with a path string. A typical path string may look like this: M0,0L26,0L13,18L0,0. Let's zoom into this path string a bit. The first letter says M followed by 0,0. That's right genius, you've guessed it correctly. It says move to the 0,0 position; the next letter L is line to 26,0. RaphaelJS will move to 0,0 and from there draw a line to 26,0. This is how the path string is understood by RaphaelJS, and paths are drawn using these simple notations. Here is a comprehensive list of commands with their respective meanings and attributes:

M: move to (x, y)
Z: close path (none)
L: line to (x, y)
H: horizontal line to (x)
V: vertical line to (y)
C: curve to (x1, y1, x2, y2, x, y)
S: smooth curve to (x2, y2, x, y)
Q: quadratic Bézier curve to (x1, y1, x, y)
T: smooth quadratic Bézier curve to (x, y)
A: elliptical arc (rx, ry, x-axis-rotation, large-arc-flag, sweep-flag, x, y)
R: Catmull-Rom curve to* (x1, y1 (x y))

The uppercase commands are absolute (M20,20); they are calculated from the 0,0 position of the drawing area (paper). The lowercase commands are relative (m20,20); they are calculated from the last point where the pen left off. There are so many commands, which might feel like too much to take in—don't worry; there is no need to remember every command and its format. Because we'll be using vector graphics editors to extract paths, it's essential that you understand the meaning of each and every command so that when someone asks you "hey genius, what does this mean?", you aren't standing there clueless, pretending not to have heard it.

The syntax for the path() method is as follows:

paper.path("pathString");

Let's consider the following example:

// creating a raphael paper in 'paperDiv'
var paper = Raphael("paperDiv", 350, 200);
// Creating a shape using the path() method and a path string
var tri = paper.path("M0,0L26,0L13,18L0,0").attr({
  "fill": "#17A9C6",   // the background color
  "stroke": "#2A6570", // the color of the border
  "stroke-width": 2    // the size of the border
});

All the commands in this path string ("M0,0L26,0L13,18L0,0") use uppercase letters. They are therefore absolute values. The output for the previous example is shown in the following screenshot: A triangle shape drawn using the path string

Extracting and using paths from an editor

Well, a triangle may be an easy shape to put into a path string. How about a complex shape such as a star? It's not that easy to guess and manually find the points, and it's practically impossible for an even more complex shape like a simple flower or a 2D logo. Here in this section, we'll see a simple but effective method of drawing complex shapes with minimal fuss and sharp accuracy.

Vector graphics editors

Vector graphics editors are meant for creating complex shapes with ease, and they have some powerful tools at their disposal to help us draw. For this example, we'll create a star shape using an open source editor called Inkscape, then extract those paths and use Raphael to get out the shape! 
It is as simple as it sounds, and it can be done in four simple steps.

Step 1 – Creating the shape in the vector editor

Let's create some star shapes in Inkscape using the built-in shapes tool. Star shapes created using the built-in shapes tool

Step 2 – Saving the shape as SVG

The paths used by SVG and RaphaelJS are similar. The trick is to use the paths generated by the vector graphics editor in RaphaelJS. For this purpose, the shape must be saved as an SVG file. Saving the shape as an SVG file

Step 3 – Copying the SVG path string

The next step is to copy the path from the SVG and paste it into Raphael's path() method. SVG is a markup language, and therefore it's nested in tags. The SVG path can be found between the <path> and </path> tags. After locating the path tag, look for the d attribute. This will contain a long path sequence. You've now hit the bullseye. The path string is highlighted

Step 4 – Using the copied path as a Raphael path string

After copying the path string from the SVG, paste it into Raphael's path() method:

var newpath = paper.path("copied path string from SVG").attr({
  "fill": "#5DDEF4",
  "stroke": "#2A6570",
  "stroke-width": 2
});

That's it! We have created a complex shape in RaphaelJS with absolute simplicity. Using this technique, we can only extract the path, not the styles. So the background color, shadow, or any other style in the SVG won't apply. We need to add our own styles to the path objects using the attr() method. A screenshot depicting the complex shapes created in RaphaelJS using the path string copied from an SVG file is shown here: Complex shapes created in RaphaelJS using path string

Creating text

Text can be created using the text() method. Raphael gives us a way to add a battery of styles to the text object, right from changing colors to animating physical properties like position and size. The text() method takes three required parameters, namely x, y, and the text string. The syntax for the text() method is as follows:

paper.text(X,Y,"Raphael JS Text"); // the text method with X, Y coordinates and the text string

Let's consider the following example:

// creating a raphael paper in 'paperDiv'
var paper = Raphael("paperDiv", 650, 400);
// creating text
var text = paper.text(40, 55, "Raphael Text").attr({
  "fill": "#17A9C6", // font color
  "font-size": 75,   // font size in pixels
  // text-anchor indicates the starting position of the text relative to the X, Y position.
  // It can be "start", "middle" or "end"; the default is "middle"
  "text-anchor": "start",
  "font-family": "century gothic" // font family of the text
});

I am pretty sure that the text-anchor property is a bit much to take in at first. Well, there is a saying that a picture is worth a thousand words. The following diagram clearly explains the text-anchor property and its usage. A brief explanation of text-anchor property

A screenshot of the text rendered using the text() method is as follows: Rendering text using the text() method

Manipulating the style of the element

The attr() method not only adds styles to an element, it also modifies an element's existing styles. The following example explains the attr() method:

rect.attr('fill', '#ddd'); // This will update the background color of the rectangle to gray

Transforming an element

RaphaelJS not only creates elements, it also allows manipulating or transforming any element and its properties dynamically.

Manipulating a shape

By the end of this section, you will know how to transform a shape. 
There might be many scenarios wherein you need to modify a shape dynamically. For example, when the user moves the mouse over a circle, you might want to scale up that circle just to give visual feedback to the user. Shapes can be manipulated in RaphaelJS using the transform() method.

Transformation is done through the transform() method, and it is similar to the path() method where we add the path string to the method. transform() works in the same way, but instead of a path string it takes a transformation string. There are four commands in the transformation string:

T: Translate
S: Scale
R: Rotate in degrees
M: Matrix

The fourth command, M, is of little importance, so let's keep it out of the way to avoid confusion. The transformation string might look similar to a path string, but in reality they are different and share little in common. The M in a path string means move to, whereas the same letter in a transformation string means Matrix. The path string is not to be confused with a transformation string.

As with the path string, the uppercase letters are for absolute transformations and the lowercase ones for relative transformations. If the transformation string reads r90T100,0, then the element will rotate 90 degrees and move 100px along the x axis (to the right). If the same reads r90t100,0, then the element will rotate 90 degrees and, since the translation is relative, it will actually move vertically down 100px, as the rotation has tilted its axis.

I am sure the previous point will confuse most, so let me break it up. Imagine a rectangle with a head, and this head is at the right side of the rectangle. For the time being, let's forget about absolute and relative transformation; our objective is to:

Rotate the rectangle by 90 degrees.
Move the rectangle 100px on the x axis (that is, 100px to the right).

It's critical to understand that the element's original values don't change when we translate it, meaning its x and y values will remain the same, no matter how we rotate or move the element. Now our first requirement is to rotate the rectangle by 90 degrees. The code for that would be rect.transform("r90"), where r stands for rotation—fantastic, the rectangle is rotated by 90 degrees. Now pay attention to the next important step. We also need the rectangle to move 100px on the x axis, so we update our previous code to rect.transform("r90t100,0"), where t stands for translation. What happens next is interesting—the translation is done through a lowercase t, which means it's relative. One thing about relative translations is that they take into account any previous transformation applied to the element, whereas absolute translations simply reset any previous transformations before applying their own.

Remember the head of the rectangle on the right side? Well, the rectangle's x axis falls on the right side. So when we say move 100px on the x axis, it is supposed to move 100px towards its right side, that is, in the direction where its head is pointing. Since we have rotated the rectangle by 90 degrees, its head is no longer on the right side but is facing the bottom. So when we apply the relative translation, the rectangle will still move 100px along its x axis, but the x axis is now pointing down because of the rotation. That's why the rectangle will move 100px down when you expect it to move to the right. 
What happens when we apply absolute translation is entirely different. When we update our code for absolute translation to rect.transform("r90T100,0"), the axis of the rectangle is not taken into consideration; the axis of the paper is used instead, as absolute transformations don't take previous transformations into account and simply reset them before applying their own. Therefore, the rectangle will move 100px to the right after rotating 90 degrees, as intended. Absolute transformations will ignore all the previous transformations on that element, but relative transformations won't. Getting a grip on this simple logic will save you a lot of frustration in the future while developing as well as while debugging.

The following is a screenshot depicting relative translation: Using relative translation

The following is a screenshot depicting absolute translation: Using absolute translation

Notice the gap on top of the rotated rectangle; it has moved 100px down in the one with relative translation, and there is no such gap on top of the rectangle with absolute translation.

By default, the transform method will append to any transformation already applied to the element. To reset all transformations, use element.transform(""). Adding an empty string to the transform method will reset all the previous transformations on that element.

It's also important to note that the element's original x,y position will not change when translated. The element will merely assume a temporary position, but its original position will remain unchanged. Therefore, after translation, if we ask for the element's position programmatically, we will get the original x,y, not the translated one, just so we don't jump from our seats and call RaphaelJS dull!

The following is an example of scaling and rotating a triangle:

// creating a triangle using the path string
var tri = paper.path("M0,0L104,0L52,72L0,0").attr({
  "fill": "#17A9C6",
  "stroke": "#2A6570",
  "stroke-width": 2
});
// transforming the triangle
tri.animate({
  "transform": "r90t100,0,s1.5"
}, 1000);
// the transformation string should be read as: rotate the element by 90 degrees, translate it 100px on the x axis, and scale it up by 1.5 times

The following screenshot depicts the output of the preceding code: Scaling and rotating a triangle

The triangle is transformed using relative translation (t). Now you know why the triangle has moved down rather than to its right.

Animating a shape

What good is a magic wand if it can't animate inanimate objects! RaphaelJS can animate almost any property, from color and opacity to width and height, as smooth as butter and with little fuss. Animation is done through the animate() method. This method takes two required parameters, namely the final values and the duration in milliseconds, and two optional parameters, easing and callback. The syntax for the animate() method is as follows:

Element.animate({
  // animation properties in key-value pairs
}, time, easing, callback_function);

Easing is the special effect with which the animation is performed; for example, if the easing is bounce, the animation will appear like a bouncing ball. The following are the easing options available in RaphaelJS:

linear
< or easeIn or ease-in
> or easeOut or ease-out
<> or easeInOut or ease-in-out
backIn or back-in
backOut or back-out
elastic
bounce

Callbacks are functions that execute when the animation is complete, allowing us to perform some tasks after the animation. 
Let's consider the example of animating the width and height of a rectangle:

// creating a raphael paper in 'paperDiv'
var paper = Raphael("paperDiv", 650, 400);
rect.animate({
  "width": 200, // final width
  "height": 200 // final height
}, 300, "bounce", function() {
  // something to do when the animation is complete – this callback function is optional
  // Print 'Animation complete' when the animation is complete
  $("#animation_status").html("Animation complete");
});

The following screenshot shows a rectangle before animation: Rectangle before animation

A screenshot demonstrating the use of a callback function when the animation is complete is as follows. The text Animation complete will appear in the browser after the animation completes. Use of a callback function

The following code animates the background color and opacity of a rectangle:

rect.animate({
  "fill": "#ddd",     // final color
  "fill-opacity": 0.7 // final opacity
}, 300, "easeIn", function() {
  // something to do when the animation is complete – this callback function is optional
  // Alerts "done" when the animation is complete
  alert("done");
});

Here the rectangle is animated from blue to gray with an opacity going from 1 to 0.7 over a duration of 300 milliseconds. Opacity in RaphaelJS is the same as in CSS, where 1 is opaque and 0 is transparent.
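To make the earlier discussion of relative versus absolute translation concrete, here is a minimal sketch that applies both transformation strings from the text to two identical rectangles (the sizes and colors are arbitrary, and paper is the same Raphael paper used throughout):

// Two identical rectangles, transformed with the two kinds of translation
var rectRelative = paper.rect(20, 20, 60, 40).attr({ "fill": "#17A9C6" });
var rectAbsolute = paper.rect(120, 20, 60, 40).attr({ "fill": "#5DDEF4" });

// Lowercase t: rotate 90 degrees, then move 100px along the element's own
// (now rotated) x axis, so this rectangle ends up 100px further down
rectRelative.transform("r90t100,0");

// Uppercase T: previous transformations are not taken into account when
// translating, so this rectangle rotates 90 degrees and then moves 100px
// to the right on the paper
rectAbsolute.transform("r90T100,0");

// Passing an empty string clears every transformation applied so far
// rectRelative.transform("");

Running this side by side is a quick way to convince yourself of the difference before relying on it in a larger drawing.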

Master Virtual Desktop Image Creation

Packt
11 Sep 2013
11 min read
(For more resources related to this topic, see here.)

When designing your VMware Horizon View infrastructure, creating a Virtual Desktop master image is second only to infrastructure design in terms of importance. The reason for this is simple; as ubiquitous as Microsoft Windows is, it was never designed to be a hosted Virtual Desktop. The good news is that with a careful bit of planning, and a thorough understanding of what your end users need, you can build a Windows desktop that serves all your needs while requiring the bare minimum of infrastructure resources.

A default installation of Windows contains many optional components and configuration settings that are either unsuitable for, or not needed in, a Virtual Desktop environment, and understanding their impact is critical to maintaining Virtual Desktop performance over time and during peak levels of use. Uninstalling unneeded components and disabling services or scheduled tasks that are not required will help reduce the amount of resources the Virtual Desktop requires, and ensure that the View infrastructure can properly support the planned number of desktops even as resources are oversubscribed.

Oversubscription is defined as having assigned more resources than what is physically available. This is most commonly done with processor resources in Virtual Desktop environments, where a single server processor core may be shared between multiple desktops. As the average desktop does not require 100 percent of its assigned resources at all times, we can share those resources between multiple desktops without affecting performance.

Why is desktop optimization important?

To date, Microsoft has only ever released a version of Windows designed to be installed on physical hardware. This isn't to say that Microsoft is unique in this regard, as neither Linux nor Mac OS X offers an installation routine that is optimized for a virtualized hardware platform. While nothing stops you from using a default installation of any OS or software package in a virtualized environment, you may find it difficult to maintain consistent levels of performance in Virtual Desktop environments where many of the resources are shared, and in almost every case oversubscribed in some manner.

In this section, we will examine a sample of the CPU and disk IO resources that can be recovered were you to optimize the Virtual Desktop master image. Due to the technological diversity that exists from one organization to the next, optimizing your Virtual Desktop master image is not an exact science. The optimization techniques used and their end results will likely vary from one organization to the next due to factors unrelated to View or vSphere. The information contained within this article will serve as a foundation for optimizing a Virtual Desktop master image, focusing primarily on the operating system.

Optimization results – desktop IOPS

Desktop optimization benefits one infrastructure component more than any other: storage. Until all-flash storage arrays achieve price parity with the traditional spinning disk arrays many of us use today, reducing the per-desktop IOPS required will continue to be an important part of any View deployment. On a per-disk basis, a flash drive can accommodate more than 15 times the IOPS of an enterprise SAS or SCSI disk, or 30 times the IOPS of a traditional desktop SATA disk. 
Organizations that choose an all-flash array may find that they have more than sufficient IOPS capacity for their Virtual Desktops, even without doing any optimization.

The following graph shows the reduction in IOPS that occurred after performing the optimization techniques described later in this article. The optimized desktop generated 15 percent fewer IOPS during the user workload simulation. By itself that may not seem like a significant reduction, but when multiplied by hundreds or thousands of desktops the savings become more significant.

Optimization results – CPU utilization

View supports a maximum of 16 Virtual Desktops per physical CPU core. There is no guarantee that your View implementation will be able to attain this high consolidation ratio, though, as desktop workloads will vary from one type of user to another. The optimization techniques described in this article will help maximize the number of desktops you can run per server core. The following graph shows the reduction in vSphere host % Processor Time that occurred after performing the optimization techniques described later in this article.

% Processor Time is one of the metrics that can be used to measure server processor utilization within vSphere. The statistics in the preceding graph were captured using the vSphere ESXTOP command line utility, which provides a number of performance statistics that the vCenter performance tabs do not offer, in a raw format that is more suited for independent analysis.

The optimized desktop required between 5 and 10 percent less processor time during the user workload simulation. As was the case with the IOPS reduction, the savings are significant when multiplied by large numbers of desktops.

Virtual Desktop hardware configuration

The Virtual Desktop hardware configuration should provide only what is required based on the desktop needs and the performance analysis. This section will examine the different virtual machine configuration settings that you may wish to customize, and explain their purpose.

Disabling virtual machine logging

Every time a virtual machine is powered on, and while it is running, it logs diagnostic information within the datastore that hosts its VMDK file. For environments that have a large number of Virtual Desktops, this can generate a noticeable amount of storage I/O. The following steps outline how to disable virtual machine logging:

In the vCenter client, right-click on the desktop master image virtual machine and click on Edit Settings to open the Virtual Machine Properties window.
In the Virtual Machine Properties window, select the Options tab.
Under Settings, highlight General.
Clear Enable logging as shown in the following screenshot, which sets the logging = "FALSE" option in the virtual machine VMX file.

While disabling logging does reduce disk IO, it also removes log files that may be used for advanced troubleshooting or auditing purposes. The implications of this change should be considered before placing the desktop into production.

Removing unneeded devices

By default, a virtual machine contains several devices that may not be required in a Virtual Desktop environment. In the event that these devices are not required, they should be removed to free up server resources. The following steps outline how to remove the unneeded devices:

In the vCenter client, right-click on the desktop master image virtual machine and click on Edit Settings to open the Virtual Machine Properties window. 
In the Virtual Machine Properties window, under Hardware, highlight Floppy drive 1 as shown in the following screenshot and click on Remove.
In the Virtual Machine Properties window, select the Options tab.
Under Settings, highlight Boot Options.
Check the checkbox under the Force BIOS Setup section as shown in the following screenshot.
Click on OK to close the Virtual Machine Properties window.
Power on the virtual machine; it will boot into the PhoenixBIOS Setup Utility.
The PhoenixBIOS Setup Utility menu defaults to the Main tab. Use the down arrow key to move down to Legacy Diskette A, and then press the Space bar until the option changes to Disabled.
Use the right arrow key to move to the Advanced tab.
Use the down arrow key to select I/O Device Configuration and press Enter to open the I/O Device Configuration window.
Disable the serial ports, parallel port, and floppy disk controller as shown in the following screenshot. Use the up and down arrow keys to move between devices, and the Space bar to disable or enable each as required.
Press the F10 key to save the configuration and exit the PhoenixBIOS Setup Utility.

Do not remove the virtual CD-ROM device, as it is used by vSphere when performing an automated installation or upgrade of the VMware Tools software.

Customizing the Windows desktop OS cluster size

Microsoft Windows uses a default cluster size, also known as allocation unit size, of 4 KB when creating the boot volume during a new installation of Windows. The cluster size is the smallest amount of disk space that will be used to hold a file, which affects how many disk writes must be made to commit a file to disk. For example, when a file is 12 KB in size, and the cluster size is 4 KB, it will take three write operations to write the file to disk.

The default 4 KB cluster size will work with any storage option that you choose to use with your environment, but that does not mean it is the best option. Storage vendors frequently do performance testing to determine which cluster size is optimal for their platforms, and it is possible that some of them will recommend that the Windows cluster size should be changed to ensure optimal performance.

The following steps outline how to change the Windows cluster size during the installation process; the process is the same for both Windows 7 and Windows 8. In this example, we will be using an 8 KB cluster size, although any size can be used based on the recommendation from your storage vendor. The cluster size can only be changed during the Windows installation, not after. If your storage vendor recommends the 4 KB Windows cluster size, the default Windows settings are acceptable.

Boot from the Windows OS installer ISO image or physical CD and proceed through the install steps until the Where do you want to install Windows? dialog box appears.
Press Shift + F10 to bring up a command window.
In the command window, enter the following commands:

diskpart
select disk 0
create partition primary size=100
active
format fs=ntfs label="System Reserve" quick
create partition primary
format fs=ntfs label=OS_8k unit=8192 quick
assign
exit

Click on Refresh to refresh the Where do you want to install Windows? window.
Select Drive 0 Partition 2: OS_8k, as shown in the following screenshot, and click on Next to begin the installation.

The System Reserve partition is used by Windows to store files critical to the boot process and will not be visible to the end user. 
These files must reside on a volume that uses a 4 KB cluster size, so we created a small partition solely for that purpose. Windows will automatically detect this partition and use it when performing the Windows installation. In the event that your storage vendor recommends a different cluster size than shown in the previous example, replace the 8192 in the sample command in step 3 with whatever value the vendor recommends, in bytes, without any punctuation. Windows OS pre-deployment tasks The following tasks are unrelated to the other optimization tasks that are described in this article but they should be completed prior to placing the desktop into production. Installing VMware Tools VMware Tools should be installed prior to the installation of the View Agent software. To ensure that the master image has the latest version of the VMware Tools software, apply the latest updates to the host vSphere Server prior to installing the tools package on the desktop. The same applies if you are updating your VMware Tools software. The View Agent software should be reinstalled after the VMware Tools software is updated to ensure that the appropriate View drivers are installed in place of the versions included with VMware Tools. Cleaning up and defragmenting the desktop hard disk To minimize the space required by the Virtual Desktop master image and ensure optimal performance, the Virtual Desktop hard disks should be cleaned of nonessential files and optimized prior to deployment into production. The following actions should be taken once the Virtual Desktop master image is ready for deployment: Use the Windows Disk Cleanup utility to remove any unnecessary files. Use the Windows Defragment utility to defragment the virtual hard disk. If the desktop virtual hard disks are thinly provisioned, you may wish to shrink them after the defragmentation completes. This can be performed with utilities from your storage vendor if available, by using the vSphere vmkfstools utility, or by using the vSphere storage vMotion feature to move the virtual machine to a different datastore. Visit your storage vendor or the VMware vSphere Documentation (http://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-pubs.html) for instructions on how to shrink virtual hard disks or perform a storage vMotion.

HTML5 Canvas

Packt
11 Sep 2013
5 min read
(For more resources related to this topic, see here.)

Setting up your HTML5 canvas (Should know)

This recipe will show you how to first of all set up your own HTML5 canvas. With the canvas set up, we can then move on to look at some of the basic elements the canvas has to offer and how we would go about implementing them. For this task we will be creating a series of primitives such as circles and rectangles. Modern video games make use of these types of primitives in many different forms. For example, both circles and rectangles are commonly used within collision-detection algorithms such as bounding circles or bounding boxes.

How to do it...

As previously mentioned we will begin by creating our own HTML5 canvas. We will start by creating a blank HTML file. To do this, you will need some form of text editor such as Microsoft Notepad (available for Windows) or the TextEdit application (available on Mac OS). Once you have a basic webpage set up, all that is left to do in order to create a canvas is to place the following between both body tags:

<canvas id="canvas" width="800" height="600"></canvas>

As previously mentioned, we will be implementing a number of basic elements within the canvas. In order to do this we must first link a JavaScript file to our webpage. This file will be responsible for the initialization, loading, and drawing of objects on the canvas. In order for our scripts to have any effect on our canvas we must create a separate file called canvas example. Create this new file within your text editor and then insert the following code declarations:

var canvas = document.getElementById("canvas"),
    context = canvas.getContext("2d");

These declarations are responsible for retrieving both the canvas element and its context. Using the canvas context, we can begin to draw primitives and text, and load textures into our canvas. We will begin by drawing a rectangle in the top-left corner of our canvas. In order to do this, place the following code below our previous JavaScript declarations:

context.fillStyle = "#FF00FF";
context.fillRect(15, 15, 150, 75);

If you were to now view the original webpage we created, you would see the rectangle being drawn in the top-left corner at position X: 15, Y: 15. Now that we have a rectangle, we can look at how we would go about drawing a circle onto our canvas. This can be achieved by means of the following code:

context.beginPath();
context.arc(350, 150, 40, 0, 2 * Math.PI);
context.stroke();
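The same context object can draw other primitives in exactly the same way. As a small aside before we look at how all of this works (the coordinates and text below are arbitrary, not part of the recipe), a straight line and some text could be added like this:

// A straight line from (50, 250) to (200, 250)
context.beginPath();
context.moveTo(50, 250);
context.lineTo(200, 250);
context.stroke();

// Some text drawn at (250, 250)
context.font = "20px sans-serif";
context.fillStyle = "#000000";
context.fillText("Hello canvas", 250, 250);

Both snippets rely only on the standard 2D context API, so they can be dropped into the same JavaScript file used throughout this recipe.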
How it works...

The first code extract represents the basic framework required to produce a blank webpage and is necessary for a browser to read and display the webpage in question. With a basic webpage created, we then declare a new HTML5 canvas. This is done by assigning an id attribute, which we use to refer to the canvas within our scripts. The canvas declaration then takes a width and a height attribute, both of which are also necessary to specify the size of the canvas, that is, the number of pixels wide and pixels high.

Before any objects can be drawn on the canvas, we first need to get the canvas element. This is done by means of the getElementById method that you can see in our canvas example. When retrieving the canvas element, we are also required to get the canvas context by calling a built-in HTML5 method known as getContext. This object gives access to many different properties and methods for drawing edges, circles, rectangles, external images, and so on. This can be seen when we draw a rectangle to our canvas.

This was done using the fillStyle property, which takes in a hexadecimal value and in return specifies the color of an element. Our next line makes use of the fillRect method, which requires a minimum of four values to be passed to it. These values include the X and Y position of the rectangle, as well as its width and height. As a result, a rectangle is drawn to the canvas with the color, position, width, and height specified.

We then move on to drawing a circle on the canvas, which is done by first calling a built-in HTML canvas method known as beginPath. This method is used either to begin a new path or to reset the current path. With a new path set up, we then take advantage of a method known as arc that allows for the creation of arcs or curves, which can be used to create circles. This method requires that we pass an X and Y position, a radius, and a starting angle measured in radians. This angle is between 0 and 2 * Pi, where both 0 and 2 * Pi are located at the 3 o'clock position of the arc's circle. We also must pass an ending angle, which is also measured in radians. The following figure is taken directly from the W3C HTML canvas reference, which you can find at the following link http://bit.ly/UCVPY1:

Summary

In this article we saw how to first of all set up our own HTML5 canvas. With the canvas set up, we can then move on to look at some of the basic elements the canvas has to offer and how we would go about implementing them.

Resources for Article:

Further resources on this subject: Building HTML5 Pages from Scratch [Article] HTML5 Presentations - creating our initial presentation [Article] HTML5: Generic Containers [Article]

Photo Pad

Packt
11 Sep 2013
7 min read
(For more resources related to this topic, see here.) Time for action – creating Photo Pad In the HTML file, we will add a toolbar with buttons for Load, Save, and Effects. <body> <div id="app"> <header>Photo Pad </header> <div id="main"> <div id="toolbar"> <div class="dropdown-menu"> <button data-action="menu">Load</button> <ul id="load-menu" data-option="file-picker" class="file-picker menu"> <li data-value="file-picker"> <input type="file" /> </li> </ul> </div> <button data-action="save">Save</button> <div class="dropdown-menu"> <button data-action="menu">Effects</button> <ul data-option="applyEffect" class="menu"> <li data-value="invert">Invert</li> </ul> </div> </div> <canvas width="0" height="0"> Sorry, your browser doesn't support canvas. </canvas> </div> <footer>Click load to choose a file</footer> </div> </body> The Load toolbar item has a drop-down menu, but instead of menu items it has a file input control in it where the user can select a file to load. The Effects item has a drop-down menu of effects. For now we just have one in there, Invert, but we will add more later. For our CSS we will copy everything we had in canvasPad.css to photoPad.css, so that we get all of the same styling for the toolbar and menus. We will also use the Toolbar object in toolbar.js. In our JavaScript file we will change the application object name to PhotoPadApp. We also need a couple of variables in PhotoPadApp. We will set the canvas variable to the <canvas> element, the context variable to the canvas's context, and define an $img variable to hold the image we will be showing. Here we initialize it to a new <img> element using jQuery: function PhotoPadApp() { var version = "5.2", canvas = $("#main>canvas")[0], context = canvas.getContext("2d"), $img = $("<img>"); The first toolbar action we will implement is the Save button, since we already have that code from Canvas Pad. We check the action in toolbarButtonClicked() to see if it's "save", and if so we get the data URL and open it in a new browser window: function toolbarButtonClicked(action) { switch (action) { case "save": var url = canvas.toDataURL(); window.open(url, "PhotoPadImage"); break; } } What just happened? We created the scaffolding for the Photo Pad application with toolbar items for Load, Save, and Effects. We implemented the save function the same as we did for Canvas Pad. The next thing we'll implement is the Load drop-down menu since we need an image to manipulate. When the Load toolbar button is clicked, it will show the drop-down menu with a file input control in it that we defined previously. All of that we get for free because it's just another drop-down menu in our toolbar. But before we can do that we need to learn about the HTML5 File API. The File API We may not be able to save files directly to the user's filesystem, but we can access files using HTML5's File API. The File API allows you to get information about, and load the contents of, files that the user selects. The user can select files using an input element with a type of file. The process for loading a file works in the following way: The user selects one or more files using a <input type="file"> element. We get the list of files from the input element's files property. The list is a FileList object containing File objects. You can enumerate over the file list and access the files just like you would an array. The File object contains three fields. name: This is the filename. It doesn't include path information. size: This is the size of the file in bytes. 
type: This is the MIME type, if it can be determined.

Use a FileReader object to read the file's data. The file is loaded asynchronously. After the file has been read, it will call the onload event handler. FileReader has a number of methods for reading files that take a File object and return the file contents:

readAsArrayBuffer(): This method reads the file contents into an ArrayBuffer object.
readAsBinaryString(): This method reads the file contents into a string as binary data.
readAsText(): This method reads the file contents into a string as text.
readAsDataURL(): This method reads the file contents into a data URL string. You can use this as the URL for loading an image.

Time for action – loading an image file

Let's add some code to the start() method of our application to check if the File API is available. You can determine if a browser supports the File API by checking if the File and FileReader objects exist:

this.start = function() {
    // code not shown...
    if (window.File && window.FileReader) {
        $("#load-menu input[type=file]").change(function(e) {
            onLoadFile($(this));
        });
    } else {
        loadImage("images/default.jpg");
    }
}

First we check if the File and FileReader objects are available in the window object. If so, we hook up a change event handler for the file input control to call the onLoadFile() method, passing in the <input> element wrapped in a jQuery object. If the File API is not available, we will just load a default image by calling loadImage(), which we will write later.

Let's implement the onLoadFile() event handler method:

function onLoadFile($input) {
    var file = $input[0].files[0];
    if (file.type.match("image.*")) {
        var reader = new FileReader();
        reader.onload = function() {
            loadImage(reader.result);
        };
        reader.readAsDataURL(file);
    } else {
        alert("Not a valid image type: " + file.type);
        setStatus("Error loading image!");
    }
}

Here we get the file that was selected by looking at the file input's files array and taking the first one. Next we check the file type, which is a MIME type, to make sure it is an image. We are using the String object's regular expression match() method to check that it starts with "image". If it is an image, we create a new instance of the FileReader object. Then we set the onload event handler to call the loadImage() method, passing in the FileReader object's result field, which contains the file's contents. Lastly, we call the FileReader object's readAsDataURL() method, passing in the File object to start loading the file asynchronously. If it isn't an image file, we show an alert dialog box with an error message and show an error message in the footer by calling setStatus().

Once the file has been read, the loadImage() method will be called. Here we will use the data URL we got from the FileReader object's result field to draw the image into the canvas:

function loadImage(url) {
    setStatus("Loading image");
    $img.attr("src", url);
    $img[0].onload = function() {
        // Here "this" is the image
        canvas.width = this.width;
        canvas.height = this.height;
        context.drawImage(this, 0, 0);
        setStatus("Choose an effect");
    }
    $img[0].onerror = function() {
        setStatus("Error loading image!");
    }
}

First we set the src attribute of the image element to the data URL we got after the file was loaded. This will cause the image element to load that new image. Next we define the onload event handler for the image, so that we are notified when the image is loaded. Note that when we are inside the onload event handler, this points to the <img> element. 
First we change the canvas' width and height to the image's width and height. Then we draw the image on the canvas using the context's drawImage() method. It takes the image to draw and the x and y coordinates of where to draw it. In this case we draw it at the top-left corner of the canvas (0, 0). Lastly, we set an onerror event handler for the image. If an error occurs loading the image, we show an error message in the footer. What just happened? We learned how to use the File API to load an image file from the user's filesystem. After the image was loaded we resized the canvas to the size of the image and drew the image onto the canvas.
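The Effects menu we defined earlier already lists an Invert item, although its implementation is not part of this excerpt. As a rough sketch of how such an effect could work (the function name and the way it gets wired to the menu are assumptions, not the book's code), the context's getImageData() and putImageData() methods can be used to invert every pixel of the loaded image:

function invertImage() {
    // Read the raw pixel data for the whole canvas
    var imageData = context.getImageData(0, 0, canvas.width, canvas.height),
        pixels = imageData.data;

    // Each pixel is four bytes: red, green, blue, alpha
    for (var i = 0; i < pixels.length; i += 4) {
        pixels[i] = 255 - pixels[i];         // red
        pixels[i + 1] = 255 - pixels[i + 1]; // green
        pixels[i + 2] = 255 - pixels[i + 2]; // blue
        // pixels[i + 3] is the alpha channel and is left unchanged
    }

    // Write the modified pixels back onto the canvas
    context.putImageData(imageData, 0, 0);
    setStatus("Effect applied");
}

A function like this would be called from the Effects drop-down menu handler when the invert option is selected, in much the same way the Save button is handled in toolbarButtonClicked().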

The Kendo MVVM Framework

Packt
06 Sep 2013
19 min read
(For more resources related to this topic, see here.) Understanding MVVM – basics MVVM stands for Model ( M ), View ( V ), and View-Model ( VM ). It is part of a family of design patterns related to system architecture that separate responsibilities into distinct units. Some other related patterns are Model-View-Controller ( MVC ) and Model-View-Presenter ( MVP ). These differ on what each portion of the framework is responsible for, but they all attempt to manage complexity through the same underlying design principles. Without going into unnecessary details here, suffice it to say that these patterns are good for developing reliable and reusable code and they are something that you will undoubtedly benefit from if you have implemented them properly. Fortunately, the good JavaScript MVVM frameworks make it easy by wiring up the components for you and letting you focus on the code instead of the "plumbing". In the MVVM pattern for JavaScript through Kendo UI, you will need to create a definition for the data that you want to display and manipulate (the Model), the HTML markup that structures your overall web page (the View), and the JavaScript code that handles user input, reacts to events, and transforms the static markup into dynamic elements (the View-Model). Another way to put it is that you will have data (Model), presentation (View), and logic (View-Model). In practice, the Model is the most loosely-defined portion of the MVVM pattern and is not always even present as a unique entity in the implementation. The View-Model can assume the role of both Model and View-Model by directly containing the Model data properties within itself, instead of referencing them as a separate unit. This is acceptable and is also seen within ASP.NET MVC when a View uses the ViewBag or the ViewData collections instead of referencing a strongly-typed Model class. Don't let it bother you if the Model isn't as well defined as the View-Model and the View. The implementation of any pattern should be filtered down to what actually makes sense for your application. Simple data binding As an introductory example, consider that you have a web page that needs to display a table of data, and also provide the users with the ability to interact with that data, by clicking specifically on a single row or element. The data is dynamic, so you do not know beforehand how many records will be displayed. Also, any change should be reflected immediately on the page instead of waiting for a full page refresh from the server. How do you make this happen? A traditional approach would involve using special server-side controls that can dynamically create tables from a data source and can even wire-up some JavaScript interactivity. The problem with this approach is that it usually requires some complicated extra communication between the server and the web browser either through "view state", hidden fields, or long and ugly query strings. Also, the output from these special controls is rarely easy to customize or manipulate in significant ways and reduces the options for how your site should look and behave. Another choice would be to create special JavaScript functions to asynchronously retrieve data from an endpoint, generate HTML markup within a table and then wire up events for buttons and links. This is a good solution, but requires a lot of coding and complexity which means that it will likely take longer to debug and refine. It may also be beyond the skill set of a given developer without significant research. 
The third option, available through a JavaScript MVVM like Kendo UI, strikes a balance between these two positions by reducing the complexity of the JavaScript but still providing powerful and simple data binding features inside of the page. Creating the view Here is a simple HTML page to show how a view basically works: <!DOCTYPE html> <html > <head> <title>MVVM Demo 1</title> <script src ="/Scripts/kendo/jquery.js"></script> <script src ="/Scripts/kendo/kendo.all.js"></script> <link href="/Content/kendo/kendo.common.css" rel="stylesheet" /> <link href="/Content/kendo/kendo.default.css" rel="stylesheet" /> <style type="text/css"> th { width: 135px; } </style> </head> <body> <table> <caption>People Data</caption> <thead> <tr> <th>Name</th> <th>Hair Color</th> <th>Favorite Food</th> </tr> </thead> <tbody data-template="row-template" data-bind="source: people"></tbody> </table> </body> </html> Here we have a simple table element with three columns but instead of the body containing any tr elements, there are some special HTML5 data-* attributes indicating that something special is going on here. These data-* attributes do nothing by themselves, but Kendo UI reads them (as you will see below) and interprets their values in order to link the View with the View-Model. The data-bind attribute indicates to Kendo UI that this element should be bound to a collection of objects called people. The data-template attribute tells Kendo UI that the people objects should be formatted using a Kendo UI template. Here is the code for the template: <script id="row-template" type="text/x-kendo-template"> <tr> <td data-bind="text: name"></td> <td data-bind="text: hairColor"></td> <td data-bind="text: favoriteFood"></td> </tr> </script> This is a simple template that defines a tr structure for each row within the table. The td elements also have a data-bind attribute on them so that Kendo UI knows to insert the value of a certain property as the "text" of the HTML element, which in this case means placing the value in between <td> and </td> as simple text on the page. Creating the Model and View-Model In order to wire this up, we need a View-Model that performs the data binding. Here is the View-Model code for this View: <script type="text/javascript"> var viewModel = kendo.observable({ people: [ {name: "John", hairColor: "Blonde", favoriteFood: "Burger"}, {name: "Bryan", hairColor: "Brown", favoriteFood: "Steak"}, {name: "Jennifer", hairColor: "Brown", favoriteFood: "Salad"} ] }); kendo.bind($("body"), viewModel); </script> A Kendo UI View-Model is declared through a call to kendo.observable() which creates an observable object that is then used for the data-binding within the View. An observable object is a special object that wraps a normal JavaScript variable with events that fire any time the value of that variable changes. These events notify the MVVM framework to update any data bindings that are using that variable's value, so that they can update immediately and reflect the change. These data bindings also work both ways so that if a field bound to an observable object variable is changed, the variable bound to that field is also changed in real time. In this case, I created an array called people that contains three objects with properties about some people. This array, then, operates as the Model in this example since it contains the data and the definition of how the data is structured. 
At the end of this code sample, you can see the call to kendo.bind($("body"), viewModel), which is how Kendo UI actually performs its MVVM wiring. I passed a jQuery selector for the body tag to the first parameter since this viewModel object applies to the full body of my HTML page, not just a portion of it. With everything combined, here is the full source for this simplified example: <!DOCTYPE html> <html > <head> <title>MVVM Demo 1</title> <script src="/Scripts/kendo/jquery.js"></script> <script src="/Scripts/kendo/kendo.all.js"></script> <link href="/Content/kendo/kendo.common.css" rel="stylesheet" /> <link href="/Content/kendo/kendo.default.css" rel="stylesheet" /> <style type="text/css"> th { width: 135px; } </style> </head> <body> <table> <caption>People Data</caption> <thead> <tr> <th>Name</th> <th>Hair Color</th> <th>Favorite Food</th> </tr> </thead> <tbody data-template="row-template" data-bind="source: people"></tbody> </table> <script id="row-template" type="text/x-kendo-template"> <tr> <td data-bind="text: name"></td> <td data-bind="text: hairColor"></td> <td data-bind="text: favoriteFood"></td> </tr> </script> <script type="text/javascript"> var viewModel = kendo.observable({ people: [ {name: "John", hairColor: "Blonde", favoriteFood: "Burger"}, {name: "Bryan", hairColor: "Brown", favoriteFood: "Steak"}, {name: "Jennifer", hairColor: "Brown", favoriteFood: "Salad"} ] }); kendo.bind($("body"), viewModel); </script> </body> </html> Here is a screenshot of the page in action. Note how the data from the JavaScript people array is populated into the table automatically: Even though this example contains a Model, a View, and a View-Model, all three units appear in the same HTML file. You could separate the JavaScript into other files, of course, but it is also acceptable to keep them together like this. Hopefully you are already seeing what sort of things this MVVM framework can do for you. Observable data binding Binding data into your HTML web page (View) using declarative attributes is great and very useful, but the MVVM framework offers some much more significant functionality that we didn't see in the last example. Instead of simply attaching data to the View and leaving it at that, the MVVM framework maintains a running copy of all of the View-Model's properties, and keeps references to those properties up to date in real time. This is why the View-Model is created with a function called "observable". The properties inside, being observable, report changes back up the chain so that the data-bound fields always reflect the latest data. Let's see some examples. Adding data dynamically Building on the example we just saw, add this horizontal rule and form just below the table in the HTML page: <hr /> <form> <header>Add a Person</header> <input type="text" name="personName" placeholder="Name" data-bind="value: personName" /><br /> <input type="text" name="personHairColor" placeholder="Hair Color" data-bind="value: personHairColor" /><br /> <input type="text" name="personFavFood" placeholder="Favorite Food" data-bind="value: personFavFood" /><br /> <button type="button" data-bind="click: addPerson">Add</button> </form> This adds a form to the page so that a user can enter data for a new person that should appear in the table. Note that we have added some data-bind attributes, but this time we are binding the value of the input fields, not the text.
Note also that we have added a data-bind attribute to the button at the bottom of the form that binds the click event of that button with a function inside our View-Model. By binding the click event to the addPerson JavaScript method, the addPerson method will be fired every time this button is clicked. These bindings keep the value of those input fields linked with the View-Model object at all times. If the value in one of these input fields changes, such as when a user types something in the box, the View-Model object will immediately see that change and update its properties to match; it will also update any areas of the page that are bound to the value of that property so that they match the new data as well. The binding for the button is special because it allows the View-Model object to attach its own event handler to the click event for this element. Binding an event handler to an event is nothing special by itself, but it is important to do it this way (through the data-bind attribute) so that the specific running View-Model instance inside of the page has attached one of its functions to this event so that the code inside the event handler has access to this specific View-Model's data properties and values. It also allows for a very specific context to be passed to the event that would be very hard to access otherwise. Here is the code I added to the View-Model just below the people array. The first three properties that we have in this example are what make up the Model. They contain that data that is observed and bound to the rest of the page: personName: "", // Model property personHairColor: "", // Model property personFavFood: "", // Model property addPerson: function () { this.get("people").push({ name: this.get("personName"), hairColor: this.get("personHairColor"), favoriteFood: this.get("personFavFood") }); this.set("personName", ""); this.set("personHairColor", ""); this.set("personFavFood", ""); } The first several properties you see are the same properties that we are binding to in the input form above. They start with an empty value because the form should not have any values when the page is first loaded. It is still important to declare these empty properties inside the View-Model in order that their value can be tracked when it changes. The function after the data properties, addPerson , is what we have bound to the click event of the button in the input form. Here in this function we are accessing the people array and adding a new record to it based on what the user has supplied in the form fields. Notice that we have to use the this.get() and this.set() functions to access the data inside of our View-Model. This is important because the properties in this View-Model are special observable properties so accessing their values directly may not give you the results you would expect. The most significant thing that you should notice about the addPerson function is that it is interacting with the data on the page through the View-Model properties. It is not using jQuery, document.querySelector, or any other DOM interaction to read the value of the elements! Since we declared a data-bind attribute on the values of the input elements to the properties of our View-Model, we can always get the value from those elements by accessing the View-Model itself. The values are tracked at all times. This allows us to both retrieve and then change those View-Model properties inside the addPerson function and the HTML page will show the changes right as it happens. 
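To make the get() and set() behavior concrete, here is a small sketch that is not part of the original example; it assumes the viewModel object from this article is already in scope and only illustrates how reads and writes are expected to flow through the observable wrapper:

// Reading a bound property goes through get(), which returns the current
// value that the input field and the View-Model share.
var typedName = viewModel.get("personName");

// Writing goes through set(), which updates the property and notifies any
// bindings, so the "Name" input field refreshes on its own.
viewModel.set("personName", "Sample Name");

// Assigning directly, as below, would change the value but bypass the
// change notifications, so bound elements would not update:
// viewModel.personName = "Sample Name";

The addPerson function relies on exactly this behavior when it reads the form fields and then resets them.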
By calling this.set() on the properties and changing their values to an empty string, we also clear the input fields of the values that the user just typed and added to the table. Once again, we change the View-Model properties without needing access to the HTML ourselves. Here is the complete source of this example: <!DOCTYPE html> <html > <head> <title>MVVM Demo 2</title> <script src="/Scripts/kendo/jquery.js"></script> <script src="/Scripts/kendo/kendo.all.js"></script> <link href="/Content/kendo/kendo.common.css" rel="stylesheet" /> <link href="/Content/kendo/kendo.default.css" rel="stylesheet" /> <style type="text/css"> th { width: 135px; } </style> </head> <body> <table> <caption>People Data</caption> <thead> <tr> <th>Name</th> <th>Hair Color</th> <th>Favorite Food</th> </tr> </thead> <tbody data-template="row-template" data-bind="source: people"></tbody> </table> <hr /> <form> <header>Add a Person</header> <input type="text" name="personName" placeholder="Name" data-bind="value: personName" /><br /> <input type="text" name="personHairColor" placeholder="Hair Color" data-bind="value: personHairColor" /><br /> <input type="text" name="personFavFood" placeholder="Favorite Food" data-bind="value: personFavFood" /><br /> <button type="button" data-bind="click: addPerson">Add</button> </form> <script id="row-template" type="text/x-kendo-template"> <tr> <td data-bind="text: name"></td> <td data-bind="text: hairColor"></td> <td data-bind="text: favoriteFood"></td> </tr> </script> <script type="text/javascript"> var viewModel = kendo.observable({ people: [ {name: "John", hairColor: "Blonde", favoriteFood: "Burger"}, {name: "Bryan", hairColor: "Brown", favoriteFood: "Steak"}, {name: "Jennifer", hairColor: "Brown", favoriteFood: "Salad"} ], personName: "", personHairColor: "", personFavFood: "", addPerson: function () { this.get("people").push({ name: this.get("personName"), hairColor: this.get("personHairColor"), favoriteFood: this.get("personFavFood") }); this.set("personName", ""); this.set("personHairColor", ""); this.set("personFavFood", ""); } }); kendo.bind($("body"), viewModel); </script> </body> </html> And here is a screenshot of the page in action. You will see that one additional person has been added to the table by filling out the form. Try it out yourself to see the immediate interaction that you get with this code: Using observable properties in the View We just saw how simple it is to add new data to observable collections in the View-Model, and how this causes any data-bound elements to immediately show that new data. Let's add some more functionality to illustrate working with individual elements and see how their observable values can update content on the page. To demonstrate this new functionality, I have added some columns to the table: <table> <caption>People Data</caption> <thead> <tr> <th>Name</th> <th>Hair Color</th> <th>Favorite Food</th> <th></th> <th>Live Data</th> </tr> </thead> <tbody data-template="row-template" data-bind="source: people"></tbody> </table> The first new column has no heading text but will contain a button on the page for each of the table rows. The second new column will display the value of the "live data" in the View-Model for each of the objects displayed in the table.
Here is the updated row template: <script id="row-template" type="text/x-kendo-template"> <tr> <td><input type="text" data-bind="value: name" /></td> <td><input type="text" data-bind="value: hairColor" /></td> <td><input type="text" data-bind="value: favoriteFood" /></td> <td><button type="button" data-bind="click: deletePerson">Delete</button></td> <td><span data-bind="text: name"></span>&nbsp;-&nbsp; <span data-bind="text: hairColor"></span>&nbsp;-&nbsp; <span data-bind="text: favoriteFood"></span></td> </tr> </script> Notice that I have replaced all of the simple text data-bind attributes with input elements and value data-bind attributes. I also added a button with a click data-bind attribute and a column that displays the text of the three properties so that you can see the observable behavior in real time. The View-Model gets a new method for the delete button: deletePerson: function (e) { var person = e.data; var people = this.get("people"); var index = people.indexOf(person); people.splice(index, 1); } When this function is called through the binding that Kendo UI has created, it passes an event argument, here called e, into the function; this argument contains a data property, which is a reference to the model object that was used to render the specific row of data. In this function, I create a person variable referencing the person in this row, get a reference to the people array, and then use the index of this person to splice it out of the array. When you click on the Delete button, you can observe the table reacting immediately to the change. Here is the full source code of the updated template and View-Model: <script id="row-template" type="text/x-kendo-template"> <tr> <td><input type="text" data-bind="value: name" /></td> <td><input type="text" data-bind="value: hairColor" /></td> <td><input type="text" data-bind="value: favoriteFood" /></td> <td><button type="button" data-bind="click: deletePerson">Delete</button></td> <td><span data-bind="text: name"></span>&nbsp;-&nbsp; <span data-bind="text: hairColor"></span>&nbsp;-&nbsp; <span data-bind="text: favoriteFood"></span></td> </tr> </script> <script type="text/javascript"> var viewModel = kendo.observable({ people: [ {name: "John", hairColor: "Blonde", favoriteFood: "Burger"}, {name: "Bryan", hairColor: "Brown", favoriteFood: "Steak"}, {name: "Jennifer", hairColor: "Brown", favoriteFood: "Salad"} ], personName: "", personHairColor: "", personFavFood: "", addPerson: function () { this.get("people").push({ name: this.get("personName"), hairColor: this.get("personHairColor"), favoriteFood: this.get("personFavFood") }); this.set("personName", ""); this.set("personHairColor", ""); this.set("personFavFood", ""); }, deletePerson: function (e) { var person = e.data; var people = this.get("people"); var index = people.indexOf(person); people.splice(index, 1); } }); kendo.bind($("body"), viewModel); </script> </body> </html> Here is a screenshot of the new page: Click on the Delete button to see an entry disappear. You can also see that I have added a new person to the table, and that the changes made in the table's input boxes immediately show up in the live data column on the right-hand side. This indicates that the View-Model is keeping track of the live data and updating its bindings accordingly.
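If you want to watch this live tracking from code rather than through the bound elements, Kendo UI observables also expose a change event. The following is a minimal sketch, not taken from the original example; it assumes the viewModel variable shown above, and the exact shape of the event argument is something to verify against the Kendo UI documentation for your version:

viewModel.bind("change", function (e) {
    // For edits made through the bound input fields, e.field is expected to
    // name the property that changed, for example "personName" or "people",
    // which makes it easy to log or react to updates.
    console.log("View-Model changed: " + e.field);
});

Attaching the handler after kendo.bind() is enough; every edit in a bound input and every add or delete in the people array flows through the same observable plumbing.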

Aperture in Action

Packt
06 Sep 2013
14 min read
Controlling clipped highlights The problem of clipped highlights is a very common issue that a photographer will often have to deal with. Digital cameras only have limited dynamic range, so clipping becomes an issue, especially with high-contrast scenes. However, if you shoot RAW, then your camera will often record more highlighted information than is visible in the image. You may already be familiar with recovering highlights by using the recovery slider in Aperture, but there are actually a couple of other ways that you can bring this information back into range. The three main methods of controlling lost highlights in Aperture are: Using the recovery slider Using curves Using shadows and highlights For many cases, using the recovery slider will be good enough, but the recovery slider has its limitations. Sometimes it still leaves your highlights looking too bright, or it doesn't give you the look you wish to achieve. The other two methods mentioned give you more control over the process of recovery. If you use a Curves adjustment, you can control the way the highlight rolls off, and you can reduce the artificial look that clipped highlights can give your image, even if technically the highlight is still clipped. A highlights & shadows adjustment is also useful because it has a different look, as compared to the one that you get when using the recovery slider. It works in a slightly different way, and includes more of the brighter tones of your image when making its calculations. The highlights and shadows adjustment has the added advantage of being able to be brushed in. So, how do you know which one to use? Consider taking a three-stepped approach. If the first step doesn't work, move on to the second, and so on. Eventually, it will become second nature, and you'll know which way will be the best by just looking at the photograph. Step 1 Use the recovery slider. Drag the slider up until any clipped areas of the image start to reappear. Only drag the slider until the clipped areas have been recovered, and then stop. You may find that if your highlights are completely clipped, you may need to drag the slider all the way to the right, as per the following screenshot: For most clipped highlight issues, this will probably be enough. If you want to see what's going on, add a Curves adjustment and set the Range field to the Extended range. You don't have to make any adjustments at this point, but the histogram in the Curves adjustment will now show you how much image data is being clipped, and how much data that you can actually recover. Real world example In the following screenshot, the highlights on the right-hand edge of the plant pot have been completely blown out: If we zoom in, you will be able to see the problem in more detail. As you can see, all the image information has been lost from the intricate edge of this cast iron plant pot. Luckily this image had been shot in RAW, and the highlights are easily recovered. In this case, all that was necessary was the use of the recovery slider. It was dragged upward until it reached a value of around 1.1, and this brought most of the detail back into the visible range. As you can see from the preceding image, the detail has been recovered nicely and there are no more clipped highlights. The following screenshot is the finished image after the use of the recovery slider: Step 2 If the recovery slider brought the highlights back into range, but still they are too bright, then try the Highlights & Shadows adjustment. 
This will allow you to bring the highlights down even further. If you find that it is affecting the rest of your image, you can use brushes to limit the highlight adjustment to just the area you want to recover. You may find that with the Highlight and Shadows adjustment, if you drag the sliders too far the image will start to look flat and washed out. In this case, using the mid-contrast slider can add some contrast back into the image. You should use the mid-contrast slider carefully though, as too much can create an unnatural image with too much contrast. Step 3 If the previous steps haven't addressed the problem to your satisfaction, or if the highlight areas are still clipped, you can add a roll off to your Curves adjustment. The following is a quick refresher on what to do: Add a Curves adjustment, if you haven't already added one. From the pop-up range menu at the bottom of the Curves adjustment, set the range to Extended. Drag the white point of the Curves slider till it encompasses all the image information. Create a roll off on the right-hand side of the curve, so it looks something like the following screenshot: If you're comfortable with curves, you can skip directly to step 3 and just use a Curves adjustment, but for better results, you should combine the preceding differing methods to best suit your image. Real world example In the following screenshot (of yours truly), the photo was taken under poor lighting conditions, and there is a badly blown out highlight on the forehead: Before we fix the highlights, however, the first thing that we need to do is to fix the overall white balance, which is quite poor. In this case, the easiest way to fix this problem is to use the Aperture's clever skin tone white-balance adjustment. On the White Balance adjustment brick from the pop-up menu, set the mode to Skin Tone. Now, select the color picker and pick an area of skin tone in the image. This will set the white balance to a more acceptable color. (You can tweak it more if it's not right, but this usually gives satisfactory results.) The next step is to try and fix the clipped highlight. Let's use the three-step approach that we discussed earlier. We will start by using the recovery slider. In this case, the slider was brought all the way up, but the result wasn't enough and leaves an unsightly highlight, as you can see in the following screenshot: The next step is to try the Highlight & Shadows adjustment. The highlights slider was brought up to the mid-point, and while this helped, it still didn't fix the overall problem. The highlights are still quite ugly, as you can see in the following screenshot: Finally, a Curves adjustment was added and a gentle roll off was applied to the highlight portion of the curve. While the burned out highlight isn't completely gone, there is no longer a harsh edge to it. The result is a much better image than the original, with a more natural-looking highlight as shown in the following screenshot: Finishing touches To take this image further, the face was brightened using another Curves adjustment, and the curves was brushed in over the facial area. A vignette was also added. Finally, a skin softening brush was used over the harsh shadow on the nose, and over the edges of the halo on the forehead, just to soften it even further. The result is a much better (and now useable) image than the one we started with. Fixing blown out skies Another common problem one often encounters with digital images is blown out skies. 
Sometimes it can be as a result of the image being clipped beyond the dynamic range of the camera, whereas other times the day may simply have been overcast and there is no detail there to begin with. While there are situations when the sky is too bright and you just need to bring the brightness down to better match the rest of the scene, that is easily fixed. But what if there is no detail there to recover in the first place? That scenario is what we are going to look at in the next section. This covers what to do when the sky is completely gone and there's nothing left to recover. There are options open to you in this case. The first is pretty obvious. Leave it as it is. However, you might have an image that is nicely lit otherwise, but all that's ruining it is a flat washed-out sky. What would add a nice balance to an image in such a scenario is some subtle blue in the sky, even if it's just a small amount. Luckily, this is fairly easy to achieve in Aperture. Perform the following steps: Try the steps outlined in the previous section to bring clipped highlights back into range. Sometimes simply using the recovery slider will bring clipped skies back into the visible range, depending on the capabilities of your camera. In order for the rest of this trick to work, your highlights must be in the visible range. If you have already made any enhancements using the Enhance brick and you want to preserve those, add another Enhance brick by choosing Add New Enhance adjustment from the cog pop-up on the side of the interface. If the Tint controls aren't visible on the Enhance brick, click on the little arrow beside the word Tint to reveal the Tint controls. Using the right-hand Tint control (the one with the White eyedropper under it), adjust the control until it adds some blue back to the sky. If this is adding too much blue to other areas of your image, then brush the enhance adjustment in by choosing Brush Enhance In from the cog pop-up menu. Real world example In this example, the sky has been completely blown out and has lost most of its color detail. The first thing to try is to see whether any detail can be recovered by using the recovery slider. In this case, some of the sky was recovered, but a lot of it was still burned out. There is simply no more information to recover. The next step is to use the tint adjustment as outlined in the instructions. This puts some color back in the sky and it looks more natural. A small adjustment of the Highlights & Shadows also helps bring the sky back into the range. Finishing touches While the sky has now been recovered, there is still a bit of work to be done. To brighten up the rest of the image, a Curves adjustment was added, and the upper part of the curve was brought up, while the shadows were brought down to add some contrast. The following is the Curves adjustment that was used: Finally, to reduce the large lens flare in the center of the image, I added a color adjustment and reduced the saturation and brightness of the various colors in the flare. I then painted the color adjustment in over the flare, and this reduced the impact of it on the image. This is the same technique that can be used for getting rid of color fringing, which will be discussed later in this article. The following screenshot is the final result: Removing objects from a scene One of the myths about photo workflow applications such as Aperture is that they're not good for pixel-level manipulations. 
People will generally switch over to something such as Photoshop if they need to do more complex operations, such as cloning out an object. However, Aperture's retouch tool is surprisingly powerful. If you need to remove small distracting objects from a scene, then it works really well. The following is an example of a shot that was entirely corrected in Aperture: It is not really practical to give step-by-step instructions for using the tool because every situation is different, so instead, what follows is a series of tips on how best to use the retouch function: To remove complex objects you will have to switch back and forth between the cloning and healing mode. Don't expect to do everything entirely in one mode or the other. To remove long lines, such as the telegraph wires in the preceding example, start with the healing tool. Use this till you get close to the edge of an object in the scene you want to keep. Then switch to the cloning tool to fix the areas close to the kept object. The healing tool can go a bit haywire near the edges of the frame, or the edges of another object, so it's often best to use the clone tool near the edges. Remember when using the clone tool that you need to keep changing your clone source so as to avoid leaving repetitive patterns in the cloned area. To change your source area, hold down the option key, and click on the image in the area that you want to clone from. Sometimes doing a few smaller strokes works better than one long, big stroke. You can only have one retouch adjustment, but each stroke is stored separately within it. You can delete individual strokes, but only in the reverse order in which they were created. You can't delete the first stroke, and keep the following ones if for example, you have 10 other strokes. It is worth taking the time to experiment with the retouch tool. Once you get the hang of this feature, you will save yourself a lot of time by not having to jump to another piece of software to do basic (or even advanced) cloning and healing. Fixing dust spots on multiple images A common use for the retouch tool is for removing sensor dust spots on an image. If your camera's sensor has become dirty, which is surprisingly common, you may find spots of dust creeping onto your images. These are typically found when shooting at higher f-stops (narrower apertures), such as f/11 or higher, and they manifest as round dark blobs. Dust spots are usually most visible in the bright areas of solid color, such as skies. The big problem with dust spots is that once your sensor has dust on it, it will record that dust in the same place in every image. Luckily Aperture's tools makes it pretty easy to remove those dust spots, and once you've removed them from one image, it's pretty simple to remove them from all your images. To remove dust spots on multiple images, perform the following steps: Start by locating the image in your batch where the dust spots are most visible.   Zoom in to 1:1 view (100 percent zoom), and press X on your keyboard to activate the retouch tool.   Switch the retouch tool to healing mode and decrease the size of your brush till it is just bigger than the dust spot. Make sure there is some softness on the brush. Click once over the spot to get rid of it. You should try to click on it rather than paint when it comes to dust spots, as you want the least amount of area retouched as possible. 
Scan through your image when viewing at 1:1, and repeat the preceding process until you have removed all the dust spots Close the retouch tool's HUD to drop the tool. Zoom back out. Select the lift tool from the Aperture interface (it's at the bottom of the main window). In the lift and stamp HUD, delete everything except the Retouch adjustment in the Adjustments submenu. To do this, select all the items except the retouch entry, and press the delete (or backspace) key. Select another image or group of images in your batch, and press the Stamp Selected Images button on the Lift and Stamp HUD. Your retouched settings will be copied to all your images, and because the dust spots don't move between shots, the dust should be removed on all your images.

Playing with Max 6 Framework

Packt
06 Sep 2013
17 min read
(For more resources related to this topic, see here.) Communicating easily with Max 6 – the [serial] object The easiest way to exchange data between your computer running a Max 6 patch and your Arduino board is via the serial port. The USB connector of our Arduino boards includes the FTDI integrated circuit EEPROM FT-232 that converts the RS-232 plain old serial standard to USB. We are going to use again our basic USB connection between Arduino and our computer in order to exchange data here. The [serial] object We have to remember the [serial] object's features. It provides a way to send and receive data from a serial port. To do this, there is a basic patch including basic blocks. We are going to improve it progressively all along this article. The [serial] object is like a buffer we have to poll as much as we need. If messages are sent from Arduino to the serial port of the computer, we have to ask the [serial] object to pop them out. We are going to do this in the following pages. This article is also a pretext for me to give you some of my tips and tricks in Max 6 itself. Take them and use them; they will make your patching life easier. Selecting the right serial port we have used the message (print) sent to [serial] in order to list all the serial ports available on the computer. Then we checked the Max window. That was not the smartest solution. Here, we are going to design a better one. We have to remember the [loadbang] object. It fires a bang, that is, a (print) message to the following object as soon as the patch is loaded. It is useful to set things up and initialize some values as we could inside our setup() block in our Arduino board's firmware. Here, we do that in order to fill the serial port selector menu. When the [serial] object receives the (print) message, it pops out a list of all the serial ports available on the computer from its right outlet prepended by the word port. We then process the result by using [route port] that only parses lists prepended with the word port. The [t] object is an abbreviation of [trigger]. This object sends the incoming message to many locations, as is written in the documentation, if you assume the use of the following arguments: b means bang f means float number i means integer s means symbol l means list (that is, at least one element) We can also use constants as arguments and as soon as the input is received, the constant will be sent as it is. At last, the [trigger] output messages in a particular order: from the rightmost outlet to the leftmost one. So here we take the list of serial ports being received from the [route] object; we send the clear message to the [umenu] object (the list menu on the left side) in order to clear the whole list. Then the list of serial ports is sent as a list (because of the first argument) to [iter]. [iter] splits a list into its individual elements. [prepend] adds a message in front of the incoming input message. That means the global process sends messages to the [umenu] object similar to the following: append xxxxxx append yyyyyy Here xxxxxx and yyyyyy are the serial ports that are available. This creates the serial port selector menu by filling the list with the names of the serial ports. This is one of the typical ways to create some helpers, in this case the menu, in our patches using UI elements. As soon as you load this patch, the menu is filled, and you only have to choose the right serial port you want to use. 
As soon as you select one element in the menu, the number of the element in the list is fired to its leftmost outlet. We prepend this number by port and send that to [serial], setting it up to the right-hand serial port. Polling system One of the most used objects in Max 6 to send regular bangs in order to trigger things or count time is [metro]. We have to use one argument at least; this is the time between two bangs in milliseconds. Banging the [serial] object makes it pop out the values contained in its buffer. If we want to send data continuously from Arduino and process them with Max 6, activating the [metro] object is required. We then send a regular bang and can have an update of all the inputs read by Arduino inside our Max 6 patch. Choosing a value between 15 ms and 150 ms is good but depends on your own needs. Let's now see how we can read, parse, and select useful data being received from Arduino. Parsing and selecting data coming from Arduino First, I want to introduce you to a helper firmware inspired by the Arduino2Max page on the Arduino website but updated and optimized a bit by me. It provides a way to read all the inputs on your Arduino, to pack all the data read, and to send them to our Max 6 patch through the [serial] object. The readAll firmware The following code is the firmware. int val = 0; void setup() { Serial.begin(9600); pinMode(13,INPUT); } void loop() { // Check serial buffer for characters incoming if (Serial.available() > 0){ // If an 'r' is received then read all the pins if (Serial.read() == 'r') { // Read and send analog pins 0-5 values for (int pin= 0; pin<=5; pin++){ val = analogRead(pin); sendValue (val); } // Read and send digital pins 2-13 values for (int pin= 2; pin<=13; pin++){ val = digitalRead(pin); sendValue (val); } Serial.println();// Carriage return to mark end of data flow. delay (5); // prevent buffer overload } } } void sendValue (int val){ Serial.print(val); Serial.write(32); // add a space character after each value sent } For starters, we begin the serial communication at 9600 bauds in the setup() block. As usual with serial communication handling, we check if there is something in the serial buffer of Arduino at first by using the Serial.available() function. If something is available, we check if it is the character r. Of course, we can use any other character. r here stands for read, which is basic. If an r is received, it triggers the read of both analog and digital ports. Each value (the val variable) is passed to the sendValue()function; this basically prints the value into the serial port and adds a space character in order to format things a bit to provide an easier parsing by Max 6. We could easily adapt this code to only read some inputs and not all. We could also remove the sendValue() function and find another way of packing data. At the end, we push a carriage return to the serial port by using Serial.println(). This creates a separator between each pack of data that is sent. Now, let's improve our Max 6 patch to handle this pack of data being received from Arduino. The ReadAll Max 6 patch The following screenshot is the ReadAll Max patch that provides a way to communicate with our Arduino: Requesting data from Arduino First, we will see a [t b b] object. It is also a trigger, ordering bangs provided by the [metro] object. Each bang received triggers another bang to another [trigger] object, then another one to the [serial] object itself. The [t 13 r] object can seem tricky. 
It just triggers a character r and then the integer 13. The character r is sent to [spell] that converts it to ASCII code and then sends the result to [serial]. 13 is the ASCII code for a carriage return. This structure provides a way to fire the character r to the [serial] object, which means to Arduino, each time that the metro bangs. As we already see in the firmware, it triggers Arduino to read all its inputs, then to pack the data, and then to send the pack to the serial port for the Max 6 patch. To summarize what the metro triggers at each bang, we can write this sequence: Send the character r to Arduino. Send a carriage return to Arduino. Bang the [serial] object. This triggers Arduino to send back all its data to the Max patch. Parsing the received data Under the [serial] object, we can see a new structure beginning with the [sel 10 13] object. This is an abbreviation for the [select] object. This object selects an incoming message and fires a bang to the specific output if the message equals the argument corresponding to the specific place of that output. Basically, here we select 10 or 13. The last output pops the incoming message out if that one doesn't equal any argument. Here, we don't want to consider a new line feed (ASCII code 10). This is why we put it as an argument, but we don't do anything if that's the one that has been selected. It is a nice trick to avoid having this message trigger anything and even to not have it from the right output of [select]. Here, we send all the messages received from Arduino, except 10 or 13, to the [zl group 78] object. The latter is a powerful list for processing many features. The group argument makes it easy to group the messages received in a list. The last argument is to make sure we don't have too many elements in the list. As soon as [zl group] is triggered by a bang or the list length reaches the length argument value, it pops out the whole list from its left outlet. Here, we "accumulate" all the messages received from Arduino, and as soon as a carriage return is sent (remember we are doing that in the last rows of the loop() block in the firmware), a bang is sent and all the data is passed to the next object. We currently have a big list with all the data inside it, with each value being separated from the other by a space character (the famous ASCII code 32 we added in the last function of the firmware). This list is passed to the [itoa] object. itoa stands for integer to ASCII . This object converts integers to ASCII characters. The [fromsymbol] object converts a symbol to a list of messages. Finally, after this [fromsymbol] object we have our big list of values separated by spaces and totally readable. We then have to unpack the list. [unpack] is a very useful object that provides a way to cut a list of messages into individual messages. We can notice here that we implemented exactly the opposite process in the Arduino firmware while we packed each value into a big message. [unpack] takes as many arguments as we want. It requires knowing about the exact number of elements in the list sent to it. Here we send 12 values from Arduino, so we put 12 i arguments. i stands for integer . If we send a float, [unpack] would cast it as an integer. It is important to know this. Too many students are stuck with troubleshooting this in particular. We are only playing with the integer here. Indeed, the ADC of Arduino provides data from 0 to 1023 and the digital input provides 0 or 1 only. 
We attached a number box to each output of the [unpack] object in order to display each value. Then we used a [change] object. This latter is a nice object. When it receives a value, it passes it to its output only if it is different from the previous value received. It provides an effective way to avoid sending the same value each time when it isn't required. Here, I chose the argument -1 because this is not a value sent by the Arduino firmware, and I'm sure that the first element sent will be parsed. So we now have all our values available. We can use them for different jobs. But I propose to use a smarter way, and this will also introduce a new concept. Distributing received data and other tricks Let's introduce here some other tricks to improve our patching style. Cordless trick We often have to use some data in our patches. The same data has to feed more than one object. A good way to avoid messy patches with a lot of cord and wires everywhere is to use the [send] and [receive] objects. These objects can be abbreviated with [s] and [r], and they generate communication buses and provide a wireless way to communicate inside our patches. These three structures are equivalent. The first one is a basic cord. As soon as we send data from the upper number box, it is transmitted to the one at the other side of the cord. The second one generates a data bus named busA. As soon as you send data into [send busA], each [receive busA] object in your patch will pop out that data. The third example is the same as the second one, but it generates another bus named busB. This is a good way to distribute data. I often use this for my master clock, for instance. I have one and only one master clock banging a clock to [send masterClock], and wherever I need to have that clock, I use [receive masterClock] and it provides me with the data I need. If you check the global patch, you can see that we distribute data to the structures at the bottom of the patch. But these structures could also be located elsewhere. Indeed, one of the strengths of any visual programming framework such as Max 6 is the fact that you can visually organize every part of your code exactly as you want in your patcher. And please, do that as much as you can. This will help you to support and maintain your patch all through your long development months. Check the previous screenshot. I could have linked the [r A1] object at the top left corner to the [p process03] object directly. But maybe this will be more readable if I keep the process chains separate. I often work this way with Max 6. This is one of the multiple tricks I teach in my Max 6 course. And of course, I introduced the [p] object, that is the [patcher] abbreviation. Let's check a couple of tips before we continue with some good examples involving Max 6 and Arduino. Encapsulation and subpatching When you open Max 6 and go to File | New Patcher , it opens a blank patcher. The latter, if you recall, is the place where you put all the objects. There is another good feature named subpatching . With this feature, you can create new patchers inside patchers, and embed patchers inside patchers as well. A patcher contained inside another one is also named a subpatcher. Let's see how it works with the patch named ReadAllCutest.maxpat. There are four new objects replacing the whole structures we designed before. These objects are subpatchers. 
If you double-click on them in patch lock mode or if you push the command key (or Ctrl for Windows), double-click on them in patch edit mode and you'll open them. Let's see what is there inside them. The [requester] subpatcher contains the same architecture that we designed before, but you can see the brown 1 and 2 objects and another blue 1 object. These are inlets and outlets. Indeed, they are required if you want your subpatcher to be able to communicate with the patcher that contains it. Of course, we could use the [send] and [receive] objects for this purpose too. The position of these inlets and outlets in your subpatcher matters. Indeed, if you move the 1 object to the right of the 2 object, the numbers get swapped! And the different inlets in the upper patch get swapped too. You have to be careful about that. But again, you can organize them exactly as you want and need. Check the next screenshot: And now, check the root patcher containing this subpatcher. It automatically inverts the inlets, keeping things relevant. Let's now have a look at the other subpatchers: The [p portHandler] subpatcher The [p dataHandler] subpatcher The [p dataDispatcher] subpatcher In the last figure, we can see only one inlet and no outlets. Indeed, we just encapsulated the global data dispatcher system inside the subpatcher. And this latter generates its data buses with [send] objects. This is an example where we don't need and even don't want to use outlets. Using outlets would be messy because we would have to link each element requesting this or that value from Arduino with a lot of cords. In order to create a subpatcher, you only have to type n to create a new object, and type p, a space, and the name of your subpatcher. While I designed these examples, I used something that works faster than creating a subpatcher, copying and pasting the structure on the inside, removing the structure from the outside, and adding inlets and outlets. This feature is named encapsulate and is part of the Edit menu of Max 6. You have to select the part of the patch you want to encapsulate inside a subpatcher, then click on Encapsulate , and voilà! You have just created a subpatcher including your structures that are connected to inlets and outlets in the correct order. Encapsulate and de-encapsulate features You can also de-encapsulate a subpatcher. It would follow the opposite process of removing the subpatcher and popping out the whole structure that was inside directly outside. Subpatching helps to keep things well organized and readable. We can imagine that we have to design a whole patch with a lot of wizardry and tricks inside it. This one is a processing unit, and as soon as we know what it does, after having finished it, we don't want to know how it does it but only use it . This provides a nice abstraction level by keeping some processing units closed inside boxes and not messing the main patch. You can copy and paste the subpatchers. This is a powerful way to quickly duplicate process units if you need to. But each subpatcher is totally independent of the others. This means that if you need to modify one because you want to update it, you'd have to do that individually in each subpatcher of your patch. This can be really hard. Let me introduce you to the last pure Max 6 concept now named abstractions before I go further with Arduino. Abstractions and reusability Any patch created and saved can be used as a new object in another patch. 
We can do this by creating a new object by typing n in a patcher; then we just have to type the name of our previously created and saved patch. A patch used in this way is called an abstraction . In order to call a patch as an abstraction in a patcher, the patch has to be in the Max 6 path in order to be found by it. You can check the path known by Max 6 by going to Options | File Preferences . Usually, if you put the main patch in a folder and the other patches you want to use as abstractions in that same folder, Max 6 finds them. The concept of abstraction in Max 6 itself is very powerful because it provides reusability . Indeed, imagine you need and have a lot of small (or big) patch structures that you are using every day, every time, and in almost every project. You can put them into a specific folder on your disk included in your Max 6 path and then you can call (we say instantiate ) them in every patch you are designing. Since each patch using it has only a reference to the one patch that was instantiated itself, you just need to improve your abstraction; each time you load a patch using it, the patch will have up-to-date abstractions loaded inside it. It is really easy to maintain all through the development months or years. Of course, if you totally change the abstraction to fit with a dedicated project/patch, you'll have some problems using it with other patches. You have to be careful to maintain even short documentation of your abstractions. Let's now continue by describing some good examples with Arduino.

Setting up a single-width column system (Simple)

Packt
05 Sep 2013
3 min read
(For more resources related to this topic, see here.) Getting ready To perform the steps listed in this article, we will need a text editor, a browser, and a copy of the Masonry plugin. Any text editor will do; my browser of choice is Google Chrome, as the V8 JavaScript engine that ships with it generally performs better and it supports CSS3 transitions, so we see smoother animations when resizing the browser window. We need to make sure we have a copy of the most recent version of Masonry, which was Version 2.1.08 at the time of writing this article. This version is compatible with the most recent version of jQuery, which is Version 1.9.1. A production copy of Masonry can be found on the GitHub repository at the following address: https://github.com/desandro/masonry/blob/master/jquery.masonry.min.js For jQuery, we will be using a content delivery network (CDN) for ease of development. Open the basic single-column HTML file to follow along. You can download this file from the following location: http://www.packtpub.com/sites/default/files/downloads/1-single-column.zip How to do it... Set up the styling for the masonry-item class with the proper width, padding, and margins. We want our items to have a total width of 200 pixels, including the padding and margins. <style> .masonry-item { background: #FFA500; float: left; margin: 5px; padding: 5px; width: 180px; } </style> Set up the HTML structure on which you are going to use Masonry. At a minimum, we need a tagged Masonry container with the elements inside tagged as Masonry items. <div id='masonry-container'> <div class='masonry-item'> Maecenas faucibus mollis interdum. </div> <div class='masonry-item'> Maecenas faucibus mollis interdum. Donec sed odio dui. Nullam quis risus eget urna mollis ornare vel eu leo. Vestibulum id ligula porta felis euismod semper. </div> <div class='masonry-item'> Nullam quis risus eget urna mollis ornare vel eu leo. Cras justo odio, dapibus ac facilisis in, egestas eget quam. Aenean eu leo quam. Pellentesque ornare sem lacinia quam venenatis vestibulum. </div> </div> Not all Masonry options need to be included, but it is recommended (by David DeSandro, the creator of Masonry) to set itemSelector for single-column usage. We will be setting this every time we use Masonry. <script> $(function() { $('#masonry-container').masonry({ // options itemSelector : '.masonry-item' }); }); </script> How it works... Using jQuery, we select our Masonry container and use the itemSelector option to select the elements that will be affected by Masonry. The size of the columns is determined by the CSS code. Using the box model, we set each Masonry item to a 190-px wide box (180-px content width, with a 5-px padding all around the item). The margin is our gutter between elements, which is also 5-px wide. With this setup, we can confirm that we have built the basic single-column grid system, with each column being 200-px wide. The end result should look like the following screenshot: Summary This article showed you how to set up the very basic single-width column system around which Masonry revolves. Resources for Article: Further resources on this subject: Designing Site Layouts in Inkscape [Article] New features in Domino Designer 8.5 [Article] Using jQuery and jQueryUI Widget Factory plugins with RequireJS [Article]
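A natural follow-up to this recipe is adding new items after the initial layout has run. The snippet below is a speculative sketch rather than part of the recipe's downloadable files: the 'appended' method name is taken from the Masonry 2.x jQuery API as I recall it, so confirm it against the documentation for the version you install.

var $container = $('#masonry-container');
// Build a new item that matches the masonry-item styling from the recipe.
var $newItem = $("<div class='masonry-item'>Vivamus sagittis lacus vel augue laoreet.</div>");
// Append it to the container, then ask Masonry to lay out just that item,
// leaving the existing bricks where they already are.
$container.append($newItem).masonry('appended', $newItem);
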

Chef Infrastructure

Packt
05 Sep 2013
10 min read
(For more resources related to this topic, see here.) First, let's talk about the terminology used in the Chef universe. A cookbook is a collection of recipes – codifying the actual resources, which should be installed and configured on your node – and the files and configuration templates needed. Once you've written your cookbooks, you need a way to deploy them to the nodes you want to provision. Chef offers multiple ways for this task. The most widely used way is to use a central Chef Server. You can either run your own or sign up for Opscode's Hosted Chef. The Chef Server is the central registry where each node needs to get registered. The Chef Server distributes the cookbooks to the nodes based on their configuration settings. Knife is Chef's command-line tool used to interact with the Chef Server. You use it for uploading cookbooks and managing other aspects of Chef. On your nodes, you need to install Chef Client – the part that retrieves the cookbooks from the Chef Server and executes them on the node. In this article, we'll see the basic infrastructure components of your Chef setup at work and learn how to use the basic tools. Let's get started with having a look at how to use Git as a version control system for your cookbooks. Using version control Do you manually back up every file before you change it? And do you invent creative filename extensions like _me and _you when you try to collaborate on a file? If you answer yes to any of the preceding questions, it's time to rethink your process. A version control system (VCS) helps you stay sane when dealing with important files and collaborating on them. Using version control is a fundamental part of any infrastructure automation. There are multiple solutions (some free, some paid) for managing source version control including Git, SVN, Mercurial, and Perforce. Due to its popularity among the Chef community, we will be using Git. However, you could easily use any other version control system with Chef. Getting ready You'll need Git installed on your box. Either use your operating system's package manager (such as Apt on Ubuntu or Homebrew on OS X), or simply download the installer from www.git-scm.org. Git is a distributed version control system. This means that you don't necessarily need a central host for storing your repositories. But in practice, using GitHub as your central repository has proven to be very helpful. In this article, I'll assume that you're using GitHub. Therefore, you need to go to github.com and create a (free) account to follow the instructions given in this article. Make sure that you upload your SSH key following the instructions at https://help.github.com/articles/generating-ssh-keys, so that you're able to use the SSH protocol to interact with your GitHub account. As soon as you've created your GitHub account, you should create your repository by visiting https://github.com/new and using chef-repo as the repository name. How to do it... Before you can write any cookbooks, you need to set up your initial Git repository on your development box. Opscode provides an empty Chef repository to get you started. Let's see how you can set up your own Chef repository with Git using Opscode's skeleton. Download Opscode's skeleton Chef repository as a tarball: mma@laptop $ wget http://github.com/opscode/chef-repo/tarball/master...TRUNCATED OUTPUT...2013-07-05 20:54:24 (125 MB/s) - 'master' saved [9302/9302] Extract the downloaded tarball: mma@laptop $ tar xzvf master Rename the directory. 
Replace 2c42c6a with whatever your downloaded tarball contained in its name: mma@laptop $ mv opscode-chef-repo-2c42c6a/ chef-repo Change into your newly created Chef repository: mma@laptop $ cd chef-repo/ Initialize a fresh Git repository: mma@laptop:~/chef-repo $ git init .Initialized empty Git repository in /Users/mma/work/chef-repo/.git/ Connect your local repository to your remote repository on github.com. Make sure to replace mmarschall with your own GitHub username: mma@laptop:~/chef-repo $ git remote add origin [email protected]:mmarschall/chef-repo.git Add and commit Opscode's default directory structure: mma@laptop:~/chef-repo $ git add .mma@laptop:~/chef-repo $ git commit -m "initial commit"[master (root-commit) 6148b20] initial commit10 files changed, 339 insertions(+), 0 deletions(-)create mode 100644 .gitignore...TRUNCATED OUTPUT...create mode 100644 roles/README.md Push your initialized repository to GitHub. This makes it available to all your co-workers to collaborate on it. mma@laptop:~/chef-repo $ git push -u origin master...TRUNCATED OUTPUT...To [email protected]:mmarschall/chef-repo.git* [new branch] master -> master How it works... You've downloaded a tarball containing Opscode's skeleton repository. Then, you've initialized your chef-repo and connected it to your own repository on GitHub. After that, you've added all the files from the tarball to your repository and committed them. This makes Git track your files and the changes you make later. As a last step, you've pushed your repository to GitHub, so that your co-workers can use your code too. There's more... Let's assume you're working on the same chef-repo repository together with your co-workers. They cloned your repository, added a new cookbook called other_cookbook, committed their changes locally, and pushed their changes back to GitHub. Now it's time for you to get the new cookbook down to your own laptop. Pull your co-workers, changes from GitHub. This will merge their changes into your local copy of the repository. mma@laptop:~/chef-repo $ git pull From github.com:mmarschall/chef-repo * branch master -> FETCH_HEAD ...TRUNCATED OUTPUT... create mode 100644 cookbooks/other_cookbook/recipes/default.rb In the case of any conflicting changes, Git will help you merge and resolve them. Installing Chef on your workstation If you want to use Chef, you'll need to install it on your local workstation first. You'll have to develop your configurations locally and use Chef to distribute them to your Chef Server. Opscode provides a fully packaged version, which does not have any external prerequisites. This fully packaged Chef is called the Omnibus Installer. We'll see how to use it in this section. Getting ready Make sure you've curl installed on your box by following the instructions available at http://curl.haxx.se/download.html. How to do it... Let's see how to install Chef on your local workstation using Opscode's Omnibus Chef installer: In your local shell, run the following command: mma@laptop:~/chef-repo $ curl -L https://www.opscode.com/chef/install.sh | sudo bashDownloading Chef......TRUNCATED OUTPUT...Thank you for installing Chef! Add the newly installed Ruby to your path: mma@laptop:~ $ echo 'export PATH="/opt/chef/embedded/bin:$PATH"'>> ~/.bash_profile && source ~/.bash_profile How it works... The Omnibus Installer will download Ruby and all the required Ruby gems into /opt/chef/embedded. 
There's more...

If you already have Ruby installed on your box, you can simply install the Chef Ruby gem by running:

   mma@laptop:~ $ gem install chef

Using the Hosted Chef platform

If you want to get started with Chef right away (without the need to install your own Chef Server), or if you want a third party to give you a Service Level Agreement (SLA) for your Chef Server, you can sign up for Hosted Chef by Opscode. Opscode operates Chef as a cloud service. It's quick to set up and gives you full control, using users and groups to manage the access permissions to your Chef setup.

We'll configure Knife, Chef's command-line tool, to interact with Hosted Chef, so that you can start managing your nodes.

Getting ready

Before being able to use Hosted Chef, you need to sign up for the service. There is a free account for up to five nodes. Visit http://www.opscode.com/hosted-chef and register for a free trial or the free account. I registered as the user webops with an organization short-name of awo.

After registering your account, it is time to prepare your organization to be used with your chef-repo repository.

How to do it...

Carry out the following steps to interact with Hosted Chef:

1. Navigate to http://manage.opscode.com/organizations. After logging in, you can start downloading your validation keys and configuration file.

2. Select your organization to be able to see its contents using the web UI.

3. Regenerate the validation key for your organization and save it as <your-organization-short-name>.pem in the .chef directory inside your chef-repo repository.

4. Generate the Knife config and put the downloaded knife.rb into the .chef directory inside your chef-repo directory as well. Make sure you replace webops with the username you chose for Hosted Chef and awo with the short-name you chose for your organization:

   current_dir = File.dirname(__FILE__)
   log_level                :info
   log_location             STDOUT
   node_name                "webops"
   client_key               "#{current_dir}/webops.pem"
   validation_client_name   "awo-validator"
   validation_key           "#{current_dir}/awo-validator.pem"
   chef_server_url          "https://api.opscode.com/organizations/awo"
   cache_type               'BasicFile'
   cache_options( :path => "#{ENV['HOME']}/.chef/checksums" )
   cookbook_path            ["#{current_dir}/../cookbooks"]

5. Use Knife to verify that you can connect to your Hosted Chef organization. It should only list your validator client so far. Instead of awo, you'll see your organization's short-name:

   mma@laptop:~/chef-repo $ knife client list
   awo-validator

How it works...

Hosted Chef uses two private keys (called validators): one for the organization and the other for every user. You need to tell Knife where it can find these two keys in your knife.rb file.

The following two lines of code in your knife.rb file tell Knife which organization to use and where to find its private key:

   validation_client_name   "awo-validator"
   validation_key           "#{current_dir}/awo-validator.pem"

The following line of code in your knife.rb file tells Knife where to find your user's private key:

   client_key               "#{current_dir}/webops.pem"

And the following line of code in your knife.rb file tells Knife that you're using Hosted Chef. You will find your organization name as the last part of the URL:

   chef_server_url          "https://api.opscode.com/organizations/awo"

Using the knife.rb file and your two private keys, Knife can now connect to your organization hosted by Opscode. You do not need your own, self-hosted Chef Server, nor do you need to use Chef Solo in this setup.
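With the connection working, Knife becomes your day-to-day tool for talking to your Hosted Chef organization. The following sketch shows a few typical commands; the cookbook name ntp, the IP address, the SSH user, and the node name are placeholders for your own values, and the bootstrap options may vary slightly between Chef versions:

   # Upload a cookbook from your chef-repo to your organization
   knife cookbook upload ntp

   # See what the Chef Server knows about
   knife cookbook list
   knife node list

   # Install the Chef Client on a new machine and register it as a node
   knife bootstrap 192.0.2.10 --ssh-user ubuntu --sudo --node-name web01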
There's more...

This setup is good for you if you do not want to worry about running, scaling, and updating your own Chef Server and if you're happy with saving all your configuration data in the cloud (under Opscode's control). If you need to have all your configuration data within your own network boundaries, you might sign up for Private Chef, which is a fully supported and enterprise-ready version of Chef Server. If you don't need any advanced enterprise features like role-based access control or multi-tenancy, then the open source version of Chef Server might be just right for you.

Summary

In this article, we learned about key concepts such as cookbooks, roles, and environments, and how to use some basic tools such as Git, Knife, Chef Shell, Vagrant, and Berkshelf.

Resources for Article:

Further resources on this subject:

Automating the Audio Parameters – How it Works [Article]
Skype automation [Article]
Cross-browser-distributed testing [Article]