
How-To Tutorials - Server-Side Web Development

URL Routing and Template Rendering

Packt · 24 Feb 2015 · 11 min read
In this article by Ryan Baldwin, the author of Clojure Web Development Essentials, we will start building our application, creating actual endpoints that process HTTP requests and return something we can look at. We will:

- Learn what the Compojure routing library is and how it works
- Build our own Compojure routes to handle an incoming request

What this article won't cover, however, is making any of our HTML pretty, client-side frameworks, or JavaScript. Our goal is to understand the server-side/Clojure components and get up and running as quickly as possible. As a result, our templates are going to look pretty basic, if not downright embarrassing.

What is Compojure?

Compojure is a small, simple library that allows us to create specific request handlers for specific URLs and HTTP methods. In other words, "HTTP method A requesting URL B will execute Clojure function C." By allowing us to do this, we can create our application in a sane, URL-driven way, and thus architect our code meaningfully. For the studious among us, the Compojure docs can be found at https://github.com/weavejester/compojure/wiki.

Creating a Compojure route

Let's work through an example that will allow the awful-sounding tech jargon to make sense. We will create an extremely basic route, which will simply print out the original request map to the screen. Let's perform the following steps:

1. Open the home.clj file.
2. Alter the home-routes defroutes so that it looks like this:

    (defroutes home-routes
      (GET "/" [] (home-page))
      (GET "/about" [] (about-page))
      (ANY "/req" request (str request)))

3. Start the Ring server if it's not already started.
4. Navigate to http://localhost:3000/req. You should see a plain-text dump of the incoming request map.

It's possible that your Ring server will be serving off a port other than 3000. Check the output of lein ring server for the serving port if you're unable to connect to the URL listed in step 4.

Using defroutes

Before we dive too much into the anatomy of the routes, we should speak briefly about what defroutes is. The defroutes macro packages up all of the routes and creates one big Ring handler out of them. Of course, you don't need to define all the routes for an application under a single defroutes macro. You can, and should, spread them out across various namespaces and then incorporate them into the app in Luminus' handler namespace. Before we start making a bunch of example routes, let's move the route we've already created to its own namespace:

1. Create a new namespace, hipstr.routes.test-routes (/hipstr/routes/test_routes.clj).
2. Ensure that the namespace makes use of the Compojure library:

    (ns hipstr.routes.test-routes
      (:require [compojure.core :refer :all]))

3. Next, use the defroutes macro to create a new set of routes, and move the /req route we created in the hipstr.routes.home namespace under it:

    (defroutes test-routes
      (ANY "/req" request (str request)))

4. Incorporate the new test-routes route into our application handler. In hipstr.handler, add a requirement for the hipstr.routes.test-routes namespace:

    (:require [compojure.core :refer [defroutes]]
              [hipstr.routes.home :refer [home-routes]]
              [hipstr.routes.test-routes :refer [test-routes]]
              …)

5. Finally, add the test-routes route to the list of routes in the call to app-handler:

    (def app (app-handler
               ;; add your application routes here
               [home-routes test-routes base-routes]
               …))

We've now created a new routing namespace.
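Because defroutes produces an ordinary Ring handler, that is, a function of a request map, we can poke at it directly in a REPL without starting a server. The following is a minimal sketch of that idea; the hand-built request map is illustrative only:

    ;; test-routes is just a function: pass it a Ring request map and it
    ;; returns a Ring response map (or nil if no route matches).
    (require '[hipstr.routes.test-routes :refer [test-routes]])

    (test-routes {:request-method :get
                  :uri "/req"
                  :headers {}})
    ;; => {:status 200, :headers {...}, :body "{:request-method :get, ...}"}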
It's in this namespace that we will create the rest of the routing examples.

Anatomy of a route

So what exactly did we just create? We created a Compojure route, which responds to any HTTP method at /req and returns the result of a called function, in our case a string representation of the original request map.

Defining the method

The first argument of the route defines which HTTP method the route will respond to; our route uses the ANY macro, which means it will respond to any HTTP method. Alternatively, we could restrict which HTTP methods the route responds to by specifying a method-specific macro. The compojure.core namespace provides macros for GET, POST, PUT, DELETE, HEAD, OPTIONS, and PATCH. Let's change our route to respond only to requests made using the GET method:

    (GET "/req" request (str request))

When you refresh your browser, the entire request map is printed to the screen, as we'd expect. However, if the URL and the method used to make the request don't match those defined in our route, the not-found route in hipstr.handler/base-routes is used. We can see this in action by changing our route to listen only to the POST method:

    (POST "/req" request (str request))

If you refresh the browser again, you'll notice we don't get anything back. In fact, an "HTTP 404: Page Not Found" response is returned to the client. If we POST to the URL from the terminal using curl, we'll get the following expected response:

    # curl -d {} http://localhost:3000/req
    {:ssl-client-cert nil, :go-bowling? "YES! NOW!", :cookies {}, :remote-addr "0:0:0:0:0:0:0:1", :params {}, :flash nil, :route-params {}, :headers {"user-agent" "curl/7.37.1", "content-type" "application/x-www-form-urlencoded", "content-length" "2", "accept" "*/*", "host" "localhost:3000"}, :server-port 3000, :content-length 2, :form-params {}, :session/key nil, :query-params {}, :content-type "application/x-www-form-urlencoded", :character-encoding nil, :uri "/req", :server-name "localhost", :query-string nil, :body #<HttpInput org.eclipse.jetty.server.HttpInput@38dea1>, :multipart-params {}, :scheme :http, :request-method :post, :session {}}

Defining the URL

The second component of the route is the URL on which the route is served. This can be anything we want, and as long as the request to the URL matches exactly, the route will be invoked. There are, however, two caveats we need to be aware of:

- Routes are tested in order of their declaration, so order matters.
- The trailing slash isn't handled well. Compojure will always strip the trailing slash from the incoming request but won't redirect the user to the URL without the trailing slash. As a result, an HTTP 404: Page Not Found response is returned. So never base anything off a trailing slash, lest ye peril in an ocean of confusion.

Parameter destructuring

In our previous example, we directly referred to the implicit incoming request and passed that request to the function constructing the response. This works, but it's nasty. Nobody ever said, "I love passing around requests and maintaining meaningless code and not leveraging URLs," and if anybody ever did, we don't want to work with them. Thankfully, Compojure has a rather elegant destructuring syntax that's easier to read than Clojure's native destructuring syntax. Let's create a second route that allows us to define a request map key in the URL, then simply prints that value in the response:

    (GET "/req/:val" [val] (str val))

Compojure's destructuring syntax binds HTTP request parameters to variables of the same name.
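For comparison, here is a sketch of the same route written with Clojure's native map destructuring; when a route's binding form is a map rather than a vector, Compojure destructures the request map with it directly:

    ;; Native destructuring: dig :val out of the request's :params map.
    (GET "/req/:val" {{val :val} :params} (str val))

    ;; Compojure's vector shorthand performs the same binding, more readably.
    (GET "/req/:val" [val] (str val))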
In either form, the key :val will be found in the request's :params map; Compojure automatically maps the value of {:params {:val …}} to the symbol val in [val]. In the end, requesting http://localhost:3000/req/holy-moly-molly will simply print holy-moly-molly to the screen.

That's pretty slick, but what if there is a query string? For example, http://localhost:3000/req/holy-moly-molly!?more=ThatsAHotTomalle. We can simply add the query parameter more to the vector, and Compojure will automatically bring it in:

    (GET "/req/:val" [val more] (str val "<br>" more))

Destructuring the request

What happens if we still need access to the entire request? It's natural to think we could do this:

    (GET "/req/:val" [val request] (str val "<br>" request))

However, request will always be nil because it doesn't map back to a parameter key of the same name. In Compojure, we can use the magical :as key:

    (GET "/req/:val" [val :as request] (str val "<br>" request))

This will now result in request being assigned the entire request map.

Destructuring unbound parameters

Finally, we can bind any remaining unbound parameters into another map using &. Take a look at the following example code:

    (GET "/req/:val/:another-val/:and-another"
      [val & remainders] (str val "<br>" remainders))

Saving the file and navigating to http://localhost:3000/req/holy-moly-molly!/what-about/susie-q will render both val and the map with the remaining unbound keys :another-val and :and-another.

Constructing the response

The last argument in the route is the construction of the response. Whatever the third argument resolves to will be the body of our response. For example, in the following route:

    (GET "/req/:val" [val] (str val))

the third argument, (str val), will echo whatever value was passed in on the URL. So far, we've simply been making calls to Clojure's str, but we can just as easily call one of our own functions. Let's add another route to our hipstr.routes.test-routes, and write the following function to construct its response:

    (defn render-request-val
      "Simply returns the value of request-key in request-map,
      if request-key is provided; otherwise returns the request-map.
      If request-key is provided, but not found in the request-map,
      a message indicating as such will be returned."
      [request-map & [request-key]]
      (str (if request-key
             (if-let [result ((keyword request-key) request-map)]
               result
               (str request-key " is not a valid key."))
             request-map)))

    (defroutes test-routes
      (POST "/req" request (render-request-val request))
      ;; no access to the full request map
      (GET "/req/:val" [val] (str val))
      ;; use :as to get access to the full request map
      (GET "/req/:val" [val :as full-req] (str val "<br>" full-req))
      ;; use & to get access to the remainder of unbound symbols
      (GET "/req/:val/:another-val/:and-another" [val & remainders]
        (str val "<br>" remainders))
      ;; use :as to get the full request map, and call our route
      ;; handler function
      (GET "/req/:key" [key :as request]
        (render-request-val request key)))

Now when we navigate to http://localhost:3000/req/server-port, we'll see the value of the :server-port key in the request map… or wait… we should… what's wrong? If this doesn't seem right, it's because it isn't. Why is our /req/:val route getting executed? As stated earlier, the order of routes is important.
Because /req/:val with the GET method is declared earlier, it's the first route to match our request, regardless of whether or not :val is in the HTTP request map's parameters. Routes are matched on URL structure, not on parameter keys. As it stands right now, our /req/:key route will never get matched. We'll have to change it as follows:

    ;; use & to get access to the remainder of unbound symbols
    (GET "/req/:val/:another-val/:and-another" [val & remainders]
      (str val "<br>" remainders))
    ;; giving the route a different URL from /req/:val will ensure its
    ;; execution
    (GET "/req/key/:key" [key :as request]
      (render-request-val request key)))

Now that our /req/key/:key route is structurally unique, it will be matched appropriately. Navigate to http://localhost:3000/req/key/server-port again, and the server-port value is rendered to the screen.

Generating complex responses

What if we want to create more complex responses? How might we go about doing that? The last thing we want to do is hardcode a whole bunch of HTML into a function; it's not 1995 anymore, after all. This is where the Selmer library comes to the rescue.

Summary

In this article, we learned what Compojure is and how its routing library works. We also built our own Compojure routes to handle incoming requests, covering defroutes, the anatomy of a route, parameter destructuring, and how to define the URL.

Testing a UI Using WebDriverJS

Packt · 17 Feb 2015 · 30 min read
In this article by Enrique Amodeo, author of the book Learning Behavior-driven Development with JavaScript, we will look into an advanced concept: how to test a user interface. For this purpose, you will learn the following topics:

- Using WebDriverJS to manipulate a browser and inspect the resulting HTML generated by our UI
- Organizing our UI codebase to make it easily testable
- The right abstraction level for our UI tests

Our strategy for UI testing

There are two traditional strategies for approaching the problem of UI testing: record-and-replay tools and end-to-end testing.

The first approach, record-and-replay, leverages tools capable of recording user activity in the UI and saving it into a script file. This script file can later be executed to perform exactly the same UI manipulation as the user performed and to check whether the results are exactly the same. This approach is not very compatible with BDD, for the following reasons:

- We cannot test-first our UI. To be able to use the UI and record the user activity, we first need to have most of the code of our application in place. This is not a problem in the waterfall approach, where QA and testing are performed after the codification phase is finished. However, in BDD, we aim to document the product features as automated tests, so we should write the tests before or during the coding.
- The resulting test scripts are low-level and totally disconnected from the problem domain. There is no way to use them as live documentation for the requirements of the system.
- The resulting test suite is brittle, and it will stop working whenever we make slight changes, even cosmetic ones, to the UI. The problem is that the tools record the low-level interaction with the system, which depends on technical details of the HTML.

The other classic approach is end-to-end testing, where we test not only the UI layer, but also most of the system, or even the whole of it. To perform the setup of the tests, the most common approach is to substitute the third-party systems with test doubles. Normally, the database is under the control of the development team, so some practitioners use a regular database for the setup. However, we could use an in-memory database or even mock the DAOs. In any case, this approach prompts us to create an integrated test suite where we are testing not only the correctness of the UI, but the business logic as well.

In the context of this discussion, an integrated test is a test that checks several layers of abstraction, or subsystems, in combination. Do not confuse it with the act of testing several classes or functions together.

This approach is not inherently against BDD; for example, we could use Cucumber.js to capture the features of the system and implement Gherkin steps using WebDriver to drive the UI and make assertions. In fact, for most people, BDD refers to exactly this kind of test. However, we will end up writing a lot of test cases, because we need to combine the scenarios from the business logic domain with the ones from the UI domain. Furthermore, in which language should we formulate the tests? If we use the UI language, it may be too low-level to easily describe business concepts. If we use the business domain language, we may not be able to test the important details of the UI, because they are too low-level.
Alternatively, we can even end up with tests that mix UI language with business terminology, so they will be neither focused nor very clear to anyone.

Choosing the right tests for the UI

If we want to test whether the UI works, why should we test the business rules? After all, these are already tested in the BDD test suite of the business logic layer. To decide which tests to write, we should first determine the responsibilities of the UI layer, which are as follows:

- Presenting the information provided by the business layer to the user in a nice way.
- Transforming user interaction into requests for the business layer.
- Controlling the changes in the appearance of the UI components, which includes things such as enabling/disabling controls, highlighting entry fields, showing/hiding UI elements, and so on.
- Orchestration between the UI components. Transferring and adapting information between the UI components, and navigation between pages, fall under this category.

We do not need to write tests about business rules, and we should not assume much about the business layer itself, apart from a loose contract. How should we word our tests? We should use a UI-related language when we talk about what the user sees and does. Words such as fields, buttons, forms, links, click, hover, highlight, enable/disable, or show and hide are relevant in this context. However, we should not go too far; otherwise, our tests will be too brittle. Saying, for example, that the name field should have a pink border is too low-level. The moment the designer decides to use red instead of pink, or changes his mind and decides to change the background color instead of the border, our test will break. We should aim for tests that express the real intention of the user interface; for example, the name field should be highlighted as incorrect.

The testing architecture

At this point, we could write tests relevant for our UI using the following testing architecture:

[Figure: A simple testing architecture for our UI]

We can use WebDriver to issue user gestures to interact with the browser. These user gestures are transformed by the browser into DOM events that are the inputs of our UI logic and will trigger operations on it. We can use WebDriver again to read the resulting HTML in the assertions. We can simply use a test double to impersonate our server, so we can set up our tests easily. This architecture is very simple and sounds like a good plan, but it is not! There are three main problems here:

- UI testing is very slow. Take into account that the boot time and shutdown phase can take 3 seconds on a normal laptop. Each UI interaction using WebDriver can take between 50 and 100 milliseconds, and the latency with the fake server can be an extra 10 milliseconds. This gives us only around 10 tests per second, plus an extra 3 seconds.
- UI tests are complex and difficult to diagnose when they fail. What is failing? The selectors we used to tell WebDriver how to find the relevant elements? Some race condition we were not aware of? A cross-browser issue? Also note that our test is now distributed between two different processes, a fact that always makes debugging more difficult.
- UI tests are inherently brittle. We can try to make them less brittle with best practices, but even then a change in the structure of the HTML code will sometimes break our tests. This is a bad thing, because the UI often changes more frequently than the business layer.
As UI testing is very risky and expensive, we should try to write as few tests that interact with the UI as possible. We can achieve this, without losing testing power, with the following testing architecture:

[Figure: A smarter testing architecture]

We have now split our UI layer into two components: the view and the UI logic. This design aligns with the family of MV* design patterns. In the context of this article, the view corresponds to a passive view, and the UI logic corresponds to the controller or the presenter, in combination with the model. A passive view is usually very hard to test, so in this article we will focus mostly on how to do it. You will often be able to easily separate the passive view from the UI logic, especially if you are using an MV* pattern, such as MVC, MVP, or MVVM.

Most of our tests will be for the UI logic. This is the component that implements the client-side validation, orchestration of UI components, navigation, and so on. It is the UI logic component that has all the rules about how the user can interact with the UI, and hence it needs to maintain some kind of internal state. The UI logic component can be tested completely in memory using standard techniques. We can simply mock the XMLHttpRequest object, or the corresponding object in the framework we are using, and test everything in memory using a single Node.js process. No interaction with the browser and the HTML is needed, so these tests will be blazingly fast and robust.

Then we need to test the view. This is a very thin component with only two responsibilities:

- Manipulating and updating the HTML to present the user with the information whenever it is instructed to do so by the UI logic component
- Listening for HTML events and transforming them into suitable requests for the UI logic component

The view should not have more responsibilities, and it is a stateless component. It simply does not need to store internal state, because it only transforms and transmits information between the HTML and the UI logic. Since it is the only component that interacts with the HTML, it is the only one that needs to be tested using WebDriver. The point of all of this is that the view can be tested with only a bunch of tests that are conceptually simple. Hence, we minimize the number and complexity of the tests that need to interact with the UI.

WebDriverJS

Testing the passive view layer is a technical challenge. We not only need to find a way for our test to inject native events into the browser to simulate user interaction, but we also need to be able to inspect the DOM elements and inject and execute scripts. This was very challenging to do approximately 5 years ago. In fact, it was considered so complex and expensive that some practitioners recommended not testing the passive view at all. After all, this layer is very thin and mostly contains the bindings of the UI to the HTML DOM, so the risk of error is not supposed to be high, especially if we use modern cross-browser frameworks to implement this layer. Nonetheless, the technology has evolved, and nowadays we can do this kind of testing without much fuss if we use the right tools. One of these tools is Selenium 2.0 (also known as WebDriver) and its library for JavaScript, which is WebDriverJS (https://code.google.com/p/selenium/wiki/WebDriverJs). In this book, we will use WebDriverJS, but there are other JavaScript bindings for Selenium 2.0, such as WebDriverIO (http://webdriver.io/). You can use the one you like most or even try both.
The point is that the techniques I will show you here can be applied with any client of WebDriver, or even with other tools that are not WebDriver. Selenium 2.0 is a tool that allows us to make direct calls to a browser automation API. This way, we can simulate native events, we can access the DOM, and we can control the browser. Each browser provides a different API and has its own quirks, but Selenium 2.0 offers us a unified API called the WebDriver API. This allows us to interact with different browsers without changing the code of our tests. As we are accessing the browser directly, we do not need a special server, unless we want to control browsers that are on a different machine. (Actually, due to some technical limitations, this is only true if we test against a Google Chrome or a Firefox browser using WebDriverJS.)

So, basically, the testing architecture for our passive view looks like this:

[Figure: Testing with WebDriverJS]

We can see that we use WebDriverJS for the following:

- Sending native events to manipulate the UI, as if we were the user, during the action phase of our tests
- Inspecting the HTML during the assert phase of our test
- Sending small scripts to set up the test doubles, check them, and invoke the update method of our passive view

Apart from this, we need some extra infrastructure, such as a web server that serves our test HTML page and the components we want to test. As is evident from the diagram, each command of WebDriverJS requires some network traffic to send the appropriate request to the browser automation API, wait for the browser to execute it, and get the result back through the network. This forces the API of WebDriverJS to be asynchronous in order to not block unnecessarily. That is why WebDriverJS has an API designed around promises. Most of the methods will return a promise or an object whose methods return promises. This plays perfectly well with Mocha and Chai.

There is a W3C specification for the WebDriver API. If you want to have a look, just visit https://dvcs.w3.org/hg/webdriver/raw-file/default/webdriver-spec.html. The API of WebDriverJS is a bit complex, and you can find its official documentation at http://selenium.googlecode.com/git/docs/api/javascript/module_selenium-webdriver.html. However, to follow this article, you do not need to read it, since I will now show you the most important API that WebDriverJS offers us.

Finding and interacting with elements

It is very easy to find an HTML element using WebDriverJS; we just need to use either the findElement or the findElements method. Both methods receive a locator object specifying which element or elements to find. The first method will return the first element it finds, or simply fail with an exception if there are no elements matching the locator. The findElements method will return a promise for an array with all the matching elements. If there are no matching elements, the promised array will be empty and no error will be thrown.

How do we specify which elements we want to find? To do so, we need to use a locator object as a parameter. For example, if we would like to find the element whose identifier is order_item1, we could use the following code:

    var By = require('selenium-webdriver').By;

    driver.findElement(By.id('order_item1'));

We need to import the selenium-webdriver module and capture its locator factory object. By convention, we store this locator factory in a variable called By. Later, we will see how we can get a WebDriverJS instance.
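As a preview, here is a minimal sketch of that setup, assuming the selenium-webdriver npm package is installed and a local Chrome with ChromeDriver is available (the browser choice is illustrative):

    // Build a WebDriverJS instance; the driver object used in the
    // examples that follow is obtained roughly like this.
    var webdriver = require('selenium-webdriver');

    var driver = new webdriver.Builder()
        .withCapabilities(webdriver.Capabilities.chrome())
        .build();

    // Shut the browser down at the end of the test run:
    // driver.quit();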
The By.id code above is very expressive, but a bit verbose. There is another version of it:

    driver.findElement({ id: 'order_item1' });

Here, the locator criteria are passed in the form of a plain JSON object. There is no need to use the By object or any factory. Which version is better? Neither; just use the one you like most. In this article, the plain JSON locator will be used. The following are the criteria for finding elements:

- Using the tag name, for example, to locate all the <li> elements in the document:

    driver.findElements(By.tagName('li'));
    driver.findElements({ tagName: 'li' });

- Using the name attribute. It can be handy to locate the input fields. The following code will locate the first element named password:

    driver.findElement(By.name('password'));
    driver.findElement({ name: 'password' });

- Using the class name; for example, the following code will locate the first element that contains a class called item:

    driver.findElement(By.className('item'));
    driver.findElement({ className: 'item' });

- Using any CSS selector that our target browser understands. If the target browser does not understand the selector, it will throw an exception. For example, to find the second item of an order (assuming there is only one order on the page):

    driver.findElement(By.css('.order .item:nth-of-type(2)'));
    driver.findElement({ css: '.order .item:nth-of-type(2)' });

Using only the CSS selector, you can locate any element, and it is the one I recommend. The others can be very handy in specific situations. There are more ways of locating elements, such as linkText, partialLinkText, or xpath, but I seldom use them. Locating elements by their text, as in linkText or partialLinkText, is brittle, because small changes in the wording of the text can break the tests. Also, locating by xpath is not as useful in HTML as using a CSS selector. Obviously, it can be used if the UI is defined as an XML document, but this is very rare nowadays.

In both methods, findElement and findElements, the resulting HTML elements are wrapped as a WebElement object. This object allows us to send an event to that element or inspect its contents. Some of its methods that allow us to manipulate the DOM are as follows:

- clear(): This will do nothing unless WebElement represents an input control. In this case, it will clear its value and then trigger a change event. It returns a promise that will be fulfilled whenever the operation is done.
- sendKeys(text or key, …): This will do nothing unless WebElement is an input control. In this case, it will send the equivalents of keyboard events for the parameters we have passed. It can receive one or more parameters, each a text or key object. If it receives a text, it will transform the text into a sequence of keyboard events. This way, it will simulate a user typing on a keyboard. This is more realistic than simply changing the value property of an input control, since the proper keyDown, keyPress, and keyUp events will be fired. A promise is returned that will be fulfilled when all the key events are issued. For example, to simulate that a user enters some search text in an input field and then presses Enter, we can use the following code:

    var Key = require('selenium-webdriver').Key;

    var searchField = driver.findElement({name: 'searchTxt'});
    searchField.sendKeys('BDD with JS', Key.ENTER);

The webdriver.Key object allows us to specify any key that does not represent a character, such as Enter, the up arrow, Command, Ctrl, Shift, and so on.
We can also use its chord method to represent a combination of several keys pressed at the same time. For example, to simulate Alt + Command + J, use driver.sendKeys(Key.chord(Key.ALT, Key.COMMAND, 'J'));.

- click(): This will issue a click event just in the center of the element. The returned promise will be fulfilled when the event is fired. Sometimes, the center of an element is nonclickable, and an exception is thrown! This can happen, for example, with table rows, since the center of a table row may just be the padding between cells!
- submit(): This will look for the form that contains this element and will issue a submit event.

Apart from sending events to an element, we can inspect its contents with the following methods:

- getId(): This will return a promise with the internal identifier of this element used by WebDriver. Note that this is not the value of the DOM ID property!
- getText(): This will return a promise that will be fulfilled with the visible text inside this element. It will include the text in any child element and will trim the leading and trailing whitespace. Note that, if this element is not displayed or is hidden, the resulting text will be an empty string!
- getInnerHtml() and getOuterHtml(): These will return a promise that will be fulfilled with a string that contains the innerHTML or outerHTML of this element.
- isSelected(): This will return a promise with a Boolean that determines whether the element has either been selected or checked. This method is designed to be used with the <option> elements.
- isEnabled(): This will return a promise with a Boolean that determines whether the element is enabled or not.
- isDisplayed(): This will return a promise with a Boolean that determines whether the element is displayed or not. Here, "displayed" is taken in a broad sense; in general, it means that the user can see the element without resizing the browser. For example, if the element is hidden, has display: none, has no size, or is in an inaccessible part of the document, the returned promise will be fulfilled as false.
- getTagName(): This will return a promise with the tag name of the element.
- getSize(): This will return a promise with the size of the element. The size comes as a JSON object with width and height properties that indicate the width and height in pixels of the bounding box of the element. The bounding box includes padding, margin, and border.
- getLocation(): This will return a promise with the position of the element. The position comes as a JSON object with x and y properties that indicate the coordinates in pixels of the element relative to the page.
- getAttribute(name): This will return a promise with the value of the specified attribute. Note that WebDriver does not distinguish between attributes and properties! If there is neither an attribute nor a property with that name, the promise will be fulfilled as null. If the attribute is a "boolean" HTML attribute (such as checked or disabled), the promise will be evaluated as true only if the attribute is present. If there is both an attribute and a property with the same name, the attribute value will be used. If you really need to be precise about getting an attribute or a property, it is much better to use an injected script to get it.
- getCssValue(cssPropertyName): This will return a promise with a string that represents the computed value of the specified CSS property.
The computed value is the resulting value after the browser has applied all the CSS rules and the style and class attributes. Note that the specific representation of the value depends on the browser; for example, the color property can be returned as red, #ff0000, or rgb(255, 0, 0) depending on the browser. This is not cross-browser, so we should avoid this method in our tests.

- findElement(locator) and findElements(locator): These will return the first element, or all the elements, that are descendants of this element and match the locator.
- isElementPresent(locator): This will return a promise with a Boolean that indicates whether there is at least one descendant element that matches this locator.

As you can see, the WebElement API is pretty simple and allows us to do most of our tests easily. However, what if we need to perform some complex interaction with the UI, such as drag-and-drop?

Complex UI interaction

WebDriverJS allows us to define a complex action gesture in an easy way using the DSL defined in the webdriver.ActionSequence object. This DSL allows us to define any sequence of browser events using the builder pattern. For example, to simulate a drag-and-drop gesture, proceed with the following code:

    var beverageElement = driver.findElement({ id: 'expresso' });
    var orderElement = driver.findElement({ id: 'order' });
    driver.actions()
        .mouseMove(beverageElement)
        .mouseDown()
        .mouseMove(orderElement)
        .mouseUp()
        .perform();

We want to drag an espresso to our order, so we move the mouse to the center of the espresso and press the mouse. Then, we move the mouse, dragging the element, over the order. Finally, we release the mouse button to drop the espresso. We can add as many actions as we want, but the sequence of events will not be executed until we call the perform method. The perform method will return a promise that will be fulfilled when the full sequence is finished. The webdriver.ActionSequence object has the following methods:

- sendKeys(keys...): This sends a sequence of key events, exactly as we saw earlier for the method with the same name in WebElement. The difference is that the keys will be sent to the document instead of a specific element.
- keyUp(key) and keyDown(key): These send the keyUp and keyDown events. Note that these methods only admit the modifier keys: Alt, Ctrl, Shift, command, and meta.
- mouseMove(targetLocation, optionalOffset): This will move the mouse from the current location to the target location. The location can be defined either as a WebElement or as page-relative coordinates in pixels, using a JSON object with x and y properties. If we provide the target location as a WebElement, the mouse will be moved to the center of the element. In this case, we can override this behavior by supplying an extra optional parameter indicating an offset relative to the top-left corner of the element. This could be needed in case the center of the element cannot receive events.
- mouseDown(), click(), doubleClick(), and mouseUp(): These will issue the corresponding mouse events. All of these methods can receive zero, one, or two parameters.
Let's see what they mean with the following examples:

    var Button = require('selenium-webdriver').Button;

    // to emit the event in the center of the expresso element
    driver.actions().mouseDown(expresso).perform();
    // to make a right click in the current position
    driver.actions().click(Button.RIGHT).perform();
    // middle click in the expresso element
    driver.actions().click(expresso, Button.MIDDLE).perform();

The webdriver.Button object defines the three possible buttons of a mouse: LEFT, RIGHT, and MIDDLE. However, note that mouseDown() and mouseUp() only support the LEFT button!

- dragAndDrop(element, location): This is a shortcut for performing a drag-and-drop of the specified element to the specified location. Again, the location can be a WebElement or a page-relative coordinate.

Injecting scripts

We can use WebDriver to execute scripts in the browser and then wait for their results. There are two methods for this: executeScript and executeAsyncScript. Both methods receive a script and an optional list of parameters and send the script and the parameters to the browser to be executed. They return a promise that will be fulfilled with the result of the script, or rejected if the script failed.

An important detail is how the script and its parameters are sent to the browser. For this, they need to be serialized and sent through the network. Once there, they will be deserialized, and the script will be executed inside an auto-executed function that will receive the parameters as arguments. As a result of this, our scripts cannot access any variable in our tests, unless the variables are explicitly sent as parameters. The script is executed in the browser with the window object as its execution context (the value of this). When passing parameters, we need to take into consideration the kind of data that WebDriver can serialize. This data includes the following:

- Booleans, strings, and numbers.
- The null and undefined values. However, note that undefined will be translated as null.
- Any function will be transformed to a string that contains only its body.
- A WebElement object will be received as a DOM element. So, it will not have the methods of WebElement but the standard DOM methods instead. Conversely, if the script results in a DOM element, it will be received as a WebElement in the test.
- Arrays and objects will be converted to arrays and objects whose elements and properties have been converted using the preceding rules.

With this in mind, we could, for example, retrieve the identifier of an element, such as the following one:

    var elementSelector = ".order ul > li";
    driver.executeScript(
        "return document.querySelector(arguments[0]).id;",
        elementSelector
    ).then(function(id) {
      expect(id).to.be.equal('order_item0');
    });

Notice that the script is specified as a string with the code. This can be a bit awkward, so there is an alternative available:

    var elementSelector = ".order ul > li";
    driver.executeScript(function() {
        var selector = arguments[0];
        return document.querySelector(selector).id;
    }, elementSelector).then(function(id) {
      expect(id).to.be.equal('order_item0');
    });

WebDriver will just convert the body of the function to a string and send it to the browser. Since the script is executed in the browser, we cannot access the elementSelector variable directly, and we need to access it through parameters. Unfortunately, we are forced to retrieve the parameters using the arguments pseudo-array, because WebDriver has no way of knowing the name of each argument.
As its name suggests, executeAsyncScript allows us to execute an asynchronous script. In this case, the last argument provided to the script is always a callback that we need to call to signal that the script has finished. The result of the script will be the first argument provided to that callback. If no argument, or undefined, is explicitly provided, then the result will be null. Note that this is not directly compatible with the Node.js callback convention: any extra parameters passed to the callback will be ignored, and there is no way to explicitly signal an error in an asynchronous way. For example, if we want to return the value of an asynchronous DAO, we can proceed with the following code:

    driver.executeAsyncScript(function() {
      var cb = arguments[1],
          userId = arguments[0];
      window.userDAO.findById(userId).then(cb, cb);
    }, 'user1').then(function(userOrError) {
      expect(userOrError).to.be.equal(expectedUser);
    });

Command control flows

All the commands in WebDriverJS are asynchronous and return a promise or a WebElement. How do we execute an ordered sequence of commands? Well, using promises, it could be something like this:

    return driver.findElement({name:'quantity'}).sendKeys('23')
        .then(function() {
          return driver.findElement({name:'add'}).click();
        })
        .then(function() {
          return driver.findElement({css:firstItemSel}).getText();
        })
        .then(function(quantity) {
          expect(quantity).to.be.equal('23');
        });

This works because we wait for each command to finish before issuing the next command. However, it is a bit verbose. Fortunately, with WebDriverJS we can do the following:

    driver.findElement({name:'quantity'}).sendKeys('23');
    driver.findElement({name:'add'}).click();
    return expect(driver.findElement({css:firstItemSel}).getText())
        .to.eventually.be.equal('23');

How can the preceding code work? Whenever we tell WebDriverJS to do something, it simply schedules the requested command in a queue-like structure called the control flow. The point is that each command will not be executed until it reaches the top of the queue. This way, we do not need to explicitly wait for the sendKeys command to be completed before executing the click command. The sendKeys command is scheduled in the control flow before click, so the latter will not be executed until sendKeys is done.

All the commands are scheduled against the same control flow queue that is associated with the WebDriver object. However, we can optionally create several control flows if we want to execute commands in parallel:

    var flow1 = webdriver.promise.createFlow(function() {
      var driver = new webdriver.Builder().build();
      // do something with driver here
    });
    var flow2 = webdriver.promise.createFlow(function() {
      var driver = new webdriver.Builder().build();
      // do something with driver here
    });
    webdriver.promise.fullyResolved([flow1, flow2]).then(function(){
      // Wait for flow1 and flow2 to finish and do something
    });

We need to create each control flow instance manually and, inside each flow, create a separate WebDriver instance. The commands in both flows will be executed in parallel, and we can wait for both of them to be finalized to do something else using fullyResolved. In fact, we can even nest flows, if needed, to create a custom parallel command-execution graph.

Taking screenshots

Sometimes, it is useful to take screenshots of the current screen for debugging purposes. This can be done with the takeScreenshot() method.
This method will return a promise that will be fulfilled with a string that contains a base64-encoded PNG. It is our responsibility to save this string as a PNG file. The following snippet of code will do the trick:

    driver.takeScreenshot()
        .then(function(shot) {
          fs.writeFileSync(fileFullPath, shot, 'base64');
        });

Note that not all browsers support this capability. Read the documentation for the specific browser adapter to see if it is available.

Working with several tabs and frames

WebDriver allows us to control several tabs, or windows, of the same browser. This can be useful if we want to test several pages in parallel or if our test needs to assert or manipulate things in several frames at the same time. This can be done with the switchTo() method, which will return a webdriver.WebDriver.TargetLocator object. This object allows us to change the target of our commands to a specific frame or window. It has the following three main methods:

- frame(nameOrIndex): This will switch to a frame with the specified name or index. It will return a promise that is fulfilled when the focus has been changed to the specified frame. If we specify the frame with a number, this will be interpreted as a zero-based index in the window.frames array.
- window(windowName): This will switch focus to the window with the specified name. The returned promise will be fulfilled when it is done.
- alert(): This will switch the focus to the active alert window. We can dismiss an alert with driver.switchTo().alert().dismiss();.

The promise returned by these methods will be rejected if the specified window, frame, or alert window is not found. To run tests on several tabs at the same time, we must ensure that they do not share any kind of state, or interfere with each other through cookies, local storage, or any other kind of mechanism.

Summary

This article showed us that a good way to test the UI of an application is to split it into two parts and test them separately. One part is the core logic of the UI, which takes responsibility for control logic, models, calls to the server, validations, and so on. This part can be tested in a classic way, using BDD and mocking the server access. No new techniques are needed for this, and the tests will be fast. Here, we can involve non-engineer stakeholders, such as UX designers, users, and so on, to write some nice BDD features using Gherkin and Cucumber.js. The other part is a thin view layer that follows a passive view design. It only updates the HTML when it is asked to, and listens to DOM events to transform them into requests for the core UI logic. This layer has no internal state or control rules; it simply transforms data and manipulates the DOM. We can use WebDriverJS to test the view. This is a good approach, because the most complex part of the UI can be fully test-driven easily, and the parts that are hard and slow to test (the view) do not need many tests, since they are very simple. In this sense, the passive view should not have a state; it should only act as a proxy of the DOM.

Advanced Less Coding

Packt · 09 Feb 2015 · 40 min read
In this article by Bass Jobsen, author of the book Less Web Development Cookbook, you will learn:

- Giving your rules importance with the !important statement
- Using mixins with multiple parameters
- Using duplicate mixin names
- Building a switch leveraging argument matching
- Avoiding individual parameters to leverage the @arguments variable
- Using the @rest... variable to use mixins with a variable number of arguments
- Using mixins as functions
- Passing rulesets to mixins
- Using mixin guards (as an alternative for the if…else statements)
- Building loops leveraging mixin guards
- Applying guards to the CSS selectors
- Creating color contrasts with Less
- Changing the background color dynamically
- Aggregating values under a single property

Giving your rules importance with the !important statement

The !important statement in CSS can be used to get some style rules always applied, no matter where those rules appear in the CSS code. In Less, the !important statement can be applied to mixins and variable declarations too.

Getting ready

You can write the Less code for this recipe with your favorite editor. After that, you can use the command-line lessc compiler to compile the Less code. Finally, you can inspect the compiled CSS code to see where the !important statements appear. To see the real effect of the !important statements, you should compile the Less code client side, with the client-side compiler less.js, and watch the effect in your web browser.

How to do it…

1. Create an important.less file that contains code like the following snippet:

    .mixin() {
      color: red;
      font-size: 2em;
    }
    p {
      &.important {
        .mixin() !important;
      }
      &.unimportant {
        .mixin();
      }
    }

2. After compiling the preceding Less code with the command-line lessc compiler, you will find the following code output produced in the console:

    p.important {
      color: red !important;
      font-size: 2em !important;
    }
    p.unimportant {
      color: red;
      font-size: 2em;
    }

3. You can, for instance, use the following snippet of HTML code to see the effect of the !important statements in your browser:

    <p class="important"
      style="color:green;font-size:4em;">important</p>
    <p class="unimportant"
      style="color:green;font-size:4em;">unimportant</p>

Your HTML document should also include the important.less and less.js files, as follows:

    <link rel="stylesheet/less" type="text/css"
      href="important.less">
    <script src="less.js" type="text/javascript"></script>

Finally, view the result in your browser: the !important rules from the mixin override the inline styles of the first paragraph, while the second paragraph keeps its inline styles.

How it works…

In Less, you can use the !important statement not only for properties, but also with mixins. When !important is set for a certain mixin, all properties of this mixin will be declared with the !important statement. You can easily see this effect when inspecting the properties of the p.important selector; both the color and font-size properties got the !important statement after compiling the code.

There's more…

You should use the !important statements with care, as the only way to overrule an !important statement is to use another !important statement. The !important statement overrules the normal CSS cascading and specificity rules, and even the inline styles. Any incorrect or unnecessary use of the !important statements in your Less (or CSS) code will make your code messy and difficult to maintain. In most cases where you try to overrule a style rule, you should give preference to selectors with a higher specificity and not use the !important statements at all.
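For example, a sketch of overruling a rule through higher specificity rather than !important (the selectors are illustrative):

    p.unimportant { color: red; }
    /* wins over the preceding rule by specificity, no !important needed */
    body p.unimportant { color: blue; }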
With Less v2, you can also use the !important statement when declaring your variables. A declaration with the !important statement can look like the following code:

    @main-color: darkblue !important;

Using mixins with multiple parameters

In this section, you will learn how to use mixins with more than one parameter.

Getting ready

For this recipe, you will have to create a Less file, for instance, mixins.less. You can compile this mixins.less file with the command-line lessc compiler.

How to do it…

1. Create the mixins.less file and write down the following Less code into it:

    .mixin(@color; @background: black;) {
      background-color: @background;
      color: @color;
    }
    div {
      .mixin(red; white;);
    }

2. Compile the mixins.less file by running the following command in the console:

    lessc mixins.less

3. Inspect the CSS code output on the console, and you will find that it looks like the following:

    div {
      background-color: #ffffff;
      color: #ff0000;
    }

How it works…

In Less, parameters are either semicolon-separated or comma-separated. Using a semicolon as the separator is preferred, because the usage of the comma is ambiguous: the comma separator is used not only to separate parameters, but also to define a CSV list, which can be an argument itself.

The mixin in this recipe accepts two arguments. The first parameter sets the @color variable, while the second parameter sets the @background variable and has a default value of black. In the argument list, default values are defined by writing a colon behind the variable's name, followed by the value. Parameters with a default value are optional when calling the mixin, so the mixin in this recipe can also be called with the following line of code:

    .mixin(red);

Because the second argument has a default value of black, the .mixin(red); call also matches the .mixin(@color; @background: black){} mixin, as described in the Building a switch leveraging argument matching recipe.

Only variables set as parameters of a mixin are set inside the scope of the mixin. You can see this when compiling the following Less code:

    .mixin(@color: blue){
      color2: @color;
    }
    @color: red;
    div {
      color1: @color;
      .mixin;
    }

The preceding Less code compiles into the following CSS code:

    div {
      color1: #ff0000;
      color2: #0000ff;
    }

As you can see in the preceding example, setting @color inside the mixin to its default value does not influence the value of @color assigned in the main scope. So lazy loading is applied only to variables inside the same scope; nevertheless, you should note that variables assigned in a mixin will leak into the caller. The leaking of variables can be used to use mixins as functions, as described in the Using mixins as functions recipe.

There's more…

Consider the mixin definition in the following Less code:

    .mixin(@font-family: "Helvetica Neue", Helvetica, Arial,
      sans-serif;) {
      font-family: @font-family;
    }

The semicolon added at the end of the list prevents the fonts after the "Helvetica Neue" font name in the CSV list from being read as separate arguments for this mixin. If the argument list contains any semicolon, the Less compiler will use semicolons as the separator. In the CSS3 specification, the border and background shorthand properties, among others, accept CSV. Also, note that the Less compiler allows you to use named parameters when calling mixins.
This can be seen in the following Less code, which uses the @color variable as a named parameter:

    .mixin(@width: 50px; @color: yellow) {
      width: @width;
      color: @color;
    }
    span {
      .mixin(@color: green);
    }

The preceding Less code will compile into the following CSS code:

    span {
      width: 50px;
      color: #008000;
    }

Note that in the preceding code, #008000 is the hexadecimal representation of the color green. When using named parameters, their order does not matter.

Using duplicate mixin names

When your Less code contains one or more mixins with the same name, the Less compiler compiles them all into the CSS code. If the mixins have parameters (see the Building a switch leveraging argument matching recipe), the number of parameters will also be matched.

Getting ready

Use your favorite text editor to create and edit the Less files used in this recipe.

How to do it…

1. Create a file called mixins.less that contains the following Less code:

    .mixin(){
      height: 50px;
    }
    .mixin(@color) {
      color: @color;
    }
    .mixin(@width) {
      color: green;
      width: @width;
    }
    .mixin(@color; @width) {
      color: @color;
      width: @width;
    }
    .selector-1 {
      .mixin(red);
    }
    .selector-2 {
      .mixin(red; 500px);
    }

2. Compile the Less code from step 1 by running the following command in the console:

    lessc mixins.less

3. After running the command from the previous step, you will find the following CSS code output on the console:

    .selector-1 {
      color: #ff0000;
      color: green;
      width: #ff0000;
    }
    .selector-2 {
      color: #ff0000;
      width: 500px;
    }

How it works…

The .selector-1 selector contains the .mixin(red); call. This call does not match the .mixin(){} mixin, as the number of arguments does not match. On the other hand, both .mixin(@color){} and .mixin(@width){} match the call. For this reason, both of these mixins are compiled into the CSS code. The .mixin(red; 500px); call inside the .selector-2 selector matches only the .mixin(@color; @width){} mixin, so all other mixins with the same .mixin name are ignored by the compiler when building the .selector-2 selector.

The compiled CSS code for the .selector-1 selector also contains the width: #ff0000; property value, as the .mixin(@width){} mixin matches the call too. Setting the width property to a color value makes no sense in CSS, but the Less compiler does not check for this kind of error. In this recipe, you could prevent it by rewriting the .mixin(@width){} mixin as follows: .mixin(@width) when (ispixel(@width)){}.

There's more…

Maybe you have noticed that the .selector-1 selector contains two color properties. The Less compiler does not remove duplicate properties unless the value is also the same. The CSS code sometimes should contain duplicate properties in order to provide a fallback for older browsers.

Building a switch leveraging argument matching

A Less mixin is compiled into the final CSS code only when the number of arguments of the caller and the mixin match. This feature of Less can be used to build switches. Switches enable you to change the behavior of a mixin conditionally. In this recipe, you will create a mixin, or better yet, three mixins with the same name.

Getting ready

Use the command-line lessc compiler to evaluate the effect of this mixin. The compiler will output the final CSS to the console. You can use your favorite text editor to edit the Less code. This recipe makes use of browser-vendor prefixes, such as the -ms-transform prefix. CSS3 introduced vendor-specific rules, which offer you the possibility to write some additional CSS, applicable to only one browser.
Building a switch leveraging argument matching

A Less mixin will compile into the final CSS code only when the number of arguments of the caller and the mixin match. This feature of Less can be used to build switches. Switches enable you to change the behavior of a mixin conditionally. In this recipe, you will create a mixin, or better yet, three mixins with the same name.

Getting ready

Use the command-line lessc compiler to evaluate the effect of this mixin. The compiler will output the final CSS to the console. You can use your favorite text editor to edit the Less code. This recipe makes use of browser-vendor prefixes, such as the -ms-transform prefix. CSS3 introduced vendor-specific rules, which offer you the possibility to write some additional CSS, applicable to only one browser. These rules allow browsers to implement proprietary CSS properties that would otherwise have no working standard (and might never actually become the standard). To find out which prefixes should be used for a certain property, you can consult the Can I use database (available at http://caniuse.com/).

How to do it…

1. Create a switch.less Less file, and write the following Less code into it:

@browserversion: ie9;
.mixin(ie9; @degrees){
  transform:rotate(@degrees);
  -ms-transform:rotate(@degrees);
  -webkit-transform:rotate(@degrees);
}
.mixin(ie10; @degrees){
  transform:rotate(@degrees);
  -webkit-transform:rotate(@degrees);
}
.mixin(@_; @degrees){
  transform:rotate(@degrees);
}
div {
  .mixin(@browserversion; 70deg);
}

2. Compile the Less code from step 1 by running the following command in the console:

lessc switch.less

3. Inspect the compiled CSS code that has been output to the console, and you will find that it looks like the following code:

div {
  -ms-transform: rotate(70deg);
  -webkit-transform: rotate(70deg);
  transform: rotate(70deg);
}

4. Finally, run the following command and you will find that the compiled CSS will indeed differ from that of step 2:

lessc --modify-var="browserversion=ie10" switch.less

Now the compiled CSS code will look like the following code snippet:

div {
  -webkit-transform: rotate(70deg);
  transform: rotate(70deg);
}

How it works…

The switch in this recipe is the @browserversion variable, which can easily be changed just before compiling your code. Instead of changing your code, you can also set the --modify-var option of the compiler. Depending on the value of the @browserversion variable, the mixins that match will be compiled, and the other mixins will be ignored by the compiler. The .mixin(ie10; @degrees){} mixin matches the .mixin(@browserversion; 70deg); call only when the value of the @browserversion variable is equal to ie10. Note that the first ie10 argument of the mixin is used only for matching (argument = ie10) and does not assign any value. You will note that the .mixin(@_; @degrees){} mixin matches each call no matter what the value of the @browserversion variable is. The .mixin(ie9; 70deg); call also compiles the .mixin(@_; @degrees){} mixin. Although this should result in the transform: rotate(70deg); property being output twice, you will find it only once. Since the property got exactly the same value twice, the compiler outputs the property only once.

There's more…

Not only switches, but also mixin guards, as described in the Using mixin guards (as an alternative for the if…else statements) recipe, can be used to set some properties conditionally. Current versions of Less also support JavaScript evaluation; JavaScript code put between backquotes will be evaluated by the compiler, as can be seen in the following Less code example:

@string: "example in lower case";
p {
  &:after {
    content: "`@{string}.toUpperCase()`";
  }
}

The preceding code will be compiled into CSS, as follows:

p:after {
  content: "EXAMPLE IN LOWER CASE";
}

When using client-side compiling, JavaScript evaluation can also be used to get some information from the browser environment, such as the screen width (screen.width), but as mentioned already, you should not use client-side compiling for production environments. Because you cannot be sure that future versions of Less will still support JavaScript evaluation, and alternative compilers not written in JavaScript cannot evaluate the JavaScript code, you should always try to write your Less code without JavaScript.
Avoiding individual parameters to leverage the @arguments variable

In Less code, the @arguments variable has a special meaning inside mixins. The @arguments variable contains all arguments passed to the mixin. In this recipe, you will use the @arguments variable together with the CSS url() function to set a background image for a selector.

Getting ready

You can inspect the compiled CSS code in this recipe after compiling the Less code with the command-line lessc compiler. Alternatively, you can inspect the results in your browser using the client-side less.js compiler. When inspecting the result in your browser, you will also need an example image that can be used as a background image. Use your favorite text editor to create and edit the Less files used in this recipe.

How to do it…

1. Create a background.less file that contains the following Less code:

.background(@color; @image; @repeat: no-repeat; @position: top right;) {
  background: @arguments;
}
div {
  .background(#000; url("./images/bg.png"));
  width:300px;
  height:300px;
}

2. Finally, inspect the compiled CSS code, and you will find that it looks like the following code snippet:

div {
  background: #000000 url("./images/bg.png") no-repeat top right;
  width: 300px;
  height: 300px;
}

How it works…

The four parameters of the .background() mixin are assigned as a space-separated list to the @arguments variable. After that, the @arguments variable can be used to set the background property. Other CSS properties also accept space-separated lists, for example, the margin and padding properties. Note that the @arguments variable does not contain only the parameters that have been set explicitly by the caller, but also the parameters set by their default values. You can easily see this when inspecting the compiled CSS code of this recipe. The .background(#000; url("./images/bg.png")); caller doesn't set the @repeat or @position argument, but you will find their values in the compiled CSS code.

Using the @rest... variable to use mixins with a variable number of arguments

As you can also see in the Using mixins with multiple parameters and Using duplicate mixin names recipes, only matching mixins are compiled into the final CSS code. In some situations, you don't know the number of parameters, or you want to use mixins for some style rules no matter the number of parameters. In these situations, you can use the special ... syntax or the @rest... variable to create mixins that match independently of the number of parameters.

Getting ready

You will have to create a file called rest.less, and this file can be compiled with the command-line lessc compiler. You can edit the Less code with your favorite editor.

How to do it…

1. Create a file called rest.less that contains the following Less code:

.mixin(@a...) {
  .set(@a) when (iscolor(@a)) {
    color: @a;
  }
  .set(@a) when (length(@a) = 2) {
    margin: @a;
  }
  .set(@a);
}
p{
  .mixin(red);
}
p {
  .mixin(2px;4px);
}

2. Compile the rest.less file from step 1 using the following command in the console:

lessc rest.less

3. Inspect the CSS code output to the console, which will look like the following code:

p {
  color: #ff0000;
}
p {
  margin: 2px 4px;
}

How it works…

The special ... syntax (three dots) can be used as an argument for a mixin. Mixins with the ... syntax in their argument list match any number of arguments. When you put a variable name starting with an @ in front of the ... syntax, all parameters are assigned to that variable. You will find a list of examples of mixins that use the special ...
syntax, as follows:

.mixin(@a; ...){}: This mixin matches 1-N arguments
.mixin(...){}: This mixin matches 0-N arguments; note that mixin() without any argument matches only 0 arguments
.mixin(@a: 1; @rest...){}: This mixin matches 0-N arguments; note that the first argument is assigned to the @a variable, and all other arguments are assigned as a space-separated list to @rest

Because the @rest... variable contains a space-separated list, you can use the built-in Less list functions on it.

Using mixins as functions

People who are used to functional programming expect a mixin to change or return a value. In this recipe, you will learn to use mixins as functions that return a value. The value of the width property inside the div.small and div.big selectors will be set to the length of the longest side of a right-angled triangle, calculated from the lengths of the two shortest sides of this triangle using the Pythagorean theorem.

Getting ready

The best and easiest way to inspect the results of this recipe is to compile the Less code with the command-line lessc compiler. You can edit the Less code with your favorite editor.

How to do it…

1. Create a file called pythagoras.less that contains the following Less code:

.longestSide(@a,@b) {
  @length: sqrt(pow(@a,2) + pow(@b,2));
}
div {
  &.small {
    .longestSide(3,4);
    width: @length;
  }
  &.big {
    .longestSide(6,7);
    width: @length;
  }
}

2. Compile the pythagoras.less file from step 1 using the following command in the console:

lessc pythagoras.less

3. Inspect the CSS code output on the console after compilation and you will see that it looks like the following code snippet:

div.small {
  width: 5;
}
div.big {
  width: 9.21954446;
}

How it works…

Variables set inside a mixin become available inside the scope of the caller. This specific behavior of the Less compiler is used in this recipe to set the @length variable and to make it available in the scope of the div.small and div.big selectors, the callers. As you can see, you can use the mixin in this recipe more than once. With every call, a new scope is created and both selectors get their own value of @length. Also, note that variables set inside a mixin do not overwrite variables with the same name that are set in the caller itself. Take, for instance, the following code:

.mixin() {
  @variable: 1;
}
.selector {
  @variable: 2;
  .mixin;
  property: @variable;
}

The preceding code will compile into the CSS code, as follows:

.selector {
  property: 2;
}

There's more…

Note that variables won't leak from the mixins to the caller in the following two situations:

Inside the scope of the caller, a variable with the same name has already been defined (lazy loading will be applied)
The variable has been previously defined by another mixin call (lazy loading will not be applied)

Passing rulesets to mixins

Since Version 1.7, Less allows you to pass complete rulesets as arguments to mixins. Rulesets, including Less code, can be assigned to variables and passed into mixins, which also allows you to wrap blocks of CSS code defined inside mixins. In this recipe, you will learn how to do this.

Getting ready

For this recipe, you will have to create a Less file called keyframes.less, for instance. You can compile this keyframes.less file with the command-line lessc compiler. Finally, inspect the CSS code output to the console.
How to do it…

1. Create the keyframes.less file, and write the following Less code into it:

// Keyframes
.keyframe(@name; @rules) {
  @-webkit-keyframes @name {
    @rules();
  }
  @-o-keyframes @name {
    @rules();
  }
  @keyframes @name {
    @rules();
  }
}
.keyframe(progress-bar-stripes; {
  from { background-position: 40px 0; }
  to   { background-position: 0 0; }
});

2. Compile the keyframes.less file by running the following command in the console:

lessc keyframes.less

3. Inspect the CSS code output on the console and you will find that it looks like the following code:

@-webkit-keyframes progress-bar-stripes {
  from {
    background-position: 40px 0;
  }
  to {
    background-position: 0 0;
  }
}
@-o-keyframes progress-bar-stripes {
  from {
    background-position: 40px 0;
  }
  to {
    background-position: 0 0;
  }
}
@keyframes progress-bar-stripes {
  from {
    background-position: 40px 0;
  }
  to {
    background-position: 0 0;
  }
}

How it works…

Rulesets wrapped between curly brackets are passed as arguments to the mixin. A mixin's arguments are assigned to a (local) variable. Because the ruleset is assigned to the @rules variable, you can call @rules(); to mix the ruleset in. Note that the passed rulesets can also contain Less code, such as built-in functions. You can see this by compiling the following Less code:

.mixin(@color; @rules) {
  @othercolor: green;
  @media (print) {
    @rules();
  }
}

p {
  .mixin(red; {color: lighten(@othercolor,20%); background-color:darken(@color,20%);})
}

The preceding Less code will compile into the following CSS code:

@media (print) {
  p {
    color: #00e600;
    background-color: #990000;
  }
}

A group of CSS properties, nested rulesets, or media declarations stored in a variable is called a detached ruleset. Less offers support for detached rulesets since Version 1.7.

There's more…

As you could see in the last example in the previous section, rulesets passed as arguments can be wrapped in @media declarations too. This enables you to create mixins that, for instance, wrap any passed ruleset into a @media declaration or class. Consider the example Less code shown here:

.smallscreens-and-olderbrowsers(@rules) {
  .lt-ie9 & {
    @rules();
  }
  @media (min-width:768px) {
    @rules();
  }
}
nav {
  float: left;
  width: 20%;
  .smallscreens-and-olderbrowsers({
    float: none;
    width:100%;
  });
}

The preceding Less code will compile into the CSS code, as follows:

nav {
  float: left;
  width: 20%;
}
.lt-ie9 nav {
  float: none;
  width: 100%;
}
@media (min-width: 768px) {
  nav {
    float: none;
    width: 100%;
  }
}

The style rules wrapped in the .lt-ie9 class can, for instance, be used with Paul Irish's <html> conditional classes technique or Modernizr. Now you can call the .smallscreens-and-olderbrowsers(){} mixin anywhere in your code and pass any ruleset to it. All passed rulesets get wrapped in the .lt-ie9 class or the @media (min-width: 768px) declaration. When your requirements change, you will possibly have to change only these wrappers, and only once.

Using mixin guards (as an alternative for the if…else statements)

Most programmers are used to and familiar with the if…else statements in their code. Less does not have these if…else statements. Less tries to follow the declarative nature of CSS when possible and for that reason uses guards for matching expressions. In Less, conditional execution has been implemented with guarded mixins. Guarded mixins use the same logical and comparison operators as the @media feature in CSS does.
Getting ready

You can compile the Less code in this recipe with the command-line lessc compiler. Also, check the compiler options; you can list them by running the lessc command in the console without any argument. In this recipe, you will have to use the --modify-var option.

How to do it…

1. Create a Less file named guards.less, which contains the following Less code:

@color: white;
.mixin(@color) when (luma(@color) >= 50%) {
  color: black;
}
.mixin(@color) when (luma(@color) < 50%) {
  color: white;
}

p {
  .mixin(@color);
}

2. Compile the Less code in the guards.less file using the command-line lessc compiler with the following command entered in the console:

lessc guards.less

3. Inspect the output written to the console, which will look like the following code:

p {
  color: black;
}

4. Compile the Less code with different values set for the @color variable and see how the output changes. You can use the command as follows:

lessc --modify-var="color=green" guards.less

The preceding command will produce the following CSS code:

p {
  color: white;
}

Now, refer to the following command:

lessc --modify-var="color=lightgreen" guards.less

With the color set to light green, it will again produce the following CSS code:

p {
  color: black;
}

How it works…

The use of guards to build an if…else construct can easily be compared with the switch expression found in programming languages such as PHP, C#, and pretty much any other object-oriented programming language. Guards are written with the when keyword followed by one or more conditions. When the condition(s) evaluate true, the code will be mixed in. Also note that the arguments should match, as described in the Building a switch leveraging argument matching recipe, before the mixin gets compiled. The syntax and logic of guards is the same as that of the CSS @media feature.

A condition can contain the following comparison operators: >, >=, =, =<, and <. Additionally, the keyword true is the only value that evaluates as true. Two or more conditions can be combined with the and keyword, which is equivalent to the logical and operator, or separated with a comma, which acts as the logical or operator. The following code will show you an example of the combined conditions:

.mixin(@a; @color) when (@a<10) and (luma(@color) >= 50%) { }

The following code contains the not keyword that can be used to negate conditions:

.mixin(@a; @color) when not (luma(@color) >= 50%) { }

There's more…

Inside the guard conditions, (global) variables can also be compared. The following Less code example shows you how to use variables inside guards:

@a: 10;
.mixin() when (@a >= 10) {}

The preceding code will also enable you to compile different CSS versions from the same code base when using the modify-var option of the compiler. The effect of the guarded mixin described in the preceding code will be very similar to the mixins built in the Building a switch leveraging argument matching recipe. Note that in the preceding example, variables in the mixin's scope overwrite variables from the global scope, as can be seen when compiling the following code:

@a: 10;
.mixin(@a) when (@a < 10) {property: @a;}
selector {
  .mixin(5);
}

The preceding Less code will compile into the following CSS code:

selector {
  property: 5;
}

When you compare guarded mixins with the if…else constructs or switch expressions in other programming languages, you will also need a way to create a conditional for the default situations. The built-in Less default() function can be used to create such a default conditional, which is functionally equal to the else statement in the if…else constructs or the default statement in the switch expressions. The default() function returns true when no other mixin matches (matching also takes the guards into account) and can be evaluated as the guard condition, as the sketch after this paragraph shows.
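A minimal sketch of the default() function (the .size mixin name and its values are just illustrative): the last mixin matches only when neither of the guarded variants does:

.size(@w) when (ispixel(@w))      { width: @w; }
.size(@w) when (ispercentage(@w)) { width: @w; max-width: 100%; }
.size(@w) when (default())        { width: auto; }

.box   { .size(100px); }  // matches the first mixin
.fluid { .size(auto); }   // no guard matches, so default() applies

The preceding Less code compiles into the following CSS code:

.box {
  width: 100px;
}
.fluid {
  width: auto;
}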
Building loops leveraging mixin guards

Mixin guards, as described, among others, in the Using mixin guards (as an alternative for the if…else statements) recipe, can also be used to dynamically build a set of CSS classes. In this recipe, you will learn how to do this.

Getting ready

You can use your favorite editor to create the Less code in this recipe.

How to do it…

1. Create a shadesofblue.less Less file, and write the following Less code into it:

.shadesofblue(@number; @blue:100%) when (@number > 0) {
  .shadesofblue(@number - 1, @blue - 10%);
  @classname: e(%(".color-%a",@number));
  @{classname} {
    background-color: rgb(0, 0, @blue);
    height:30px;
  }
}
.shadesofblue(10);

2. You can, for instance, use the following snippet of HTML code to see the effect of the compiled Less code from the preceding step:

<div class="color-1"></div>
<div class="color-2"></div>
<div class="color-3"></div>
<div class="color-4"></div>
<div class="color-5"></div>
<div class="color-6"></div>
<div class="color-7"></div>
<div class="color-8"></div>
<div class="color-9"></div>
<div class="color-10"></div>

Your HTML document should also include the shadesofblue.less and less.js files, as follows:

<link rel="stylesheet/less" type="text/css" href="shadesofblue.less">
<script src="less.js" type="text/javascript"></script>

3. Finally, the result will be ten bars in shades of blue, running from very dark blue at the top to bright blue at the bottom.

How it works…

The CSS classes in this recipe are built with recursion. The recursion here is done by the .shadesofblue(){} mixin calling itself with different parameters. The loop starts with the .shadesofblue(10); call. When the compiler reaches the .shadesofblue(@number - 1, @blue - 10%); line of code, it stops the current code and starts compiling the .shadesofblue(){} mixin again with @number decreased by one and @blue decreased by 10 percent. The process will be repeated till @number < 1. Finally, when the @number variable becomes equal to 0, the compiler tries to call the .shadesofblue(0; 0%); mixin, which does not match the when (@number > 0) guard. When no matching mixin is found, the compiler stops, compiles the rest of the code, and writes the first class into the CSS code, as follows:

.color-1 {
  background-color: #00001a;
  height: 30px;
}

Then, the compiler starts again where it stopped before, at the .shadesofblue(2; 20%); call, and writes the next class into the CSS code, as follows:

.color-2 {
  background-color: #000033;
  height: 30px;
}

This process repeats up to the tenth class.

There's more…

When inspecting the compiled CSS code, you will find that the height property has been repeated ten times, too. This kind of code repetition can be prevented using the :extend Less pseudo class. The following code will show you an example of the usage of the :extend Less pseudo class:

.baseheight {
  height: 30px;
}
.mixin(@i: 2) when(@i > 0) {
  .mixin(@i - 1);
  .class@{i} {
    width: 10*@i;
    &:extend(.baseheight);
  }
}
.mixin();

Alternatively, in this situation, you can create a more generic selector, which sets the height property as follows:

div[class^="color-"] {
  height: 30px;
}

Recursive loops are also useful when iterating over a list of values, as the sketch after this paragraph shows.
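A minimal sketch of such an iteration, assuming a hypothetical @palette list: the built-in length() and extract() functions read the list, while a guarded mixin provides the loop:

@palette: crimson teal gold;

.make-swatches(@i: 1) when (@i =< length(@palette)) {
  @name: extract(@palette, @i);  // pick the @i-th item from the list
  .swatch-@{i} {
    background-color: @name;
  }
  .make-swatches(@i + 1);        // recurse until the list is exhausted
}
.make-swatches();

The preceding Less code compiles into the .swatch-1, .swatch-2, and .swatch-3 classes with the crimson, teal, and gold background colors, respectively.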
Max Mikhailov, one of the members of the Less core team, wrote a wrapper mixin for recursive Less loops, which can be found at https://github.com/seven-phases-max. This wrapper contains the .for and .-each mixins that can be used to build loops. The following code will show you how to write a nested loop:

@import "for";
#nested-loops {
  .for(3, 1); .-each(@i) {
    .for(0, 2); .-each(@j) {
      x: (10 * @i + @j);
    }
  }
}

The preceding Less code will produce the following CSS code:

#nested-loops {
  x: 30;
  x: 31;
  x: 32;
  x: 20;
  x: 21;
  x: 22;
  x: 10;
  x: 11;
  x: 12;
}

Finally, you can use a list of mixins as your data provider in some situations. The following Less code gives an example of using mixins to avoid recursion:

.data() {
  .-("dark"; black);
  .-("light"; white);
  .-("accent"; pink);
}

div {
  .data();
  .-(@class-name; @color){
    @class: e(@class-name);
    &.@{class} {
      color: @color;
    }
  }
}

The preceding Less code will compile into the CSS code, as follows:

div.dark {
  color: black;
}
div.light {
  color: white;
}
div.accent {
  color: pink;
}

Applying guards to the CSS selectors

Since Version 1.5 of Less, guards can be applied not only to mixins, but also to CSS selectors directly. This recipe will show you how to apply guards to CSS selectors to create conditional rulesets for these selectors.

Getting ready

The easiest way to inspect the effect of the guarded selector in this recipe is to use the command-line lessc compiler.

How to do it…

1. Create a Less file named darkbutton.less that contains the following code:

@dark: true;
button when (@dark){
  background-color: black;
  color: white;
}

2. Compile the darkbutton.less file with the command-line lessc compiler by entering the following command into the console:

lessc darkbutton.less

3. Inspect the CSS code output on the console, which will look like the following code:

button {
  background-color: black;
  color: white;
}

4. Now try the following command and you will find that the button selector is not compiled into the CSS code:

lessc --modify-var="dark=false" darkbutton.less

How it works…

The guarded CSS selectors are ignored by the compiler, and so not compiled into the CSS code, when the guard evaluates false. Guards for CSS selectors and mixins leverage the same comparison and logical operators. You can read in more detail how to create guards with these operators in the Using mixin guards (as an alternative for the if…else statements) recipe.

There's more…

Note that the true keyword is the only value that evaluates true. So the following command, which sets @dark equal to 1, will not generate the button selector as the guard evaluates false:

lessc --modify-var="dark=1" darkbutton.less

The following Less code will give you another example of applying a guard to a selector:

@width: 700px;
div when (@width >= 600px ){
  border: 1px solid black;
}

The preceding code will output the following CSS code:

div {
  border: 1px solid black;
}

On the other hand, nothing will be output when setting @width to a value smaller than 600 pixels. You can also rewrite the preceding code with the & feature referencing the selector, as follows:

@width: 700px;
div {
  & when (@width >= 600px ){
    border: 1px solid black;
  }
}

Although the CSS code produced by the latter code does not differ from that of the first, it enables you to add more properties without the need to repeat the selector.
You can also wrap the code in a mixin, as follows:

.conditional-border(@width: 700px) {
  & when (@width >= 600px ){
    border: 1px solid black;
  }
  width: @width;
}

Creating color contrasts with Less

Color contrasts play an important role in the first impression of your website or web application. Color contrasts are also important for web accessibility. Using high contrast between background and text will help the visually disabled, color blind, and even people with dyslexia to read your content more easily. The contrast() function returns a light (white by default) or dark (black by default) color depending on the input color. The contrast function can help you write dynamic Less code that always outputs the CSS styles that create enough contrast between the background and text colors. Setting your text color to white or black depending on the background color enables you to meet the highest accessibility guidelines for every color. A sample can be found at http://www.msfw.com/accessibility/tools/contrastratiocalculator.aspx, which shows you that either black or white always gives enough color contrast. When you use Less to create a set of buttons, for instance, you don't want some buttons with white text while others have black text. In this recipe, you solve this situation by adding a stroke to the button text (a text shadow) when the contrast ratio between the button background and the button text color is too low to meet your requirements.

Getting ready

You can inspect the results of this recipe in your browser using the client-side less.js compiler. You will have to create some HTML and Less code, and you can use your favorite editor to do this. You will have to create a file structure that keeps the contraststrokes.less file, the less.min.js compiler, and an index.html file together in one folder.

How to do it…

1. Create a Less file named contraststrokes.less, and write the following Less code into it:

@safe: green;
@danger: red;
@warning: orange;
@buttonTextColor: white;
@ContrastRatio: 7; //AAA, small texts

.setcontrast(@backgroundcolor) when (luma(@backgroundcolor) =< luma(@buttonTextColor)) and (((luma(@buttonTextColor)+5)/(luma(@backgroundcolor)+5)) < @ContrastRatio) {
  color:@buttonTextColor;
  text-shadow: 0 0 2px black;
}
.setcontrast(@backgroundcolor) when (luma(@backgroundcolor) =< luma(@buttonTextColor)) and (((luma(@buttonTextColor)+5)/(luma(@backgroundcolor)+5)) >= @ContrastRatio) {
  color:@buttonTextColor;
}
.setcontrast(@backgroundcolor) when (luma(@backgroundcolor) >= luma(@buttonTextColor)) and (((luma(@backgroundcolor)+5)/(luma(@buttonTextColor)+5)) < @ContrastRatio) {
  color:@buttonTextColor;
  text-shadow: 0 0 2px white;
}
.setcontrast(@backgroundcolor) when (luma(@backgroundcolor) >= luma(@buttonTextColor)) and (((luma(@backgroundcolor)+5)/(luma(@buttonTextColor)+5)) >= @ContrastRatio) {
  color:@buttonTextColor;
}

button {
  padding:10px;
  border-radius:10px;
  color: @buttonTextColor;
  width:200px;
}

.safe {
  .setcontrast(@safe);
  background-color: @safe;
}

.danger {
  .setcontrast(@danger);
  background-color: @danger;
}

.warning {
  .setcontrast(@warning);
  background-color: @warning;
}

2. Create an HTML file, and save this file as index.html.
Write the following HTML code into this index.html file:

<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <title>High contrast buttons</title>
  <link rel="stylesheet/less" type="text/css" href="contraststrokes.less">
  <script src="less.min.js" type="text/javascript"></script>
</head>
<body>
  <button style="background-color:green;">safe</button>
  <button class="safe">safe</button><br>
  <button style="background-color:red;">danger</button>
  <button class="danger">danger</button><br>
  <button style="background-color:orange;">warning</button>
  <button class="warning">warning</button>
</body>
</html>

3. Now load the index.html file from step 2 in your browser. When all has gone well, you will see the original colored buttons on the left-hand side and the high-contrast buttons on the right-hand side.

How it works…

The main purpose of this recipe is to show you how to write dynamic code based on the color contrast ratio. Web Content Accessibility Guidelines (WCAG) 2.0 covers a wide range of recommendations to make web content more accessible. It defines the following three conformance levels:

Conformance Level A: In this level, all Level A success criteria are satisfied
Conformance Level AA: In this level, all Level A and AA success criteria are satisfied
Conformance Level AAA: In this level, all Level A, AA, and AAA success criteria are satisfied

If you focus only on the color contrast aspect, you will find the following paragraphs in the WCAG 2.0 guidelines:

1.4.1 Use of Color: Color is not used as the only visual means of conveying information, indicating an action, prompting a response, or distinguishing a visual element. (Level A)
1.4.3 Contrast (Minimum): The visual presentation of text and images of text has a contrast ratio of at least 4.5:1 (Level AA)
1.4.6 Contrast (Enhanced): The visual presentation of text and images of text has a contrast ratio of at least 7:1 (Level AAA)

The contrast ratio can be calculated with a formula that can be found at http://www.w3.org/TR/WCAG20/#contrast-ratiodef:

(L1 + 0.05) / (L2 + 0.05)

In the preceding formula, L1 is the relative luminance of the lighter of the colors, and L2 is the relative luminance of the darker of the colors. In Less, the relative luminance of a color can be found with the built-in luma() function. Note that luma() returns a percentage between 0% and 100% rather than a value between 0 and 1, which is why the 0.05 offset of the formula appears as 5 in the guards of this recipe and in the sketch below.
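As a minimal sketch of how this formula translates into Less (the .contrast-ratio mixin name is illustrative, and it assumes the caller passes the lighter color first), a mixin used as a function can expose the ratio to its caller, which can then compare it against 4.5 (AA) or 7 (AAA) inside a guard, exactly as the mixins in this recipe do:

// Returns @ratio in the caller's scope; pass the lighter color first.
// luma() returns percentages, so the 0.05 offset becomes 5.
.contrast-ratio(@light; @dark) {
  @ratio: ((luma(@light) + 5) / (luma(@dark) + 5));
}

button.sample {
  .contrast-ratio(white; green);
  & when (@ratio >= 7) { color: white; }                              // AAA is met
  & when (@ratio < 7)  { color: white; text-shadow: 0 0 2px black; }  // add a stroke
}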
The Less code of this recipe contains the four guarded .setcontrast(){} mixins. The guard conditions, such as (luma(@backgroundcolor) =< luma(@buttonTextColor)), are used to find which of the @backgroundcolor and @buttonTextColor colors is the lighter one. Then the (((luma({the lighter color})+5)/(luma({the darker color})+5)) < @ContrastRatio) condition can, according to the preceding formula, be used to determine whether the contrast ratio between these colors meets the requirement (@ContrastRatio) or not. When the value of the calculated contrast ratio is lower than the value set by @ContrastRatio, the text-shadow: 0 0 2px {color}; ruleset will be mixed in, where {color} will be white or black depending on the relative luminance of the color set by the @buttonTextColor variable.

There's more…

In this recipe, you added a stroke to the web text to improve accessibility. First, you will have to bear in mind that improving accessibility by adding a stroke to your text is not a proven method. Also, the accessibility of stroked text cannot be verified automatically by calculating color contrast ratios. Other options to solve this issue are to increase the font size or to change the background color itself. You can read how to change the background color dynamically based on color contrast ratios in the Changing the background color dynamically recipe.

When you read the exceptions of the 1.4.6 Contrast (Enhanced) paragraph of the WCAG 2.0 guidelines, you will find that large-scale text requires a color contrast ratio of only 4.5 instead of 7.0 to meet the requirements of the AAA Level. Large-scale text is defined as at least 18 point, or 14 point bold, or a font size that would yield the equivalent size for Chinese, Japanese, and Korean (CJK) fonts. To try this, you could replace the text-shadow properties in the Less code of step 1 of this recipe with font-size: 14pt; and font-weight: bold; declarations. After this, you can inspect the results in your browser again. Depending on, among other things, the values you have chosen for the @buttonTextColor and @ContrastRatio variables, you will again see the original colored buttons on the left-hand side and the high-contrast buttons on the right-hand side. Note that when you set the @ContrastRatio variable to 7.0, the code does not check whether the larger font indeed meets the 4.5 contrast ratio requirement.

Changing the background color dynamically

When you define some basic colors to generate, for instance, a set of button elements, you can use the built-in contrast() function to set the font color. The built-in contrast() function provides the highest possible contrast, but does not guarantee that the contrast ratio is also high enough to meet your accessibility requirements. In this recipe, you will learn how to change your basic color automatically to meet the required contrast ratio.

Getting ready

You can inspect the results of this recipe in your browser using the client-side less.js compiler. Use your favorite editor to create the HTML and Less code in this recipe.
You will have to create a file structure that keeps the backgroundcolors.less file, the less.min.js compiler, and an index.html file together in one folder.

How to do it…

1. Create a Less file named backgroundcolors.less, and write the following Less code into it:

@safe: green;
@danger: red;
@warning: orange;
@ContrastRatio: 7.0; //AAA
@precision: 1%;
@buttonTextColor: black;
@threshold: 43;

.setcontrastcolor(@startcolor) when (luma(@buttonTextColor) < @threshold) {
  .contrastcolor(@startcolor) when (luma(@startcolor) < 100) and (((luma(@startcolor)+5)/(luma(@buttonTextColor)+5)) < @ContrastRatio) {
    .contrastcolor(lighten(@startcolor,@precision));
  }
  .contrastcolor(@startcolor) when (@startcolor = color("white")),(((luma(@startcolor)+5)/(luma(@buttonTextColor)+5)) >= @ContrastRatio) {
    @contrastcolor: @startcolor;
  }
  .contrastcolor(@startcolor);
}

.setcontrastcolor(@startcolor) when (default()) {
  .contrastcolor(@startcolor) when (luma(@startcolor) < 100) and (((luma(@buttonTextColor)+5)/(luma(@startcolor)+5)) < @ContrastRatio) {
    .contrastcolor(darken(@startcolor,@precision));
  }
  .contrastcolor(@startcolor) when (luma(@startcolor) = 100),(((luma(@buttonTextColor)+5)/(luma(@startcolor)+5)) >= @ContrastRatio) {
    @contrastcolor: @startcolor;
  }
  .contrastcolor(@startcolor);
}

button {
  padding:10px;
  border-radius:10px;
  color:@buttonTextColor;
  width:200px;
}

.safe {
  .setcontrastcolor(@safe);
  background-color: @contrastcolor;
}

.danger {
  .setcontrastcolor(@danger);
  background-color: @contrastcolor;
}

.warning {
  .setcontrastcolor(@warning);
  background-color: @contrastcolor;
}

2. Create an HTML file and save this file as index.html. Write the following HTML code into this index.html file:

<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <title>High contrast buttons</title>
  <link rel="stylesheet/less" type="text/css" href="backgroundcolors.less">
  <script src="less.min.js" type="text/javascript"></script>
</head>
<body>
  <button style="background-color:green;">safe</button>
  <button class="safe">safe</button><br>
  <button style="background-color:red;">danger</button>
  <button class="danger">danger</button><br>
  <button style="background-color:orange;">warning</button>
  <button class="warning">warning</button>
</body>
</html>

3. Now load the index.html file from step 2 in your browser. When all has gone well, you will see the original colored buttons on the left-hand side and the high-contrast buttons on the right-hand side.

How it works…

The guarded .setcontrastcolor(){} mixins are used to determine whether the background colors should be lightened or darkened, depending on whether the color set by the @buttonTextColor variable is dark or light. When the color set by @buttonTextColor is a dark color, with a relative luminance below the threshold value set by the @threshold variable, the background colors should be made lighter. For light text colors, the background colors should be made darker. Inside each .setcontrastcolor(){} mixin, a second set of mixins has been defined. These guarded .contrastcolor(){} mixins construct a recursive loop, as described in the Building loops leveraging mixin guards recipe. In each step of the recursion, the guards test whether the contrast ratio set by the @ContrastRatio variable has been reached or not. When the contrast ratio does not meet the requirements, the @startcolor variable will be darkened or lightened by the percentage set by the @precision variable, with the built-in darken() and lighten() functions.
When the required contrast ratio has been reached, or the color defined by the @startcolor variable has become white or black, the modified color value of @startcolor will be assigned to the @contrastcolor variable. The guarded .contrastcolor(){} mixins are used as functions, as described in the Using mixins as functions recipe, to assign the @contrastcolor variable that is used to set the background-color property of the button selectors.

There's more…

A small value of the @precision variable will increase the number of recursions needed to find the required colors, as there will be more and smaller steps. As the number of recursions grows, the compilation time increases too. When you choose a bigger value for @precision, the contrast color found might differ from the start color more than needed to meet the contrast ratio requirement.

When you choose, for instance, a dark button text color that is not black, all or some base background colors will be set to white. The chances of ending up at white increase for high values of the @ContrastRatio variable. The recursion will stop when white (or black) has been reached, as you cannot make the white color any lighter. When the recursion stops on reaching white or black, the colors set by the mixins in this recipe do not meet the required color contrast ratios.

Aggregating values under a single property

The merge feature of Less enables you to merge property values into a list under a single property. Each list can be either space-separated or comma-separated. The merge feature can be useful to define a property that accepts a list as a value. For instance, the background property accepts a comma-separated list of backgrounds.

Getting ready

For this recipe, you will need a text editor and a Less compiler.

How to do it…

1. Create a file called defaultfonts.less that contains the following Less code:

.default-fonts() {
  font-family+: Helvetica, Arial, sans-serif;
}
p {
  font-family+: "Helvetica Neue";
  .default-fonts();
}

2. Compile the defaultfonts.less file from step 1 using the following command in the console:

lessc defaultfonts.less

3. Inspect the CSS code output on the console after compilation and you will see that it looks like the following code:

p {
  font-family: "Helvetica Neue", Helvetica, Arial, sans-serif;
}

How it works…

When the compiler finds the plus sign (+) before the assignment sign (:), it will merge the values into a comma-separated list and will not create a new property in the CSS code.

There's more…

Since Version 1.7 of Less, you can also merge a property's values separated by a space instead of a comma. For space-separated values, you should use the +_ sign instead of the + sign, as can be seen in the following code:

.text-overflow(@text-overflow: ellipsis) {
  text-overflow+_ : @text-overflow;
}
p, .text-overflow {
  .text-overflow();
  text-overflow+_ : ellipsis;
}

The preceding Less code will compile into the CSS code, as follows:

p, .text-overflow {
  text-overflow: ellipsis ellipsis;
}

Note that the text-overflow property doesn't force an overflow to occur; you will have to explicitly set, for instance, the overflow property to hidden for the element.

Summary

This article walked you through the use of parameterized mixins and showed you how to use guards. Guards can be used as an alternative for if…else statements and make it possible to construct iterative loops in Less.
Learning NServiceBus - Preparing for Failure

Packt
09 Feb 2015
19 min read
In this article by David Boike, author of the book Learning NServiceBus Second Edition, we will explore the tools that NServiceBus gives us to stare failure in the face and laugh. We'll discuss error queues, automatic retries, and controlling how those retries occur. We'll also discuss how to deal with messages that may be transient and should not be retried in certain conditions. Lastly, we'll examine the difficulty of web service integrations that do not handle retries cleanly on their own. (For more resources related to this topic, see here.)

Fault tolerance and transactional processing

In order to understand the fault tolerance we gain from using NServiceBus, let's first consider what happens without it. Let's order something from a fictional website and watch what might happen to process that order. On our fictional website, we add Batman Begins to our shopping cart and then click on the Checkout button. While our cursor is spinning, the following process is happening:

1. Our web request is transmitted to the web server.
2. The web application knows it needs to make several database calls, so it creates a new transaction scope.
3. Database Call 1 of 3: The shopping cart information is retrieved from the database.
4. Database Call 2 of 3: An Order record is inserted.
5. Database Call 3 of 3: We attempt to insert OrderLine records, but instead get Error Message: Transaction (Process ID 54) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
6. This exception causes the transaction to roll back.

Ugh! If you're using SQL Server and you've never seen this, you haven't been coding long enough. It never happens during development; there just isn't enough load. It's even possible that this won't occur during load testing. It will likely occur during heavy load at the worst possible time, for example, right after your big launch. So obviously, we should log the error, right? But then what happens to the order? Well, that's gone, and your boss may not be happy about losing that revenue. And what about our user? They will likely get a nasty error message. We won't want to divulge the actual exception message, so they will get something like, "An unknown error has occurred. The system administrator has been notified. Please try again later." However, the likelihood that they want to trust their credit card information to a website that has already blown up in their face once is quite low. So how can we do better? Here's how this scenario could have happened with NServiceBus:

1. The web request is transmitted to the web server.
2. We add the shopping cart identifier to an NServiceBus command and send it through the Bus.
3. We redirect the user to a new page that displays the receipt, even though the order has not yet been processed.
4. Elsewhere, an Order service is ready to start processing a new message:
   1. The service creates a new transaction scope, and receives the message within the transaction.
   2. Database Call 1 of 3: The shopping cart information is retrieved from the database.
   3. Database Call 2 of 3: An Order record is inserted.
   4. Database Call 3 of 3: Deadlock! The exception causes the database transaction to roll back.
5. The transaction controlling the message also rolls back. The order is back in the queue.

This is great news! The message is back in the queue, and by default, NServiceBus will automatically retry this message a few times.
Generally, deadlocks are a temporary condition, and simply trying again is all that is needed. After all, the SQL Server exception says Rerun the transaction. Meanwhile, the user has no idea that there was ever a problem. It will just take a little longer (in the order of milliseconds or seconds) to process the order.

Error queues and replay

Whenever you talk about automatic retries in a messaging environment, you must invariably consider poison messages. A poison message is a message that cannot be resolved by a retry because it will consistently result in an error. A deadlock is a transient error. We can reasonably expect deadlocks and other transient errors to resolve by themselves without any intervention. Poison messages, on the other hand, cannot resolve themselves. Sometimes, this is because of an extended outage. At other times, it is purely our fault—an exception we didn't catch or an input condition we didn't foresee.

Automatic retries

If we retry poison messages in perpetuity, they will create a blockage in our incoming queue of messages. They will retry over and over, and valid messages will get stuck behind them, unable to make it through. For this reason, we must set a reasonable limit on retries, and after failing too many times, poison messages must be removed from the processing queue and stored someplace else. NServiceBus handles all of this for us. By default, NServiceBus will try to process a message five times, after which it will move the message to an error queue, configured by the MessageForwardingInCaseOfFaultConfig configuration section:

<MessageForwardingInCaseOfFaultConfig ErrorQueue="error" />

It is in this error queue that messages will wait for administrative intervention. In fact, you can even specify a different server to collect these messages, which allows you to configure one central point in a system where you watch for and deal with all failures:

<MessageForwardingInCaseOfFaultConfig ErrorQueue="error@SERVER" />

As mentioned previously, five failed attempts form the default metric for a failed message, but this is configurable via the TransportConfig configuration section:

<section name="TransportConfig" type="NServiceBus.Config.TransportConfig, NServiceBus.Core" />
...
<TransportConfig MaxRetries="3" />

You could also generate the TransportConfig section using the Add-NServiceBusTransportConfig PowerShell cmdlet. Keep two things in mind:

Depending upon how you read it, MaxRetries can be a somewhat confusing name. What it really means is the total number of tries, so a value of 5 will result in the initial attempt plus 4 retries. This has the odd side effect that MaxRetries="0" is the same as MaxRetries="1". In both instances, the message would be attempted once.
During development, you may want to limit retries to MaxRetries="1" so that a single error doesn't cause a nausea-inducing wall of red that flushes your console window's buffer, leaving you unable to scroll up to see what came before. You can then enable retries in production by deploying the endpoint with a different configuration.

Replaying errors

What happens to those messages unlucky enough to fail so many times that they are unceremoniously dumped in an error queue? "I thought you said that Alfred would never give up on us!" you cry. As it turns out, this is just a temporary holding pattern that enables the rest of the system to continue functioning, while the errant messages await some sort of intervention, which can be human or automated based on your own business rules.
Let's say our message handler divides two numbers from the incoming message, and we forget to account for the possibility that one of those numbers might be zero and that dividing by zero is frowned upon. At this point, we need to fix the error somehow. Exactly what we do will depend upon your business requirements:

If the messages were sent in error, we can fix the code that was sending them. In this case, the messages in the error queue are junk and can be discarded.
We can check the inputs on the message handler, detect the divide-by-zero condition, and take compensating actions, as the sketch after this list shows. This may mean returning from the message handler, effectively discarding any divide-by-zero messages that are processed, or it may mean doing new work or sending new messages. In this case, we may want to replay the error messages after we have deployed the new code.
We may want to fix both the sending and receiving side.
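A minimal sketch of the second option (the DivideCmd message and its properties are hypothetical): the handler validates its input and returns early instead of throwing, so the poison message is consumed rather than being retried and dumped in the error queue:

public class DivideHandler : IHandleMessages<DivideCmd>
{
    public void Handle(DivideCmd message)
    {
        if (message.Divisor == 0)
        {
            // Compensating action: a zero divisor can never succeed, so
            // discard the message (or send a new message) rather than throw
            return;
        }

        var quotient = message.Dividend / message.Divisor;
        // ...continue processing with the quotient
    }
}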
Second-level retries

Automatically retrying error messages and sending repeated errors to an error queue is a pretty good strategy to manage both transient errors, such as deadlocks, and poison messages, such as an unrecoverable exception. However, as it turns out, there is a gray area in between, which is best referred to as semi-transient errors. These include incidents such as a web service being down for a few seconds, or a database being temporarily offline. Even with a SQL Server failover cluster, the failover procedure can take upwards of a minute depending on its size and traffic levels. During a time like this, the automatic retries will be executed immediately and great hordes of messages might go to the error queue, requiring an administrator to take notice and return them to their source queues. But is this really necessary? As it turns out, it is not. NServiceBus contains a feature called Second-Level Retries (SLR) that adds additional sets of retries after a wait. By default, the SLR will add three additional retry sessions, with the wait growing by 10 seconds each time. By contrast, the original set of retries is commonly referred to as First-Level Retries (FLR). Let's track a message's full path to complete failure, assuming default settings:

1. Attempt to process the message five times, then wait for 10 seconds.
2. Attempt to process the message five times, then wait for 20 seconds.
3. Attempt to process the message five times, then wait for 30 seconds.
4. Attempt to process the message five times, and then send the message to the error queue.

Remember that, with five first-level retries, NServiceBus attempts to process the message five times on every pass. Using second-level retries, almost every message should be able to be processed unless it is definitely a poison message that can never be successfully processed.

Be warned, however, that using SLR has its downsides too. The first is ignorance of transient errors. If an error never makes it to an error queue and we never manually check the error logs, there's a chance we might miss it completely. For this reason, it is smart to always keep an eye on error logs. A random deadlock now and then is not a big deal, but if they happen all the time, it is probably still worth some work to improve the code so that the deadlock is not as frequent.

An additional risk lies in the time needed to process a true poison message through all the retry levels. Not accounting for any time taken to process the message itself 20 times or to wait for other messages in the queue, the use of second-level retries with the default settings results in an entire minute of waiting before you see the message in an error queue. If your business stakeholders require the message to either succeed or fail in 30 seconds, then you cannot possibly meet those requirements.

Due to the asynchronous nature of messaging, we should be careful never to assume that messages in a distributed system will arrive in any particular order. However, it is still good to note that the concept of retries exacerbates this problem. If Message A and then Message B are sent in order, and Message B succeeds immediately but Message A has to wait in an error queue for a while, then they will most certainly be processed out of order.

Luckily, second-level retries are completely configurable. The configuration element is shown here with the default settings:

<section name="SecondLevelRetriesConfig" type="NServiceBus.Config.SecondLevelRetriesConfig, NServiceBus.Core"/>
...
<SecondLevelRetriesConfig Enabled="true"
                          TimeIncrease="00:00:10"
                          NumberOfRetries="3" />

You could also generate the SecondLevelRetriesConfig section using the Add-NServiceBusSecondLevelRetriesConfig PowerShell cmdlet. Keep in mind that you may want to disable second-level retries, like first-level retries, during development for convenience, and then enable them in production.

Messages that expire

Messages that lose their business value after a specific amount of time are an important consideration with respect to potential failures. Consider a weather reporting system that reports the current temperature every few minutes. How long is that data meaningful? Nobody seems to care what the temperature was 2 hours ago; they want to know what the temperature is now! NServiceBus provides a method to cause messages to automatically expire after a given amount of time. Unlike storing this information in a database, you don't have to run any batch jobs or take any other administrative action to ensure that old data is discarded. You simply mark the message with an expiration date and when that time arrives, the message simply evaporates into thin air:

[TimeToBeReceived("01:00:00")]
public class RecordCurrentTemperatureCmd : ICommand
{
    public double Temperature { get; set; }
}

This example shows that the message must be received within one hour of being sent, or it is simply deleted by the queuing system. NServiceBus isn't actually involved in the deletion at all; it simply tells the queuing system how long to allow the message to live. If a message fails, however, and arrives at an error queue, NServiceBus will not include the expiration date, in order to give you a chance to debug the problem. It would be very confusing to try to find an error message that had disappeared into thin air! Another valuable use for this attribute is for high-volume message types, where a communication failure between servers or extended downtime could cause a huge backlog of messages to pile up either at the sending or the receiving side. Running out of disk space to store messages is a show-stopper for most message-queuing systems, and the TimeToBeReceived attribute is the way to guard against it. However, this means we are throwing away data, so we need to be very careful when applying this strategy. It should not simply be used as a reaction to low disk space!
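As a sketch of that high-volume use case (the message type and its contents are hypothetical): a status message that is republished every minute can safely evaporate after a few minutes, so a backlog can never grow without bound:

// Hypothetical high-volume status message; stale heartbeats have no value,
// so let the queuing system discard any copy older than five minutes
[TimeToBeReceived("00:05:00")]
public class ServerHeartbeatCmd : ICommand
{
    public string ServerName { get; set; }
    public DateTime ReportedAtUtc { get; set; }
}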
Auditing messages

At times, it can be difficult to debug a distributed system. Commands and events are sent all around, but after they are processed, they go away. We may be able to tell what will happen to a system in the future by examining queued messages, but how can we analyze what happened in the past? For this reason, NServiceBus contains an auditing function that will enable an endpoint to send a copy of every message it successfully processes to a secondary location, a queue that is generally hosted on a separate server. This is accomplished by adding an attribute or two to the UnicastBusConfig section of an endpoint's configuration:

<UnicastBusConfig ForwardReceivedMessagesTo="audit@SecondaryServer"
                  TimeToBeReceivedOnForwardedMessages="1.00:00:00">
  <MessageEndpointMappings>
    <!-- Mappings go here -->
  </MessageEndpointMappings>
</UnicastBusConfig>

In this example, the endpoint will forward a copy of all successfully processed messages to a queue named audit on a server named SecondaryServer, and those messages will expire after one day. While it is not required to use the TimeToBeReceivedOnForwardedMessages parameter, it is highly recommended. Otherwise, it is possible (even likely) that messages will build up in your audit queue until you run out of available storage, which you would really like to avoid. The exact time limit you use is dependent upon the volume of messages in your system and how much storage your queuing system has available. You don't even have to design your own tool to monitor these audit messages; the Particular Service Platform has that job covered for you. NServiceBus includes the auditing configuration in new endpoints by default so that ServiceControl, ServiceInsight, and ServicePulse can keep tabs on your system.

Web service integration and idempotence

When talking about managing failure, it's important to spend a few minutes discussing web services because they are such a special case; they are just too good at failing. Compare this to sending an email from a message handler: when the message is processed, the email either will be sent or it won't; there really aren't any in-between cases. In reality, when sending an email, it is technically possible that we could call the SMTP server, successfully send an email, and then the server could fail before we are able to finish marking the message as processed. However, in practice, this chance is so infinitesimal that we generally assume it to be zero. Even if it is not zero, we can assume in most cases that sending a user a duplicate email one time in a few million won't be the end of the world. Web services are another story. There are just so many ways a web service can fail:

A DNS or network failure may not let us contact the remote web server at all
The server may receive our request, but then throw an error before any state is modified on the server
The server may receive our request and successfully process it, but a communication problem prevents us from receiving the 200 OK response
The connection times out, thus ignoring any response the server may have been about to send us

For this reason, it makes our lives a lot easier if all the web services we ever have to deal with are idempotent, meaning they can be invoked multiple times with no adverse effects. Any service that queries data without modifying it is inherently idempotent. We don't have to worry about how many times we call a service if doing so doesn't change any data. Where we start to get into trouble is when we begin mutating state. Sometimes, we can modify state safely.
Consider an example used previously regarding registering for alert notifications. Let's assume that on the first try, the third-party service technically succeeds in registering our user for alerts, but it takes too long to do so and we receive a timeout error. When we retry, we ask to subscribe the email address to alerts again, and the web service call succeeds. What's the net effect? Either way, the user is subscribed for alerts. This web service is idempotent.

The classic example of a non-idempotent web service is a credit card transaction processor. If the first attempt to authorize a credit card succeeds on the server and we retry, we may double charge our customer! This is not an acceptable business case, and you will quickly find many people angry with you.

In these cases, we need to do a little work ourselves because, unfortunately, it's impossible for NServiceBus to know whether your web service is idempotent or not. Generally, this work takes the form of recording each step we perform in durable storage in real time, and then querying that storage to see which steps have been attempted. In our example of credit card processing, the happy path approach would look like this:

1. Record our intent to make a web service call to durable storage.
2. Make the actual web service call.
3. Record the results of the web service call to durable storage.
4. Send commands or publish events with the results of the web service call.

Now, if the message is retried, we can inspect the durable storage and decide which step to jump to and whether any compensating actions need to be taken first. If we have recorded our intent to call the web service but do not see any evidence of a response, we can query the credit card processor based on an order or transaction identifier. Then we will know whether we need to retry the authorization or just get the results of the already completed authorization. If we see that we have already made the web service call and received the results, then we know that the web service call was successful but some exception happened before the resulting messages could be sent. In response, we can just take the results and send the messages without requiring any further web service invocations.

It's important to be able to handle the case where our durable storage throws an exception, rendering us unable to make our state persist. This is why it's so important to record the intent to do something before attempting it: that way, we know the difference between never having done something and attempting it without necessarily knowing the results.

[Figure: the happy path for a non-idempotent web service call: record intent, call the service, record the result, then send or publish the resulting messages]

The choice of durable storage strategy for this process is up to you, and a rough sketch of the pattern in a message handler is shown below. If you choose to use a database, however, you must remember to exempt it from the message handler's ambient transaction, or those changes will also get rolled back if and when the handler fails.
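The following is one possible sketch of that bookkeeping, not code from the book or from NServiceBus itself: the IDurableLog and IPaymentGateway abstractions, the AttemptRecord type, and the AuthorizeCardCmd and CardAuthorizationCompletedEvent messages are all invented here for illustration, and a real implementation would also need error handling and compensating actions.

using System;
using NServiceBus;

public class AuthorizeCardCmd : ICommand
{
    public Guid TransactionId { get; set; }
    public decimal Amount { get; set; }
}

public class CardAuthorizationCompletedEvent : IEvent
{
    public Guid TransactionId { get; set; }
    public bool Authorized { get; set; }
}

// Hypothetical abstractions over durable storage and the card processor.
public interface IDurableLog
{
    AttemptRecord Find(Guid transactionId);
    void RecordIntent(Guid transactionId);           // step 1
    void RecordResult(Guid transactionId, bool ok);  // step 3
}

public interface IPaymentGateway
{
    bool Authorize(Guid transactionId, decimal amount);
    bool? QueryResult(Guid transactionId); // null if the processor never saw it
}

public class AttemptRecord
{
    public bool IntentRecorded { get; set; }
    public bool? Result { get; set; }
}

public class AuthorizeCardHandler : IHandleMessages<AuthorizeCardCmd>
{
    public IDurableLog Log { get; set; }
    public IPaymentGateway Gateway { get; set; }
    public IBus Bus { get; set; }

    public void Handle(AuthorizeCardCmd cmd)
    {
        var attempt = Log.Find(cmd.TransactionId) ?? new AttemptRecord();

        if (!attempt.IntentRecorded)
        {
            // Step 1: record the intent before calling out. This write must
            // escape the ambient transaction (see the TransactionScope
            // example that follows) or it would roll back with the handler.
            Log.RecordIntent(cmd.TransactionId);
        }

        if (attempt.Result == null)
        {
            // If intent was already recorded, an earlier attempt may have
            // reached the processor; query it before authorizing again.
            var outcome = attempt.IntentRecorded
                ? Gateway.QueryResult(cmd.TransactionId)
                : (bool?)null;

            // Steps 2 and 3: call only if needed, then record the result.
            attempt.Result = outcome ?? Gateway.Authorize(cmd.TransactionId, cmd.Amount);
            Log.RecordResult(cmd.TransactionId, attempt.Result.Value);
        }

        // Step 4: publish the outcome; safe to reach on a retry as well.
        Bus.Publish(new CardAuthorizationCompletedEvent
        {
            TransactionId = cmd.TransactionId,
            Authorized = attempt.Result.Value
        });
    }
}

The QueryResult call stands in for whatever lookup-by-transaction-identifier facility your processor offers; it is what lets a retry discover the outcome of an earlier attempt instead of authorizing the card twice.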
In order to escape the transaction to write to durable storage, use a new TransactionScope object to suppress the ambient transaction, like this:

public void Handle(CallNonIdempotentWebServiceCmd cmd)
{
    // Under control of the ambient transaction

    using (var ts = new TransactionScope(TransactionScopeOption.Suppress))
    {
        // Not under transaction control
        // Write updates to durable storage here
        ts.Complete();
    }

    // Back under control of the ambient transaction
}

Summary

In this article, we considered the inevitable failure of our software and how NServiceBus can help us to be prepared for it. You learned how NServiceBus promises fault tolerance within every message handler so that messages are never dropped or forgotten, but instead retried and then held in an error queue if they cannot be successfully processed. Once we fix the error, or take some other administrative action, we can replay those messages. In order to avoid flooding our system with useless messages during a failure, you learned how to cause messages that lose their business value after a specific amount of time to expire. Finally, you learned how to build auditing into a system by forwarding a copy of all messages for later inspection, and how to properly deal with the challenges involved in calling external web services. In this article, we dealt exclusively with NServiceBus endpoints hosted by the NServiceBus Host process.
Fronting an external API with Ruby on Rails: Part 1
Mike Ball
09 Feb 2015
6 min read
Historically, a conventional Ruby on Rails application leverages server-side business logic, a relational database, and a RESTful architecture to serve dynamically-generated HTML. JavaScript-intensive applications and the widespread use of external web APIs, however, somewhat challenge this architecture. In many cases, Rails is tasked with performing as an orchestration layer, collecting data from various backend services and serving re-formatted JSON or XML to clients. In such instances, how is Rails' model-view-controller architecture still relevant? In this two-part post series, we'll create a simple Rails backend that makes requests to an external XML-based web service and serves JSON. We'll use RSpec for tests and Jbuilder for view rendering.

What are we building?

We'll create Noterizer, a simple Rails application that requests XML from externally hosted endpoints and re-renders the XML data as JSON at a single URL. To assist in this post, I've created NotesXmlService, a basic web application that serves two XML-based endpoints:

http://NotesXmlService.herokuapp.com/note-one
http://NotesXmlService.herokuapp.com/note-two

Why is this necessary in a real-world scenario?

Fronting external endpoints with an application like Noterizer opens up a few opportunities:

Noterizer's endpoint could serve JavaScript clients who can't perform HTTP requests across domain names to the original, external API.
Noterizer's endpoint could reformat the externally hosted data to better serve its own clients' data formatting preferences.
Noterizer's endpoint is a single interface to the data; multiple requests are abstracted away by its backend.
Noterizer provides caching opportunities. While it's beyond the scope of this series, Rails can cache external request data, thus offloading traffic to the external API and avoiding any terms of service or rate limit violations imposed by the external service.

Setup

For this series, I'm using Mac OS 10.9.4, Ruby 2.1.2, and Rails 4.1.4. I'm assuming some basic familiarity with Git and the command line.

Clone and set up the repo

I've created a basic Rails 4 Noterizer app. Clone its repo, enter the project directory, and check out its tutorial branch:

$ git clone http://github.com/mdb/noterizer && cd noterizer && git checkout tutorial

Install its dependencies:

$ bundle install

Set up the test framework

Let's install RSpec for testing. Add the following to the project's Gemfile:

gem 'rspec-rails', '3.0.1'

Install rspec-rails:

$ bundle install

There's now an rspec generator available for the rails command. Let's generate a basic RSpec installation:

$ rails generate rspec:install

This creates a few new files in a spec directory:

├── spec
│   ├── rails_helper.rb
│   └── spec_helper.rb

We're going to make a few adjustments to our RSpec installation. First, because Noterizer does not use a relational database, delete the following ActiveRecord reference in spec/rails_helper.rb:

# Checks for pending migrations before tests are run.
# If you are not using ActiveRecord, you can remove this line.
ActiveRecord::Migration.maintain_test_schema!

Next, configure RSpec to be less verbose in its warning output; such verbose warnings are beyond the scope of this series. Remove the following line from .rspec:

--warnings

The RSpec installation also provides a spec rake task. Test this by running the following:

$ rake spec

You should see the following output, as there aren't yet any RSpec tests:

No examples found.

Finished in 0.00021 seconds (files took 0.0422 seconds to load)
0 examples, 0 failures

Note that a default Rails installation assumes tests live in a tests directory. RSpec uses a spec directory. For clarity's sake, you're free to delete the test directory from Noterizer.
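If you'd like to sanity-check the installation before moving on, a throwaway spec works well. This file is our own addition for illustration, not part of the Noterizer repo; delete it once you've seen rake spec go green:

# spec/sanity_spec.rb
# A disposable example to confirm the RSpec installation is wired up.
require 'spec_helper'

describe 'the RSpec installation' do
  it 'runs examples' do
    expect(1 + 1).to eq 2
  end
end

Running rake spec should now report 1 example, 0 failures.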
Building a basic route and controller

Currently, Noterizer does not have any URLs; we'll create a single /notes URL route.

Creating the controller

First, generate a controller:

$ rails g controller notes

Note that this created quite a few files, including JavaScript files, stylesheet files, and a helpers module. These are not relevant to our NotesController, so let's undo our controller generation by removing all untracked files from the project. Note that you'll want to commit any changes you do want to preserve.

$ git clean -f

Now, open config/application.rb and add the following generator configuration:

config.generators do |g|
  g.helper false
  g.assets false
end

Re-running the generate command will now create only the desired files:

$ rails g controller notes

Testing the controller

Let's add a basic NotesController#index test to spec/controllers/notes_controller_spec.rb. The test looks like this:

require 'rails_helper'

describe NotesController, :type => :controller do
  describe '#index' do
    before :each do
      get :index
    end

    it 'successfully responds to requests' do
      expect(response).to be_success
    end
  end
end

This test currently fails when running rake spec, as we haven't yet created a corresponding route. Add the following route to config/routes.rb:

get 'notes' => 'notes#index'

The test still fails when running rake spec, because there isn't a proper #index controller action. Create an empty index method in app/controllers/notes_controller.rb:

class NotesController < ApplicationController
  def index
  end
end

rake spec still yields failing tests, this time because we haven't yet created a corresponding view. Let's create a view:

$ touch app/views/notes/index.json.jbuilder

To use this view, we'll need to tweak the NotesController a bit. Let's ensure that requests to the /notes route always return JSON via a before_filter run before each controller action:

class NotesController < ApplicationController
  before_filter :force_json

  def index
  end

  private

  def force_json
    request.format = :json
  end
end

Now, rake spec yields passing tests:

$ rake spec
.

Finished in 0.0107 seconds (files took 1.09 seconds to load)
1 example, 0 failures

Let's write one more test, asserting that the response returns the correct content type. Add the following to spec/controllers/notes_controller_spec.rb:

it 'returns JSON' do
  expect(response.content_type).to eq 'application/json'
end

Assuming rake spec confirms that the second test passes, you can also run the Rails server via the rails server command and visit the currently-empty Noterizer http://localhost:3000/notes URL in your web browser.

Conclusion

In this first part of the series, we created the basic route and controller for Noterizer, a basic example of a Rails application that fronts an external API. In the next blog post (Part 2), you will learn how to build out the backend, test the model, build up and test the controller, and also test the app with Jbuilder.

About this Author

Mike Ball is a Philadelphia-based software developer specializing in Ruby on Rails and JavaScript. He works for Comcast Interactive Media, where he helps build web-based TV and video consumption applications.
Transformations Using Map/Reduce
Packt
05 Feb 2015
19 min read
In this article written by Adam Boduch, author of the book Lo-Dash Essentials, we'll be looking at all the interesting things we can do with Lo-Dash and the map/reduce programming model. We'll start off with the basics, getting our feet wet with some basic mappings and basic reductions. As we progress through the article, we'll introduce more advanced techniques for thinking in terms of map/reduce with Lo-Dash. The goal, once you've reached the end of this article, is to have a solid understanding of the Lo-Dash functions available that aid in mapping and reducing collections. Additionally, you'll start to notice how disparate Lo-Dash functions work together in the map/reduce domain. Ready?

(For more resources related to this topic, see here.)

Plucking values

Consider plucking as your informal introduction to mapping, because that's essentially what it's doing: taking an input collection and mapping it to a new collection, plucking only the properties we're interested in. This is shown in the following example:

var collection = [
  { name: 'Virginia', age: 45 },
  { name: 'Debra', age: 34 },
  { name: 'Jerry', age: 55 },
  { name: 'Earl', age: 29 }
];

_.pluck(collection, 'age');
// → [ 45, 34, 55, 29 ]

This is about as simple a mapping operation as you'll find. In fact, you can do the same thing with map():

var collection = [
  { name: 'Michele', age: 58 },
  { name: 'Lynda', age: 23 },
  { name: 'William', age: 35 },
  { name: 'Thomas', age: 41 }
];

_.map(collection, 'name');
// →
// [
//   "Michele",
//   "Lynda",
//   "William",
//   "Thomas"
// ]

As you'd expect, the output here is exactly the same as it would be with pluck(). In fact, pluck() actually uses the map() function under the hood. The callback passed to map() is constructed using property(), which just returns the specified property value. The map() function falls back to this plucking behavior when a string instead of a function is passed to it. With that brief introduction to the nature of mapping, let's dig a little deeper and see what's possible in mapping collections.

Mapping collections

In this section, we'll explore mapping collections. Mapping one collection to another ranges from composing really simple callbacks, as we saw in the preceding section, to quite sophisticated ones. These callbacks that map each item in the collection can include or exclude properties and can calculate new values. We can also apply functions to the items. Additionally, we'll address the issue of filtering collections and how this can be done in conjunction with mapping.

Including and excluding properties

When applied to an object, the pick() function generates a new object containing only the specified properties. The opposite of this function, omit(), generates an object with every property except those specified. Since these functions work fine for individual object instances, why not use them in a collection? You can use both of these functions to shed properties from collections by mapping them to new ones, as shown in the following code:

var collection = [
  { first: 'Ryan', last: 'Coleman', age: 23 },
  { first: 'Ann', last: 'Sutton', age: 31 },
  { first: 'Van', last: 'Holloway', age: 44 },
  { first: 'Francis', last: 'Higgins', age: 38 }
];

_.map(collection, function(item) {
  return _.pick(item, [ 'first', 'last' ]);
});
// →
// [
//   { first: "Ryan", last: "Coleman" },
//   { first: "Ann", last: "Sutton" },
//   { first: "Van", last: "Holloway" },
//   { first: "Francis", last: "Higgins" }
// ]

Here, we're creating a new collection using the map() function.
The callback function supplied to map() is applied to each item in the collection. The item argument is the original item from the collection. The callback is expected to return the mapped version of that item, and this version could be anything, including the original item itself.

Be careful when manipulating the original item in map() callbacks. If the item is an object and it's referenced elsewhere in your application, it could have unintended consequences.

We're returning a new object as the mapped item in the preceding code. This is done using the pick() function. We only care about the first and the last properties. Our newly mapped collection looks identical to the original, except that no item has an age property. The omit() function can be used in the same way, as the following example shows:

var collection = [
  { first: 'Clinton', last: 'Park', age: 19 },
  { first: 'Dana', last: 'Hines', age: 36 },
  { first: 'Pete', last: 'Ross', age: 31 },
  { first: 'Annie', last: 'Cross', age: 48 }
];

_.map(collection, function(item) {
  return _.omit(item, 'first');
});
// →
// [
//   { last: "Park", age: 19 },
//   { last: "Hines", age: 36 },
//   { last: "Ross", age: 31 },
//   { last: "Cross", age: 48 }
// ]

The preceding code follows the same approach as the pick() code. The only difference is that we're excluding the first property from the newly created collection. You'll also notice that we're passing a string containing a single property name instead of an array of property names.

In addition to passing strings or arrays as the argument to pick() or omit(), we can pass in a function callback. This is suitable when it's not very clear which objects in a collection should have which properties. Using a callback like this inside a map() callback lets us perform detailed comparisons and transformations on collections while using very little code:

function invalidAge(value, key) {
  return key === 'age' && value < 40;
}

var collection = [
  { first: 'Kim', last: 'Lawson', age: 40 },
  { first: 'Marcia', last: 'Butler', age: 31 },
  { first: 'Shawna', last: 'Hamilton', age: 39 },
  { first: 'Leon', last: 'Johnston', age: 67 }
];

_.map(collection, function(item) {
  return _.omit(item, invalidAge);
});
// →
// [
//   { first: "Kim", last: "Lawson", age: 40 },
//   { first: "Marcia", last: "Butler" },
//   { first: "Shawna", last: "Hamilton" },
//   { first: "Leon", last: "Johnston", age: 67 }
// ]

The new collection generated by this code excludes the age property for items where the age value is less than 40. The callback supplied to omit() is applied to each key-value pair in the object. This code is a good illustration of the conciseness achievable with Lo-Dash. There's a lot of iterative code running here, and yet there is no for or while statement in sight.

Performing calculations

It's time now to turn our attention to performing calculations in our map() callbacks. This entails looking at the item and, based on its current state, computing a new value that will ultimately be mapped to the new collection. This could mean extending the original item's properties or replacing one with a newly computed value. Whatever the case, it's a lot easier to map these computations than to write your own logic that applies these functions to every item in your collection.
This is explained using the following example:

var collection = [
  { name: 'Valerie', jqueryYears: 4, cssYears: 3 },
  { name: 'Alonzo', jqueryYears: 1, cssYears: 5 },
  { name: 'Claire', jqueryYears: 3, cssYears: 1 },
  { name: 'Duane', jqueryYears: 2, cssYears: 0 }
];

_.map(collection, function(item) {
  return _.extend({
    experience: item.jqueryYears + item.cssYears,
    specialty: item.jqueryYears >= item.cssYears ? 'jQuery' : 'CSS'
  }, item);
});
// →
// [
//   {
//     experience: 7,
//     specialty: "jQuery",
//     name: "Valerie",
//     jqueryYears: 4,
//     cssYears: 3
//   },
//   {
//     experience: 6,
//     specialty: "CSS",
//     name: "Alonzo",
//     jqueryYears: 1,
//     cssYears: 5
//   },
//   {
//     experience: 4,
//     specialty: "jQuery",
//     name: "Claire",
//     jqueryYears: 3,
//     cssYears: 1
//   },
//   {
//     experience: 2,
//     specialty: "jQuery",
//     name: "Duane",
//     jqueryYears: 2,
//     cssYears: 0
//   }
// ]

Here, we're mapping each item in the original collection to an extended version of it. In particular, we're computing two new values for each item: experience and specialty. The experience property is simply the sum of the jqueryYears and cssYears properties. The specialty property is computed based on the larger of the jqueryYears and cssYears properties.

Earlier, I mentioned the need to be careful when modifying items in map() callbacks. In general, it's a bad idea. It helps to remember that map() is used to generate new collections, not to modify existing collections. Here's an illustration of the horrific consequences of not being careful:

var app = {},
    collection = [
      { name: 'Cameron', supervisor: false },
      { name: 'Lindsey', supervisor: true },
      { name: 'Kenneth', supervisor: false },
      { name: 'Caroline', supervisor: true }
    ];

app.supervisor = _.find(collection, { supervisor: true });

_.map(collection, function(item) {
  return _.extend(item, { supervisor: false });
});

console.log(app.supervisor);
// → { name: "Lindsey", supervisor: false }

The destructive nature of this callback is not obvious at all and is next to impossible for programmers to track down and diagnose. It essentially resets the supervisor attribute of every item. If these items are used anywhere else in the application, the supervisor property value will be clobbered whenever this map job is executed. If you need to reset values like this, ensure that the change is mapped to a new value and not made to the original; a sketch of the safe version follows the next example.

Mapping also works with primitive values as the item. Often, we'll have an array of primitive values that we'd like transformed into an alternative representation. For example, let's say you have an array of sizes, expressed in bytes. You can map those arrays to a new collection with those sizes expressed as human-readable values, using the following code:

function bytes(b) {
  var units = [ 'B', 'K', 'M', 'G', 'T', 'P' ],
      target = 0;
  while (b >= 1024) {
    b = b / 1024;
    target++;
  }
  return (b % 1 === 0 ? b : b.toFixed(1)) +
    units[target] + (target === 0 ? '' : 'B');
}

var collection = [
  1024,
  1048576,
  345198,
  120120120
];

_.map(collection, bytes);
// → [ "1KB", "1MB", "337.1KB", "114.6MB" ]

The bytes() function takes a numerical argument, which is the number of bytes to be formatted. This is the starting unit. We just keep incrementing the target unit until we have something that is less than 1024. For example, the last item in our collection maps to '114.6MB'. The bytes() function can be passed directly to map() since it expects values in the form found in our collection.
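Returning to the destructive supervisor example for a moment, here is a sketch of the non-destructive version: each item is copied into a fresh object before the flag is changed, so references held elsewhere in the application stay intact.

// Map to new objects rather than mutating the originals.
_.map(collection, function(item) {
  return _.extend({}, item, { supervisor: false });
});
// app.supervisor still points at the untouched original item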
Calling functions

We don't always have to write our own callback functions for map(). Wherever it makes sense, we're free to leverage Lo-Dash functions to map our collection items. For example, let's say we have a collection and we'd like to know the size of each item. There's a size() Lo-Dash function we can use as our map() callback, as follows:

var collection = [
  [ 1, 2 ],
  [ 1, 2, 3 ],
  { first: 1, second: 2 },
  { first: 1, second: 2, third: 3 }
];

_.map(collection, _.size);
// → [ 2, 3, 2, 3 ]

This code has the added benefit that the size() function returns consistent results, no matter what kind of argument is passed to it. In fact, any function that takes a single argument and returns a new value based on that argument is a valid candidate for a map() callback. For instance, we could also map the minimum and maximum value of each item:

var source = _.range(1000),
    collection = [
      _.sample(source, 50),
      _.sample(source, 100),
      _.sample(source, 150)
    ];

_.map(collection, _.min);
// → [ 20, 21, 1 ]

_.map(collection, _.max);
// → [ 931, 985, 991 ]

What if we want to map each item of our collection to a sorted version? Since we're not sorting the collection itself, we don't care about the item positions within the collection, but about the items themselves, if they're arrays, for instance. Let's see what happens with the following code:

var collection = [
  [ 'Evan', 'Veronica', 'Dana' ],
  [ 'Lila', 'Ronald', 'Dwayne' ],
  [ 'Ivan', 'Alfred', 'Doug' ],
  [ 'Penny', 'Lynne', 'Andy' ]
];

_.map(collection, _.compose(_.first, function(item) {
  return _.sortBy(item);
}));
// → [ "Dana", "Dwayne", "Alfred", "Andy" ]

This code uses the compose() function to construct a map() callback. Functions passed to compose() run from right to left, so the anonymous function runs first, returning a sorted version of the item via sortBy(). The first() item of this sorted list is then returned as the mapped item. The end result is a new collection containing the alphabetically first item from each array in our collection, in three lines of code. This is not bad.

Filtering and mapping

Filtering and mapping are two closely related collection operations. Filtering extracts only those collection items that are of particular interest in a given context. Mapping transforms collections to produce new collections. But what if you only want to map a certain subset of your collection? Then it would make sense to chain together the filtering and mapping operations, right? Here's an example of what that might look like:

var collection = [
  { name: 'Karl', enabled: true },
  { name: 'Sophie', enabled: true },
  { name: 'Jerald', enabled: false },
  { name: 'Angie', enabled: false }
];

_.compose(
  _.partialRight(_.map, 'name'),
  _.partialRight(_.filter, 'enabled')
)(collection);
// → [ "Karl", "Sophie" ]

This map is executed using compose() to build a function that is called right away, with our collection as the argument. The function is composed of two partials. We're using partialRight() on both arguments because we want the collection supplied as the leftmost argument in both cases. The first partial function is filter(). We're partially applying the enabled argument, so this function will filter our collection before it's passed to map(). This brings us to the next partial in the function composition. The result of filtering the collection is passed to map(), which has the name argument partially applied. The end result is a collection containing the names of the enabled items. The important thing to note about the preceding code is that the filtering operation takes place before the map() function is run.
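For comparison, here is the same filter-then-map job sketched with an intermediate variable instead of a composed function, using the same collection as above:

// Filter first, then map the surviving items to their names.
var enabledItems = _.filter(collection, 'enabled');

_.map(enabledItems, 'name');
// → [ "Karl", "Sophie" ]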
As this sketch shows, we could store the filtered collection in an intermediate variable instead of streamlining with compose(). Regardless of flavor, it's important that the items in your mapped collection correspond to the items in the source collection. It's conceivable to filter out items in the map() callback by not returning anything, but this is ill-advised, as it doesn't map well, both figuratively and literally.

Mapping objects

The previous section focused on collections and how to map them. But wait, objects are collections too, right? That is indeed correct, but it's worth differentiating between the more traditional array collections and plain objects. The main reason is that there are implications with ordering and keys when performing map/reduce. At the end of the day, arrays and objects serve different use cases with map/reduce, and this article tries to acknowledge these differences. Now we'll start looking at some techniques Lo-Dash programmers employ when working with objects and mapping them to collections. There are a number of factors to consider, such as the keys within an object and calling methods on objects. We'll take a look at the relationship between key-value pairs and how they can be used in a mapping context.

Working with keys

We can use the keys of a given object in interesting ways to map the object to a new collection. For example, we can use the keys() function to extract the keys of an object and map them to values other than the property value, as shown in the following example:

var object = {
  first: 'Ronald',
  last: 'Walters',
  employer: 'Packt'
};

_.map(_.sortBy(_.keys(object)), function(item) {
  return object[item];
});
// → [ "Packt", "Ronald", "Walters" ]

The preceding code builds an array of property values from object. It does so using map(), which is actually mapping the keys() array of object. These keys are sorted using sortBy(). So Packt is the first element of the resulting array because employer is alphabetically first among the object keys.

Sometimes, it's desirable to perform lookups in other objects and map those values to a target object. For example, not all APIs return everything you need for a given page, packaged in a neat little object. You have to do joins and build the data you need. This is shown in the following code:

var users = {},
    preferences = {};

_.each(_.range(100), function() {
  var id = _.uniqueId('user-');
  users[id] = { type: 'user' };
  preferences[id] = { emailme: !!(_.random()) };
});

_.map(users, function(value, key) {
  return _.extend({ id: key }, preferences[key]);
});
// →
// [
//   { id: "user-1", emailme: true },
//   { id: "user-2", emailme: false },
//   ...
// ]

This example builds two objects, users and preferences. In both objects, the keys are user identifiers that we're generating with uniqueId(). The user objects just have a dummy attribute in them, while the preferences objects have an emailme attribute, set to a random Boolean value. Now let's say we need quick access to this preference for all users in the users object. As you can see, it's straightforward to implement using map() on the users object. The callback function returns a new object with the user ID. We extend this object with the preference for that particular user by looking it up by key.

Calling methods

Objects aren't limited to storing primitive strings and numbers. Properties can store functions as their values, or methods, as they're commonly called.
However, depending on the context where you're using your object, methods aren't always callable, especially if you have little or no control over the context where your objects are used. One technique that's helpful in situations such as these is mapping the result of calling these methods and using that result in the context in question. Let's see how this can be done with the following code:

var object = {
  first: 'Roxanne',
  last: 'Elliot',
  name: function() {
    return this.first + ' ' + this.last;
  },
  age: 38,
  retirement: 65,
  working: function() {
    return this.retirement - this.age;
  }
};

_.map(object, function(value, key) {
  var item = {};
  item[key] = _.isFunction(value) ? object[key]() : value;
  return item;
});
// →
// [
//   { first: "Roxanne" },
//   { last: "Elliot" },
//   { name: "Roxanne Elliot" },
//   { age: 38 },
//   { retirement: 65 },
//   { working: 27 }
// ]

_.map(object, function(value, key) {
  var item = {};
  item[key] = _.result(object, key);
  return item;
});
// →
// [
//   { first: "Roxanne" },
//   { last: "Elliot" },
//   { name: "Roxanne Elliot" },
//   { age: 38 },
//   { retirement: 65 },
//   { working: 27 }
// ]

Here, we have an object with both primitive property values and methods that use these properties. Now we'd like to map the results of calling those methods, and we'll experiment with two different approaches. The first approach uses the isFunction() function to determine whether the property value is callable or not. If it is, we call it and return that value. The second approach is a little easier to implement and achieves the same outcome. The result() function is applied to the object using the current key. It tests whether we're working with a function or not, so our code doesn't have to.

In the first approach to mapping method invocations, you might have noticed that we're calling the method using object[key]() instead of value(). The former retains the object variable as the context, but the latter loses the context, since it is invoked as a plain function without any object. So when you're writing mapping callbacks that call methods and you're not getting the expected results, make sure the method's context is intact.

Perhaps you have an object, but you're not sure which properties are methods. You can use functions() to figure this out and then map the results of calling each method to an array, as shown in the following code:

var object = {
  firstName: 'Fredrick',
  lastName: 'Townsend',
  first: function() {
    return this.firstName;
  },
  last: function() {
    return this.lastName;
  }
};

var methods = _.map(_.functions(object), function(item) {
  return [ _.bindKey(object, item) ];
});

_.invoke(methods, 0);
// → [ "Fredrick", "Townsend" ]

The object variable has two methods, first() and last(). Assuming we didn't know about these methods, we can find them using functions(). Here, we're building a methods array using map(). The input is an array containing the names of all the methods of the given object. The value we're returning is interesting: it's a single-value array, and you'll see why in a moment. The value of this array is a function built by passing the object and the name of the method to bindKey(). This function, when invoked, will always use object as its context. Lastly, we use invoke() to invoke each method in our methods array, building a new result array. Recall that our map() callback returned an array. This was a simple hack to make invoke() work, since it's a convenient way to call methods.
It generally expects a key as the second argument, but a numerical index works just as well, since both are looked up the same way.

Mapping key-value pairs

Just because you're working with an object doesn't mean it's ideal, or even necessary. That's what map() is for: mapping what you're given to what you need. For instance, the property values are sometimes all that matter for what you're doing, and you can dispense with the keys entirely. For that, we have the values() function, and we feed the values to map():

var object = {
  first: 'Lindsay',
  last: 'Castillo',
  age: 51
};

_.map(_.filter(_.values(object), _.isString), function(item) {
  return '<strong>' + item + '</strong>';
});
// → [ "<strong>Lindsay</strong>", "<strong>Castillo</strong>" ]

All we want from the object variable here is a list of property values that are strings, so that we can format them. In other words, the fact that the keys are first, last, and age is irrelevant. So first, we call values() to build an array of values. Next, we pass that array to filter(), removing anything that's not a string. We then pass the output of this to map(), where we're able to wrap each string in <strong> tags.

The opposite might also be true: the value is completely meaningless without its key. If that's the case, it may be fitting to map key-value pairs to a new collection, as shown in the following example:

function capitalize(s) {
  return s.charAt(0).toUpperCase() + s.slice(1);
}

function format(label, value) {
  return '<label>' + capitalize(label) + ':</label>' +
    '<strong>' + value + '</strong>';
}

var object = {
  first: 'Julian',
  last: 'Ramos',
  age: 43
};

_.map(_.pairs(object), function(pair) {
  return format.apply(undefined, pair);
});
// →
// [
//   "<label>First:</label><strong>Julian</strong>",
//   "<label>Last:</label><strong>Ramos</strong>",
//   "<label>Age:</label><strong>43</strong>"
// ]

We're passing the result of running our object through the pairs() function to map(). The argument passed to our map callback function is an array, the first element being the key and the second being the value. It so happens that the format() function expects a key and a value to format the given string, so we're able to use format.apply() to call the function, passing it the pair array. This approach is just a matter of taste; there's no need to call pairs() before map(), and we could just as easily have called format() directly. But sometimes this style is preferred, and the reasons, not least of which is the style of the programmer, are wide and varied.

Summary

This article introduced you to the map/reduce programming model and how Lo-Dash tools help realize it in your application. First, we examined mapping collections, including how to choose which properties get included and how to perform calculations. We then moved on to mapping objects. Keys can play an important role in how objects get mapped to new objects and collections. There are also methods and functions to consider when mapping.

Resources for Article:

Further resources on this subject:
The First Step [article]
Recursive directives [article]
AngularJS Project [article]
The First Step
Packt
04 Feb 2015
16 min read
The First Step

In this article by Tim Chaplin, author of the book AngularJS Test-driven Development, we take an initial introductory walk-through of how to use TDD to build an AngularJS application with a controller, model, and scope. You will begin the TDD journey and see the fundamentals in action. Now, we will switch gears and dive into TDD with AngularJS. This article, the first step of that journey, focuses on the creation of social media comments, on the testing associated with controllers, and on the use of Angular mocks to stand in for AngularJS components in a test.

(For more resources related to this topic, see here.)

Preparing the application's specification

Create an application to enter comments. The specification of the application is as follows:

Given I am posting a new comment, when I click on the submit button, the comment should be added to the comment list
Given a comment, when I click on the like button, the number of likes for the comment should be increased

Now that we have the specification of the application, we can create our development to-do list. It won't be easy to create an entire to-do list for the whole application, but based on the user specifications, we have an idea of what needs to be developed. Here is a rough sketch of the UI:

[Figure: rough sketch of the UI, with a comment input box, a Submit button, and a list of comments with like buttons]

Hold yourself back from jumping into the implementation and thinking about how you will use a controller with a service, ng-repeat, and so on. Resist, resist, resist! Although you can think of how this will be developed in the future, it is never clear until you delve into the code, and that is where you start getting into trouble. TDD and its principles are here to help you get your mind and focus in the right place.

Setting up the project

I will provide a list in the following section of the initial actions required to get the project set up.

Setting up the directory

The following instructions are specific to setting up the project directory:

1. Create a new project directory.
2. Get angular into the project using Bower: bower install angular
3. Get angular-mocks for testing using Bower: bower install angular-mocks
4. Initialize the application's source directory: mkdir app
5. Initialize the test directory: mkdir spec
6. Initialize the unit test directory: mkdir spec/unit
7. Initialize the end-to-end test directory: mkdir spec/e2e

Once the initialization is complete, your folder structure should look as follows:

[Figure: project folder structure, with app, bower_components, and spec/unit and spec/e2e directories]

Setting up Protractor

In this article, we will just discuss the steps at a higher level:

1. Install Protractor in the project: $ npm install protractor
2. Update Selenium WebDriver: $ ./node_modules/protractor/bin/webdriver-manager update
3. Make sure that Selenium has been installed.
4. Copy the example chromeOnly configuration into the root of the project: $ cp ./node_modules/protractor/example/chromeOnlyConf.js .
5. Configure Protractor as follows:
   Open the Protractor configuration.
   Edit the Selenium WebDriver location to reflect the relative directory to chromeDriver: chromeDriver: './node_modules/protractor/selenium/chromedriver',
   Edit the specs section to reflect the test directory: specs: ['spec/e2e/**/*.js'],
   Set the default base URL: baseUrl: 'http://localhost:8080/',

Excellent! Protractor should now be installed and set up.
Here is the complete configuration:

exports.config = {
  chromeOnly: true,
  chromeDriver: './node_modules/protractor/selenium/chromedriver',
  capabilities: {
    'browserName': 'chrome'
  },
  baseUrl: 'http://localhost:8080/',
  specs: ['spec/e2e/**/*.js'],
};

Setting up Karma

Here is a brief summary of the steps required to install Karma and get it set up in your new project:

1. Install Karma using the following command: npm install karma -g
2. Initialize the Karma configuration: karma init
3. Update the Karma configuration:

files: [
  'bower_components/angular/angular.js',
  'bower_components/angular-mocks/angular-mocks.js',
  'spec/unit/**/*.js'
],

Now that we have set up the project directory and initialized Protractor and Karma, we can dive into the code. Here is the complete karma.conf.js file:

module.exports = function(config) {
  config.set({
    basePath: '',
    frameworks: ['jasmine'],
    files: [
      'bower_components/angular/angular.js',
      'bower_components/angular-mocks/angular-mocks.js',
      'spec/unit/**/*.js'
    ],
    reporters: ['progress'],
    port: 9876,
    autoWatch: true,
    browsers: ['Chrome'],
    singleRun: false
  });
};

Setting up http-server

A web server will be used to host the application. As this is just for local development, you can use http-server. The http-server module is a simple HTTP server that serves static content. It is available as an npm module. To install http-server in your project, type the following command:

$ npm install http-server

Once http-server is installed, you can run the server by providing it with the root directory of the web page. Here is an example:

$ ./node_modules/http-server/bin/http-server

Now that you have http-server installed, you can move on to the next step.

Top-down or bottom-up approach

From our development perspective, we have to determine where to start. The approaches that we will discuss in this article are as follows:

The bottom-up approach: With this approach, we think about the different components we will need (controller, service, module, and so on), then pick the most logical one and start coding.
The top-down approach: With this approach, we work from the user scenario and UI. We then create the application around the components in the application.

There are merits to both approaches, and the choice can be based on your team, existing components, requirements, and so on. In most cases, it is best to make the choice based on the least resistance. In this article, the specification lends itself to a top-down approach: everything is laid out for us by the user scenario, which allows you to organically build the application around the UI.

Testing a controller

Before getting into the specification and the mind-set of the feature being delivered, it is important to see the fundamentals of testing a controller. An AngularJS controller is a key component used in most applications.

A simple controller test setup

When testing a controller, tests are centered on the controller's scope. The tests confirm either the objects or the methods in the scope. Angular mocks provide inject, which finds a particular reference and returns it for you to use. When inject is used for the controller, the controller's scope can be assigned to an outer reference for the entire test to use.
Here is an example of what this would look like:

describe('',function(){
  var scope = {};
  beforeEach(function(){
    module('anyModule');
    inject(function($controller){
      $controller('AnyController',{$scope:scope});
    });
  });
});

In the preceding case, the test's scope object is assigned to the actual scope of the controller within the inject function. The scope object can now be used throughout the test, and is also reinitialized before each test.

Initializing the scope

In the preceding example, scope is initialized to an object {}. This is not the best approach; just as on a page, a controller might be nested within another controller. This will cause inheritance of a parent scope, as follows:

<body ng-app='anyModule'>
  <div ng-controller='ParentController'>
    <div ng-controller='ChildController'>
    </div>
  </div>
</body>

As seen in the preceding code, we have a hierarchy of scopes that the ChildController function has access to. In order to test this, we have to initialize the scope object properly in the inject function. Here is how the preceding scope hierarchy can be recreated:

inject(function($controller,$rootScope){
  var parentScope = $rootScope.$new();
  $controller('ParentController',{$scope:parentScope});
  var childScope = parentScope.$new();
  $controller('AnyController',{$scope: childScope});
});

There are two main things that the preceding code does:

The $rootScope scope is injected into the test. The $rootScope scope is the highest level of scope that exists.
Each level of scope is created with the $new() method. This method creates the child scope.

In this article, we will use the simplified version and initialize the scope to an empty object; however, it is important to understand how to create the scope hierarchy when required.

Bring on the comments

Now that the setup and approach have been decided, we can start our first test. From a testing point of view, as we will be using a top-down approach, we will write our Protractor tests first and then build the application. We will follow the same TDD life cycle we have already reviewed: test first, make it run, and make it better.

Test first

The scenario given is in a well-specified format already and fits our Protractor testing template:

describe('',function(){
  beforeEach(function(){
  });
  it('',function(){
  });
});

Placing the scenario in the template, we get the following code:

describe('Given I am posting a new comment',function(){
  describe('When I push the submit button',function(){
    beforeEach(function(){
    });
    it('Should then add the comment',function(){
    });
  });
});

Following the 3 A's (Assemble, Act, Assert), we will fit the user scenario into the template.

Assemble

The browser will need to point to the first page of the application. As the base URL has already been defined, we can add the following to the test:

beforeEach(function(){
  browser.get('/');
});

Now that the test is prepared, we can move on to the next step, Act.

Act

The next thing we need to do, based on the user specification, is add an actual comment. The easiest thing is to just put some text into an input box. The test for this, again without knowing what the element will be called or what it will do, is written based on what it should be. Here is the code to add the comment for the application:

beforeEach(function(){
  ...
  var commentInput = $('input');
  commentInput.sendKeys('a comment');
});

The last step before the assertion is to push the Submit button. This can be easily achieved in Protractor using the click function.
Even though we don't have a page yet, or any attributes, we can still name the button that will be created:

beforeEach(function(){
  ...
  var submitButton = element.all(by.buttonText('Submit')).click();
});

Finally, we will hit the crux of the test and assert the user's expectations.

Assert

The user expectation is that once the Submit button is clicked, the comment is added. This is a little ambiguous, but we can determine that somehow the user needs to be notified that the comment was added. The simplest approach is to display all comments on the page. In AngularJS, the easiest way to do this is to add an ng-repeat object that displays all comments. To test this, we will add the following:

it('Should then add the comment',function(){
  var comment = element(by.repeater('comment in comments')).first();
  expect(comment.getText()).toBe('a comment');
});

Now, the test has been constructed and meets the user specifications. It is small and concise. Here is the completed test:

describe('Given I am posting a new comment',function(){
  describe('When I push the submit button',function(){
    beforeEach(function(){
      //Assemble
      browser.get('/');
      var commentInput = $('input');
      commentInput.sendKeys('a comment');

      //Act
      var submitButton = element.all(by.buttonText('Submit')).click();
    });

    //Assert
    it('Should then add the comment',function(){
      var comment = element(by.repeater('comment in comments')).first();
      expect(comment.getText()).toBe('a comment');
    });
  });
});

Make it run

Based on the errors and output of the test, we will build our application as we go. The first step to make the code run is to identify the errors. Before starting off the site, let's create a bare bones index.html page:

<!DOCTYPE html>
<html>
<head>
  <title></title>
</head>
<body>
</body>
</html>

Already anticipating the first error, add AngularJS as a dependency in the page:

  <script type='text/javascript' src='bower_components/angular/angular.js'></script>
</body>

Now, start the web server using the following command:

$ ./node_modules/http-server/bin/http-server -p 8080

Run Protractor to see the first error:

$ ./node_modules/.bin/protractor chromeOnlyConf.js

Our first error states that AngularJS could not be found:

Error: Angular could not be found on the page http://localhost:8080/ : angular never provided resumeBootstrap

This is because we need to add ng-app to the page. Let's create a module and add it to the page. The complete HTML page now looks as follows:

<!DOCTYPE html>
<html>
<head>
  <title></title>
</head>
<body>
  <script src="bower_components/angular/angular.js"></script>
</body>
</html>

Adding the module

The first component that you need to define is an ng-app attribute in the index.html page. Use the following steps to add the module:

1. Add ng-app as an attribute to the body tag:

<body ng-app='comments'>

2. Now, we can go ahead and create a simple comments module and add it to a file named comments.js:

angular.module('comments',[]);

3. Add this new file to index.html:

<script src='app/comments.js'></script>

4. Rerun the Protractor test to get the next error:

$ Error: No element found using locator: By.cssSelector('input')

The test couldn't find our input locator. You need to add the input to the page.
Adding the input

Here are the steps you need to follow to add the input to the page:

1. All we have to do is add a simple input tag to the page:

<input type='text' />

2. Run the test and see what the new output is:

$ Error: No element found using locator: by.buttonText('Submit')

3. Just like the previous error, we need to add a button with the appropriate text:

<button type='button'>Submit</button>

4. Run the test again, and the next error is as follows:

$ Error: No element found using locator: by.repeater('comment in comments')

This appears to be from our expectation that a submitted comment will be available on the page through ng-repeat. To add this to the page, we will use a controller to provide the data for the repeater.

Controller

As we mentioned in the preceding section, the error is because there is no comments object. In order to add the comments object, we will use a controller that has an array of comments in its scope. Use the following steps to add a comments object to the scope:

1. Create a new file in the app directory named commentController.js:

angular.module('comments')
  .controller('CommentController',['$scope', function($scope){
    $scope.comments = [];
  }]);

2. Add it to the web page after the AngularJS script:

<script src='app/commentController.js'></script>

3. Now, we can add CommentController to the page:

<div ng-controller='CommentController'>

4. Then, add a repeater for the comments as follows:

<ul>
  <li ng-repeat='comment in comments'>{{comment}}</li>
</ul>

5. Run the Protractor test and let's see where we are:

$ Error: No element found using locator: by.repeater('comment in comments')

Hmmm! We get the same error. Let's look at the actual page that gets rendered and see what's going on. In Chrome, go to http://localhost:8080 and open the console to see the page source (Ctrl + Shift + J). You should see something like what's shown in the following screenshot:

[Screenshot: the rendered page source, with the ng-repeat element rendered as an HTML comment]

Notice that the repeater and controller are both there; however, the repeater is commented out. Since Protractor only looks at visible elements, it won't find the repeater. Great! Now we know why the repeater isn't visible, but we have to fix it. In order for a comment to show up, it has to exist in the controller's comments scope. The smallest change is to add something to the array to initialize it, as shown in the following code snippet:

.controller('CommentController',['$scope',function($scope){
  $scope.comments = ['anything'];
}]);

Now run the test and we get the following:

$ Expected 'anything' to be 'a comment'.

Wow! We finally tackled all the errors and reached the expectation. Here is what the HTML code looks like so far:

<!DOCTYPE html>
<html>
<head>
  <title></title>
</head>
<body ng-app='comments'>
  <div ng-controller='CommentController'>
    <input type='text' />
    <ul>
      <li ng-repeat='comment in comments'>
        {{comment}}
      </li>
    </ul>
  </div>
  <script src='bower_components/angular/angular.js'></script>
  <script src='app/comments.js'></script>
  <script src='app/commentController.js'></script>
</body>
</html>

The comments.js module looks as follows:

angular.module('comments',[]);

Here is commentController.js:

angular.module('comments')
  .controller('CommentController',['$scope', function($scope){
    $scope.comments = ['anything'];
  }]);

Make it pass

With TDD, you want to add the smallest possible component to make the test pass. Since we have, for the moment, hardcoded the comments array to be initialized with 'anything', change 'anything' to 'a comment'; this should make the test pass.
Here is the code to make the test pass:

angular.module('comments')
  .controller('CommentController',['$scope', function($scope){
    $scope.comments = ['a comment'];
  }]);
…

Run the test, and bam! We get a passing test:

$ 1 test, 1 assertion, 0 failures

Wait a second! We still have some work to do. Although we got the test to pass, it is not done. We added some hacks just to get the test passing. The two things that stand out are:

Clicking on the Submit button, which really doesn't have any functionality
The hardcoded initialization of the expected value for a comment

The preceding changes are critical steps we need to perform before we move forward. They will be tackled in the next phase of the TDD life cycle, that is, make it better (refactor).

Summary

In this article, we walked through the TDD techniques of using Protractor and Karma together. As the application was developed, you were able to see where, why, and how to apply the TDD testing tools and techniques. With the bottom-up approach, the specifications are used to build unit tests, and the UI layer is then built on top of that. In this article, a top-down approach was shown to focus on the user's behavior. The top-down approach tests the UI and then filters the development through the other layers.

Resources for Article:

Further resources on this subject:
AngularJS Project [Article]
Role of AngularJS [Article]
Creating Our First Animation in AngularJS [Article]
ServiceStack applications
Packt
21 Jan 2015
9 min read
In this article by Kyle Hodgson and Darren Reid, authors of the book ServiceStack 4 Cookbook, we'll learn about unit testing ServiceStack applications.

(For more resources related to this topic, see here.)

Unit testing ServiceStack applications

In this recipe, we'll focus on simple techniques to test individual units of code within a ServiceStack application. We will use the ServiceStack testing helper BasicAppHost as an application container, as it provides us with some useful helpers to inject a test double for our database. Our goal is small, fast tests that each exercise one unit of code within our application.

Getting ready

We are going to need some services to test, so we are going to use the PlacesToVisit application.

How to do it…

1. Create a new testing project. It's a common convention to name the testing project <ProjectName>.Tests, so in our case, we'll call it PlacesToVisit.Tests.

2. Create a class within this project to contain the tests we'll write. Let's name it PlaceServiceTests, as the tests within it will focus on the PlaceService class. Annotate this class with the [TestFixture] attribute, as follows:

[TestFixture]
public class PlaceServiceTests
{

3. We'll want one method that runs whenever this set of tests begins, to set up the environment, and another that runs afterwards to tear the environment down. These will be annotated with the NUnit attributes TestFixtureSetUp and TestFixtureTearDown, respectively. Let's name them FixtureInit and FixtureTearDown.

4. In the FixtureInit method, we will use BasicAppHost to initialize our appHost test container. We'll make it a field so that we can easily access it in each test, as follows:

ServiceStackHost appHost;

[TestFixtureSetUp]
public void FixtureInit()
{
  appHost = new BasicAppHost(typeof(PlaceService).Assembly)
  {
    ConfigureContainer = container =>
    {
      container.Register<IDbConnectionFactory>(c =>
        new OrmLiteConnectionFactory(
          ":memory:", SqliteDialect.Provider));
      container.RegisterAutoWiredAs<PlacesToVisitRepository,
        IPlacesToVisitRepository>();
    }
  }.Init();
}

The ConfigureContainer property on BasicAppHost allows us to pass in a function that we want AppHost to run inside of the Configure method. In this case, you can see that we're registering OrmLiteConnectionFactory with an in-memory SQLite instance. This allows us to test code that uses a database without that database actually running. This useful technique could be considered a classic unit testing approach; the mockist approach might have been to mock the database instead.

5. The FixtureTearDown method will dispose of appHost, as you might imagine. This is how the code will look:

[TestFixtureTearDown]
public void FixtureTearDown()
{
  appHost.Dispose();
}

6. We haven't created any data in our in-memory database yet. We'll want to ensure the data is the same prior to each test, so our TestInit method is a good place to do that. It will be run once before each and every test, as we'll annotate it with the [SetUp] attribute, as follows:

[SetUp]
public void TestInit()
{
  using (var db = appHost.Container
      .Resolve<IDbConnectionFactory>().Open())
  {
    db.DropAndCreateTable<Place>();
    db.InsertAll(PlaceSeedData.GetSeedPlaces());
  }
}

As our tests all focus on PlaceService, we'll make sure to create Place data.

7. Next, we'll begin writing tests. Let's start with one that asserts that we can create new places.
The first step is to create the new method, name it appropriately, and annotate it with the [Test] attribute, as follows:

[Test]
public void ShouldAddNewPlaces()
{

Next, we'll create an instance of PlaceService that we can test against. We'll use the Funq IoC TryResolve method for this:

var placeService = appHost.TryResolve<PlaceService>();

We'll want to create a new place, then query the database later to see whether the new one was added. So, it's useful to start by getting a count of how many places there are based on just the seed data. Here's how you can get the count based on the seed data:

var startingCount = placeService
    .Get(new AllPlacesToVisitRequest())
    .Places
    .Count;

Since we're testing the ability to handle a CreatePlaceToVisit request, we'll need a test object that we can send the service. Let's create one and then go ahead and post it:

var melbourne = new CreatePlaceToVisit
{
    Name = "Melbourne",
    Description = "A nice city to holiday"
};

placeService.Post(melbourne);

Having done that, we can get the updated count and then assert that there is one more item in the database than there was before:

var newCount = placeService
    .Get(new AllPlacesToVisitRequest())
    .Places
    .Count;
Assert.That(newCount == startingCount + 1);

Next, let's fetch the new record that was created and make an assertion that it's the one we want:

var newPlace = placeService.Get(new PlaceToVisitRequest
{
    Id = startingCount + 1
});
Assert.That(newPlace.Place.Name == melbourne.Name);
}

With this in place, if we run the test, we'll expect it to pass both assertions. This proves that we can add new places via PlaceService registered with Funq, and that when we do that, we can go and retrieve them later as expected.

We can also build a similar test that asserts our ability to update an existing place. Adding the code is simple, following the pattern we set out previously. We'll start with the arrange section of the test, creating the variables and objects we'll need:

[Test]
public void ShouldUpdateExistingPlaces()
{
    var placeService = appHost.TryResolve<PlaceService>();
    var startingPlaces = placeService
        .Get(new AllPlacesToVisitRequest())
        .Places;
    var startingCount = startingPlaces.Count;

    var canberra = startingPlaces
        .First(c => c.Name.Equals("Canberra"));

    const string canberrasNewName = "Canberra, ACT";
    canberra.Name = canberrasNewName;

Once they're in place, we'll act. In this case, the Put method on placeService has the responsibility for update operations:

    placeService.Put(canberra.ConvertTo<UpdatePlaceToVisit>());

Think of the ConvertTo helper method from ServiceStack as an auto-mapper, which converts our Place object for us. Now that we've updated the record for Canberra, we'll proceed to the assert section of the test, as follows:

    var updatedPlaces = placeService
        .Get(new AllPlacesToVisitRequest())
        .Places;
    var updatedCanberra = updatedPlaces
        .First(p => p.Id.Equals(canberra.Id));
    var updatedCount = updatedPlaces.Count;

    Assert.That(updatedCanberra.Name == canberrasNewName);
    Assert.That(updatedCount == startingCount);
}

How it works…

These unit tests use a few different patterns that help us write concise tests, including the development of our own test helpers and the use of helpers from the ServiceStack.Testing namespace. For instance, BasicAppHost allows us to set up an application host instance without actually hosting a web service.
It also lets us provide a custom ConfigureContainer action to mock any of our dependencies for our services and seed our testing data, as follows:

appHost = new BasicAppHost(typeof(PlaceService).Assembly)
{
    ConfigureContainer = container =>
    {
        container.Register<IDbConnectionFactory>(c =>
            new OrmLiteConnectionFactory(
                ":memory:", SqliteDialect.Provider));
        container.RegisterAutoWiredAs<PlacesToVisitRepository,
            IPlacesToVisitRepository>();
    }
}.Init();

To test any ServiceStack service, you can resolve it through the application host via TryResolve<ServiceType>(). This will have the IoC container instantiate an object of the type requested. This gives us the ability to test the Get method independent of other aspects of our web service, such as validation. This is shown in the following code:

var placeService = appHost.TryResolve<PlaceService>();

In this example, we are using an in-memory SQLite instance to mock our use of OrmLite for data access (which IPlacesToVisitRepository will also use), and we are seeding our test data in the ConfigureContainer hook of BasicAppHost. The use of both in-memory SQLite and BasicAppHost provides fast unit tests that let us iterate our application services very quickly while ensuring we are not breaking any functionality specifically associated with this component. In the example provided, we are running three tests in less than 100 milliseconds.

If you are using the full version of Visual Studio, extensions such as NCrunch can allow you to regularly run your unit tests while you make changes to your code. The performance of ServiceStack components, combined with such extensions, results in a smooth developer experience that boosts both productivity and code quality.

There's more…

In the examples in this article, we wrote out tests that would pass, ran them, and saw that they passed (no surprise). While this makes explaining things a bit simpler, it's not really a best practice. You generally want to make sure your tests fail when presented with wrong data at some point. The authors have seen many cases where subtle bugs in test code were causing a test to pass that should not have passed. One best practice is to write tests so that they fail first and then make them pass—this guarantees that the test can actually detect the defect you're guarding against. This is commonly referred to as the red/green/refactor pattern; a short sketch of a fail-first test appears after the resource list below.

Summary

In this article, we covered some techniques to unit test ServiceStack applications.

Resources for Article:

Further resources on this subject:
Building a Web Application with PHP and MariaDB – Introduction to caching [article]
Web API and Client Integration [article]
WebSockets in Wildfly [article]
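As promised above, here is a minimal sketch of a fail-first test, reusing the fixture from this recipe. The Delete method and the DeletePlaceToVisit request are hypothetical names used only for illustration; the test fails (red) until such a feature is written (green), after which the code can be cleaned up (refactor):

[Test]
public void ShouldDeleteExistingPlaces()
{
    var placeService = appHost.TryResolve<PlaceService>();
    var startingCount = placeService
        .Get(new AllPlacesToVisitRequest())
        .Places
        .Count;

    // Red: PlaceService has no Delete method yet, so this test fails
    // first, proving it can detect the missing behavior.
    placeService.Delete(new DeletePlaceToVisit { Id = 1 });

    var newCount = placeService
        .Get(new AllPlacesToVisitRequest())
        .Places
        .Count;
    Assert.That(newCount == startingCount - 1);
}

Watching this test fail before making it pass gives you confidence that a regression in the delete behavior would actually be caught later.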

Creating a Photo-sharing Application

Packt
16 Jan 2015
34 min read
In this article by Rob Foster, the author of CodeIgniter Web Application Blueprints, we will create a photo-sharing application. There are quite a few image-sharing websites around at the moment. They all share roughly the same structure: the user uploads an image and that image can be shared, allowing others to view that image. Perhaps limits or constraints are placed on the viewing of an image, perhaps the image only remains viewable for a set period of time, or within set dates, but the general structure is the same. And I'm happy to announce that this project is exactly the same.

We'll create an application allowing users to share pictures; these pictures are accessible from a unique URL. To make this app, we will create two controllers: one to process image uploading and one to process the viewing and displaying of images stored. We'll create a language file to store the text, allowing you to have support for multiple languages should it be needed. We'll create all the necessary view files and a model to interface with the database.

In this article, we will cover:

Design and wireframes
Creating the database
Creating the models
Creating the views
Creating the controllers
Putting it all together

So without further ado, let's get on with it. (For more resources related to this topic, see here.)

Design and wireframes

As always, before we start building, we should take a look at what we plan to build. First, a brief description of our intent: we plan to build an app to allow the user to upload an image. That image will be stored in a folder with a unique name. A URL will also be generated containing a unique code, and the URL and code will be assigned to that image. The image can be accessed via that URL. The idea of using a unique URL to access that image is so that we can control access to that image, such as allowing an image to be viewed only a set number of times, or for a certain period of time only. Anyway, to get a better idea of what's happening, let's take a look at the following site map:

So that's the site map. The first thing to notice is how simple the site is. There are only three main areas to this project. Let's go over each item and get a brief idea of what they do:

create: Imagine this as the start point. The user will be shown a simple form allowing them to upload an image. Once the user presses the Upload button, they are directed to do_upload.
do_upload: The uploaded image is validated for size and file type. If it passes, then a unique eight-character string is generated. This string is then used as the name of a folder we will make. This folder is present in the main upload folder, and the uploaded image is saved in it. The image details (image name, folder name, and so on) are then passed to the database model, where another unique code is generated for the image URL. This unique code, image name, and folder name are then saved to the database. The user is then presented with a message informing them that their image has been uploaded and that a URL has been created. The user is also presented with the image they have uploaded.
go: This will take a URL provided by someone typing into a browser's address bar, or an img src tag, or some other method. The go item will look at the unique code in the URL, query the database to see if that code exists, and if so, fetch the folder name and image name and deliver the image back to the method that called it.
Now that we have a fairly good idea of the structure and form of the site, let's take a look at the wireframes of each page.

The create item

The following screenshot shows a wireframe for the create item discussed in the previous section. The user is shown a simple form allowing them to upload an image.

The do_upload item

The following screenshot shows a wireframe of the do_upload item discussed in the previous section. The user is shown the image they have uploaded and the URL that will direct other users to that image.

The go item

The following screenshot shows a wireframe of the go item described in the previous section. The go controller takes the unique code in a URL, attempts to find it in the database table images, and if found, supplies the image associated with it. Only the image is supplied, not the actual HTML markup.

File overview

This is a relatively small project, and all in all we're only going to create seven files, which are as follows:

/path/to/codeigniter/application/models/image_model.php: This provides read/write access to the images database table. This model also takes the upload information and unique folder name (which we store the uploaded image in) from the create controller and stores this to the database.
/path/to/codeigniter/application/views/create/create.php: This provides us with an interface to display a form allowing the user to upload a file. This also displays any error messages to the user, such as wrong file type, file size too big, and so on.
/path/to/codeigniter/application/views/create/result.php: This displays the image to the user after it has been successfully uploaded, as well as the URL required to view that image.
/path/to/codeigniter/application/views/nav/top_nav.php: This provides a navigation bar at the top of the page.
/path/to/codeigniter/application/controllers/create.php: This performs validation checks on the image uploaded by the user, creates a uniquely named folder to store the uploaded image, and passes this information to the model.
/path/to/codeigniter/application/controllers/go.php: This performs validation checks on the URL input by the user, looks for the unique code in the URL, and attempts to find this record in the database. If it is found, then it will display the image stored on disk.
/path/to/codeigniter/application/language/english/en_admin_lang.php: This provides language support for the application.

The file structure of the preceding seven files is as follows:

application/
├── controllers/
│   ├── create.php
│   ├── go.php
├── models/
│   ├── image_model.php
├── views/create/
│   ├── create.php
│   ├── result.php
├── views/nav/
│   ├── top_nav.php
├── language/english/
│   ├── en_admin_lang.php

Creating the database

First, we'll build the database. Copy the following MySQL code into your database:

CREATE DATABASE `imagesdb`;
USE `imagesdb`;

DROP TABLE IF EXISTS `images`;
CREATE TABLE `images` (
  `img_id` int(11) NOT NULL AUTO_INCREMENT,
  `img_url_code` varchar(10) NOT NULL,
  `img_url_created_at` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `img_image_name` varchar(255) NOT NULL,
  `img_dir_name` varchar(8) NOT NULL,
  PRIMARY KEY (`img_id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

Right, let's take a look at each item in the table and see what it means:

Table: images

img_id: This is the primary key.
img_url_code: This stores the unique code that we use to identify the image in the database.
img_url_created_at: This is the MySQL timestamp for the record.
img_image_name: This is the filename provided by the CodeIgniter upload functionality.
img_dir_name: This is the name of the directory we store the image in.

We'll also need to make amends to the config/database.php file, namely setting the database access details, username, password, and so on. Open the config/database.php file and find the following lines:

$db['default']['hostname'] = 'localhost';
$db['default']['username'] = 'your username';
$db['default']['password'] = 'your password';
$db['default']['database'] = 'imagesdb';

Edit the values in the preceding code, ensuring you substitute those values for the ones more specific to your setup and situation—so enter your username, password, and so on.

Adjusting the config.php and autoload.php files

We don't actually need to adjust the config.php file in this project as we're not really using sessions or anything like that, so we don't need an encryption key or database information. Just ensure that you are not autoloading the session in the config/autoload.php file or you will get an error, as we've not set any session variables in the config/config.php file.

Adjusting the routes.php file

We want to redirect the user to the create controller rather than the default CodeIgniter welcome controller. To do this, we will need to amend the default controller settings in the routes.php file to reflect this. The steps are as follows:

Open the config/routes.php file for editing and find the following lines (near the bottom of the file):

$route['default_controller'] = "welcome";
$route['404_override'] = '';

First, we need to change the default controller. Initially, in a CodeIgniter application, the default controller is set to welcome. However, we don't need that; instead, we want the default controller to be create, so find the following line:

$route['default_controller'] = "welcome";

Replace it with the following lines:

$route['default_controller'] = "create";
$route['404_override'] = '';

Then we need to add some rules to govern how we handle URLs coming in and form submissions. Leave a few blank lines underneath the preceding two lines of code (default controller and 404 override) and add the following three lines of code:

$route['create'] = "create/index";
$route['(:any)'] = "go/index";
$route['create/do_upload'] = "create/do_upload";

Creating the model

There is only one model in this project, image_model.php. It contains the functions that save an uploaded image's details to the database and fetch them back using the image's unique code. Create the /path/to/codeigniter/application/models/image_model.php file and add the following code to it:

<?php if ( ! defined('BASEPATH')) exit('No direct script access allowed');

class Image_model extends CI_Model {

    function __construct() {
        parent::__construct();
    }

    function save_image($data) {
        do {
            $img_url_code = random_string('alnum', 8);

            $this->db->where('img_url_code = ', $img_url_code);
            $this->db->from('images');
            $num = $this->db->count_all_results();
        } while ($num >= 1);

        $query = "INSERT INTO `images` (`img_url_code`, `img_image_name`, `img_dir_name`) VALUES (?,?,?) ";
        $result = $this->db->query($query, array($img_url_code, $data['image_name'], $data['img_dir_name']));

        if ($result) {
            return $img_url_code;
        } else {
            return false;
        }
    }

    function fetch_image($img_url_code) {
        $query = "SELECT * FROM `images` WHERE `img_url_code` = ? ";
        $result = $this->db->query($query, array($img_url_code));

        if ($result) {
            return $result;
        } else {
            return false;
        }
    }
}

There are two main functions in this model, which are as follows:

save_image(): This generates a unique code that is associated with the uploaded image and saves it, with the image name and folder name, to the database.
fetch_image(): This fetches an image's details from the database according to the unique code provided.

Okay, let's take save_image() first. The save_image() function accepts an array from the create controller containing image_name (from the upload process) and img_dir_name (this is the folder that the image is stored in). A unique code is generated using a do…while loop as shown here:

$img_url_code = random_string('alnum', 8);

First a string is created, eight characters in length, containing alphanumeric characters. The do…while loop checks to see if this code already exists in the database, generating a new code if it is already present. If it does not already exist, this code is used:

do {
    $img_url_code = random_string('alnum', 8);

    $this->db->where('img_url_code = ', $img_url_code);
    $this->db->from('images');
    $num = $this->db->count_all_results();
} while ($num >= 1);

This code and the contents of the $data array are then saved to the database using the following code:

$query = "INSERT INTO `images` (`img_url_code`, `img_image_name`, `img_dir_name`) VALUES (?,?,?) ";
$result = $this->db->query($query, array($img_url_code, $data['image_name'], $data['img_dir_name']));

The $img_url_code is returned if the INSERT operation was successful, and false if it failed. The code to achieve this is as follows:

if ($result) {
    return $img_url_code;
} else {
    return false;
}

Creating the views

There are only three views in this project, which are as follows:

/path/to/codeigniter/application/views/create/create.php: This displays a form to the user allowing them to upload an image.
/path/to/codeigniter/application/views/create/result.php: This displays a link that the user can use to forward other people to the image, as well as the image itself.
/path/to/codeigniter/application/views/nav/top_nav.php: This displays the top-level menu. In this project it's very simple, containing a project name and a link to go to the create controller.

So those are our views; as I said, there are only three of them as it's a simple project. Now, let's create each view file.
Create the /path/to/codeigniter/application/views/create/create.php file and add the following code to it:

<div class="page-header">
    <h1><?php echo $this->lang->line('system_system_name'); ?></h1>
</div>

<p><?php echo $this->lang->line('encode_instruction_1'); ?></p>

<?php echo validation_errors(); ?>

<?php if (isset($success) && $success == true) : ?>
    <div class="alert alert-success">
        <strong><?php echo $this->lang->line('common_form_elements_success_notifty'); ?></strong>
        <?php echo $this->lang->line('encode_upload_now_success'); ?>
    </div>
<?php endif ; ?>

<?php if (isset($fail) && $fail == true) : ?>
    <div class="alert alert-danger">
        <strong><?php echo $this->lang->line('common_form_elements_error_notifty'); ?></strong>
        <?php echo $this->lang->line('encode_upload_general_error'); ?>
        <?php echo $fail ; ?>
    </div>
<?php endif ; ?>

<?php echo form_open_multipart('create/do_upload');?>
<input type="file" name="userfile" size="20" />
<br />
<input type="submit" value="upload" />
<?php echo form_close() ; ?>
<br />
<?php if (isset($result) && $result == true) : ?>
    <div class="alert alert-info">
        <strong><?php echo $this->lang->line('encode_upload_url'); ?></strong>
        <?php echo anchor($result, $result) ; ?>
    </div>
<?php endif ; ?>

This view file can be thought of as the main view file; it is here that the user can upload their image. Error messages are displayed here too.

Create the /path/to/codeigniter/application/views/create/result.php file and add the following code to it:

<div class="page-header">
    <h1><?php echo $this->lang->line('system_system_name'); ?></h1>
</div>

<?php if (isset($result) && $result == true) : ?>
    <strong><?php echo $this->lang->line('encode_upload_url'); ?></strong>
    <?php echo anchor($result, $result) ; ?>
    <br />
    <img src="<?php echo base_url() . 'upload/' . $img_dir_name . '/' . $file_name ;?>" />
<?php endif ; ?>

This view will display the encoded image resource URL to the user (so they can copy and share it) and the actual image itself.

Create the /path/to/codeigniter/application/views/nav/top_nav.php file and add the following code to it:

<!-- Fixed navbar -->
<div class="navbar navbar-inverse navbar-fixed-top" role="navigation">
    <div class="container">
        <div class="navbar-header">
            <button type="button" class="navbar-toggle" data-toggle="collapse" data-target=".navbar-collapse">
                <span class="sr-only">Toggle navigation</span>
                <span class="icon-bar"></span>
                <span class="icon-bar"></span>
                <span class="icon-bar"></span>
            </button>
            <a class="navbar-brand" href="#"><?php echo $this->lang->line('system_system_name'); ?></a>
        </div>
        <div class="navbar-collapse collapse">
            <ul class="nav navbar-nav">
                <li class="active"><?php echo anchor('create', 'Create') ; ?></li>
            </ul>
        </div><!--/.nav-collapse -->
    </div>
</div>

<div class="container theme-showcase" role="main">

This view is quite basic but still serves an important role. It displays an option to return to the index() function of the create controller.

Creating the controllers

We're going to create two controllers in this project, which are as follows:

/path/to/codeigniter/application/controllers/create.php: This handles the creation of unique folders to store images and performs the upload of a file.
/path/to/codeigniter/application/controllers/go.php: This fetches the unique code from the database, and returns any image associated with that code.
These are the two controllers for this project; let's now go ahead and create them.

Create the /path/to/codeigniter/application/controllers/create.php file and add the following code to it:

<?php if (!defined('BASEPATH')) exit('No direct script access allowed');

class Create extends MY_Controller {

    function __construct() {
        parent::__construct();
        $this->load->helper(array('string'));
        $this->load->library('form_validation');
        $this->load->library('image_lib');
        $this->load->model('Image_model');
        $this->form_validation->set_error_delimiters('<div class="alert alert-danger">', '</div>');
    }

    public function index() {
        $page_data = array('fail' => false,
                           'success' => false);
        $this->load->view('common/header');
        $this->load->view('nav/top_nav');
        $this->load->view('create/create', $page_data);
        $this->load->view('common/footer');
    }

    public function do_upload() {
        $upload_dir = '/filesystem/path/to/upload/folder/';

        do {
            // Make code
            $code = random_string('alnum', 8);

            // Scan the upload dir for a subdirectory with the
            // same name as the code
            $dirs = scandir($upload_dir);

            // Look to see if there is already a
            // directory with the name which we
            // store in $code
            if (in_array($code, $dirs)) { // Yes there is
                $img_dir_name = false; // Set to false to begin again
            } else { // No there isn't
                $img_dir_name = $code; // This is a new name
            }
        } while ($img_dir_name == false);

        if (!mkdir($upload_dir.$img_dir_name)) {
            $page_data = array('fail' => $this->lang->line('encode_upload_mkdir_error'),
                               'success' => false);
            $this->load->view('common/header');
            $this->load->view('nav/top_nav');
            $this->load->view('create/create', $page_data);
            $this->load->view('common/footer');
        }

        $config['upload_path'] = $upload_dir.$img_dir_name;
        $config['allowed_types'] = 'gif|jpg|jpeg|png';
        $config['max_size'] = '10000';
        $config['max_width'] = '1024';
        $config['max_height'] = '768';

        $this->load->library('upload', $config);

        if ( ! $this->upload->do_upload()) {
            $page_data = array('fail' => $this->upload->display_errors(),
                               'success' => false);
            $this->load->view('common/header');
            $this->load->view('nav/top_nav');
            $this->load->view('create/create', $page_data);
            $this->load->view('common/footer');
        } else {
            $image_data = $this->upload->data();
            $page_data['result'] = $this->Image_model->save_image(
                array('image_name' => $image_data['file_name'],
                      'img_dir_name' => $img_dir_name));
            $page_data['file_name'] = $image_data['file_name'];
            $page_data['img_dir_name'] = $img_dir_name;

            if ($page_data['result'] == false) {
                // failure - display the form again with an error
                $page_data = array('fail' => $this->lang->line('encode_upload_general_error'));
                $this->load->view('common/header');
                $this->load->view('nav/top_nav');
                $this->load->view('create/create', $page_data);
                $this->load->view('common/footer');
            } else {
                // success - display image and link
                $this->load->view('common/header');
                $this->load->view('nav/top_nav');
                $this->load->view('create/result', $page_data);
                $this->load->view('common/footer');
            }
        }
    }
}

Let's start with the index() function. The index() function sets the fail and success elements of the $page_data array to false.
This will suppress any initial messages from being displayed to the user. The views are loaded, specifically the create/create.php view, which contains the image upload form's HTML markup.

Once the user submits the form in create/create.php, the form will be submitted to the do_upload() function of the create controller. It is this function that will perform the task of uploading the image to the server.

First off, do_upload() defines an initial location for the upload folder. This is stored in the $upload_dir variable. Next, we move into a do…while structure. It looks something like this:

do {
    // something
} while ('…a condition is not met');

So that means do something while a condition is not being met. Now with that in mind, think about our problem—we have to save the image being uploaded in a folder. That folder must have a unique name. So what we will do is generate a random string of eight alphanumeric characters and then look to see if a folder exists with that name. Keeping that in mind, let's look at the code in detail:

do {
    // Make code
    $code = random_string('alnum', 8);

    // Scan the upload dir for a subdirectory with the
    // same name as the code
    $dirs = scandir($upload_dir);

    // Look to see if there is already a
    // directory with the name which we
    // store in $code
    if (in_array($code, $dirs)) { // Yes there is
        $img_dir_name = false; // Set to false to begin again
    } else { // No there isn't
        $img_dir_name = $code; // This is a new name
    }
} while ($img_dir_name == false);

So we make a string of eight characters, containing only alphanumeric characters, using the following line of code:

$code = random_string('alnum', 8);

We then use the PHP function scandir() to look in $upload_dir. This will store all directory names in the $dirs variable, as follows:

$dirs = scandir($upload_dir);

We then use the PHP function in_array() to look for the value in $code in the list of directories from scandir(). If we don't find a match, then the value in $code must not be taken, so we'll go with that. If the value is found, then we set $img_dir_name to false, which is picked up by the final line of the do…while loop:

...
} while ($img_dir_name == false);

Anyway, now that we have our unique folder name, we'll attempt to create it. We use the PHP function mkdir(), passing to it $upload_dir concatenated with $img_dir_name. If mkdir() returns false, the form is displayed again along with the encode_upload_mkdir_error message set in the language file, as shown here:

if (!mkdir($upload_dir.$img_dir_name)) {
    $page_data = array('fail' => $this->lang->line('encode_upload_mkdir_error'),
                       'success' => false);
    $this->load->view('common/header');
    $this->load->view('nav/top_nav');
    $this->load->view('create/create', $page_data);
    $this->load->view('common/footer');
}

Once the folder has been made, we then set the configuration variables for the upload process, as follows:

$config['upload_path'] = $upload_dir.$img_dir_name;
$config['allowed_types'] = 'gif|jpg|jpeg|png';
$config['max_size'] = '10000';
$config['max_width'] = '1024';
$config['max_height'] = '768';

Here we are specifying that we only want to upload .gif, .jpg, .jpeg, and .png files. We also specify that an image cannot be above 10,000 KB in size (although you can set this to any value you wish—remember to adjust the upload_max_filesize and post_max_size PHP settings in your php.ini file if you want to have a really big file). We also set the maximum dimensions that an image can have.
As with the file size, you can adjust these as you wish. We then load the upload library, passing to it the configuration settings, as shown here:

$this->load->library('upload', $config);

Next we will attempt to do the upload. If unsuccessful, the CodeIgniter function $this->upload->do_upload() will return false. We will look for this and reload the upload page if it does return false. We will also pass the specific error as the reason why it failed. This error is stored in the fail item of the $page_data array. This can be done as follows:

if ( ! $this->upload->do_upload()) {
    $page_data = array('fail' => $this->upload->display_errors(),
                       'success' => false);
    $this->load->view('common/header');
    $this->load->view('nav/top_nav');
    $this->load->view('create/create', $page_data);
    $this->load->view('common/footer');
} else {
...

If, however, it did not fail, we grab the information generated by CodeIgniter from the upload. We'll store this in the $image_data array, as follows:

$image_data = $this->upload->data();

Then we try to store a record of the upload in the database. We call the save_image function of Image_model, passing to it file_name from the $image_data array, as well as $img_dir_name, as shown here:

$page_data['result'] = $this->Image_model->save_image(
    array('image_name' => $image_data['file_name'],
          'img_dir_name' => $img_dir_name));

We then test the return value of the save_image() function; if it is successful, then Image_model will return the unique URL code generated in the model. If it is unsuccessful, then Image_model will return the Boolean false. If false is returned, then the form is loaded with a general error. If successful, then the create/result.php view file is loaded. We pass to it the unique URL code (for the link the user needs), and the folder name and image name necessary to display the image correctly.

Create the /path/to/codeigniter/application/controllers/go.php file and add the following code to it:

<?php if (!defined('BASEPATH')) exit('No direct script access allowed');

class Go extends MY_Controller {

    function __construct() {
        parent::__construct();
        $this->load->helper('string');
    }

    public function index() {
        if (!$this->uri->segment(1)) {
            redirect (base_url());
        } else {
            $image_code = $this->uri->segment(1);
            $this->load->model('Image_model');
            $query = $this->Image_model->fetch_image($image_code);

            if ($query->num_rows() == 1) {
                foreach ($query->result() as $row) {
                    $img_image_name = $row->img_image_name;
                    $img_dir_name = $row->img_dir_name;
                }

                $url_address = base_url() . 'upload/' . $img_dir_name . '/' . $img_image_name;
                redirect (prep_url($url_address));
            } else {
                redirect('create');
            }
        }
    }
}

The go controller has only one main function, index(). It is called when a user clicks on a URL or a URL is called (perhaps as the src value of an HTML img tag). Here we grab the unique code generated and assigned to an image when it was uploaded in the create controller. This code is in the first segment of the URI. Usually it would occupy the third segment—with the first and second segments normally being used to specify the controller and controller function, respectively. However, we have changed this behavior using CodeIgniter routing. This is explained fully in the Adjusting the routes.php file section of this article.
Once we have the unique code, we pass it to the fetch_image() function of Image_model:

$image_code = $this->uri->segment(1);
$this->load->model('Image_model');
$query = $this->Image_model->fetch_image($image_code);

We test what is returned. We ask if the number of rows returned equals exactly 1. If not, we will then redirect to the create controller. Perhaps you may not want to do this. Perhaps you may want to do nothing if the number of rows returned does not equal 1. For example, if the image requested is in an HTML img tag, then if an image is not found, a redirect may send someone away from the site they're viewing to the upload page of this project—something you might not want to happen. If you want to remove this functionality, remove the else branch (the redirect('create') call) from the following code excerpt:

....
        $img_dir_name = $row->img_dir_name;
    }

    $url_address = base_url() . 'upload/' . $img_dir_name . '/' . $img_image_name;
    redirect (prep_url($url_address));
} else {
    redirect('create');
}
....

Anyway, if the returned value is exactly 1, then we'll loop over the returned database object and find img_image_name and img_dir_name, which we'll need to locate the image in the upload folder on the disk. This can be done as follows:

foreach ($query->result() as $row) {
    $img_image_name = $row->img_image_name;
    $img_dir_name = $row->img_dir_name;
}

We then build the address of the image file and redirect the browser to it, as follows:

$url_address = base_url() . 'upload/' . $img_dir_name . '/' . $img_image_name;
redirect (prep_url($url_address));

Creating the language file

We make use of the language file to serve text to users. In this way, you can enable multiple region/multiple language support. Create the /path/to/codeigniter/application/language/english/en_admin_lang.php file and add the following code to it:

<?php if (!defined('BASEPATH')) exit('No direct script access allowed');

// General
$lang['system_system_name'] = "Image Share";

// Upload
$lang['encode_instruction_1'] = "Upload your image to share it";
$lang['encode_upload_now'] = "Share Now";
$lang['encode_upload_now_success'] = "Your image was uploaded, you can share it with this URL";
$lang['encode_upload_url'] = "Hey look at this, here's your image:";
$lang['encode_upload_mkdir_error'] = "Cannot make temp folder";
$lang['encode_upload_general_error'] = "The Image cannot be saved at this time";

Putting it all together

Let's look at how the user uploads an image. The following is the sequence of events:

CodeIgniter looks in the routes.php config file and finds the following line:

$route['create'] = "create/index";

It directs the request to the create controller's index() function. The index() function loads the create/create.php view file that displays the upload form to the user.
The user clicks on the Choose file button, navigates to the image file they wish to upload, and selects it.
The user presses the Upload button and the form is submitted to the create controller's do_upload() function.
The do_upload() function creates a folder in the main upload directory to store the image in, then does the actual upload.
On a successful upload, do_upload() sends the details of the upload (the new folder name and image name) to the save_image() model function.
The save_image() function also creates a unique code and saves it in the images table along with the folder name and image name passed to it by the create controller.
The unique code generated during the database insert is then returned to the controller and passed to the result view, where it will form part of a success message to the user.

Now, let's see how an image is viewed (or fetched). The following is the sequence of events:

A URL with the syntax www.domain.com/226KgfYH comes into the application—either when someone clicks on a link or through some other call (<img src="">).
CodeIgniter looks in the routes.php config file and finds the following line:

$route['(:any)'] = "go/index";

As the incoming request does not match the other two routes, the preceding route is the one CodeIgniter applies to this request.
The go controller is called and the code 226KgfYH is passed to it as the first segment of the URI.
The go controller passes this to the fetch_image() function of the Image_model.php file. The fetch_image() function will attempt to find a matching record in the database. If found, it returns the folder name marking the saved location of the image, along with its filename.
The path to that image is then built, and CodeIgniter redirects the user to that image, that is, supplies that image resource to the user that requested it.

Summary

So here we have a basic image sharing application. It is capable of accepting a variety of images and assigning them to records in a database and unique folders in the filesystem. This is interesting as it leaves things open to you to improve on. For example, you can do the following:

You can add limits on views. As the image record is stored in the database, you could adapt the database. Adding two columns called img_count and img_count_limit, you could allow a user to set a limit for the number of views per image and stop providing that image when that limit is met. (A minimal sketch of this idea appears after the resource list below.)
You can limit views by date. Similar to the preceding point, but you could limit image views to set dates.
You can have different URLs for different dimensions. You could add functionality to make several dimensions of image based on the initial upload, offering several different URLs for different image dimensions.
You can report abuse. You could add an option allowing viewers of images to report unsavory images that might be uploaded.
You can have terms of service. If you are planning on offering this type of application as an actual web service that members of the public could use, then I strongly recommend you add a terms of service document, perhaps even require that people agree to terms before they upload an image. In those terms, you'll want to mention that in order for someone to use the service, they first have to agree that they do not upload and share any images that could be considered illegal. You should also mention that you'll cooperate with any court if information is requested of you. You really don't want to get into trouble for owning or running a web service that stores unpleasant images; as much as possible you want to make your limits of liability clear and emphasize that it is the uploader who has provided the images.

Resources for Article:

Further resources on this subject:
CodeIgniter MVC – The Power of Simplicity! [article]
Navigating Your Site using CodeIgniter 1.7: Part 1 [article]
Navigating Your Site using CodeIgniter 1.7: Part 2 [article]
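As promised above, here is a minimal sketch of the view-limit idea, assuming the images table has gained the img_count and img_count_limit columns mentioned in the summary. The increment_and_check_limit() method name and the zero-means-unlimited convention are illustrative assumptions, not part of the project as built:

// A hypothetical addition to image_model.php.
// Assumes `img_count` and `img_count_limit` columns exist in `images`.
function increment_and_check_limit($img_url_code) {
    // Record this view
    $query = "UPDATE `images` SET `img_count` = `img_count` + 1 WHERE `img_url_code` = ? ";
    $this->db->query($query, array($img_url_code));

    // Re-read the row to compare count against limit
    $query = "SELECT * FROM `images` WHERE `img_url_code` = ? ";
    $result = $this->db->query($query, array($img_url_code));

    foreach ($result->result() as $row) {
        // Treat a limit of 0 as "unlimited views"
        if ($row->img_count_limit > 0 && $row->img_count > $row->img_count_limit) {
            return false; // Limit reached, stop serving this image
        }
    }

    return true;
}

The go controller's index() function could then call this method before building $url_address, and redirect to create (or simply do nothing) whenever false is returned.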

WebSockets in Wildfly

Packt
30 Dec 2014
22 min read
In this article by Michał Ćmil and Michał Matłoka, the authors of Java EE 7 Development with WildFly, we will cover WebSockets and how they are one of the biggest additions in Java EE 7, and we will explore the new possibilities that they provide to a developer.

In our ticket booking applications, we already used a wide variety of approaches to inform the clients about events occurring on the server side. These include the following:

JSF polling
Java Messaging Service (JMS) messages
REST requests
Remote EJB requests

All of them, besides JMS, were based on the assumption that the client will be responsible for asking the server about the state of the application. In some cases, such as checking if someone else has not booked a ticket during our interaction with the application, this is a wasteful strategy; the server is in the position to inform clients when it is needed. What's more, it feels like the developer must hack the HTTP protocol to get a notification from a server to the client. This is a requirement that has to be implemented in most nontrivial web applications, and therefore, deserves a standardized solution that can be applied by the developers in multiple projects without much effort.

WebSockets are changing the game for developers. They replace the request-response paradigm, in which the client always initiates the communication, with a two-point bidirectional messaging system. After the initial connection, both sides can send independent messages to each other as long as the session is alive. This means that we can easily create web applications that will automatically refresh their state with up-to-date data from the server. You probably have already seen this kind of behavior in Google Docs or live broadcasts on news sites. Now we can achieve the same effect in a simpler and more efficient way than in earlier versions of Java Enterprise Edition. In this article, we will try to leverage these new, exciting features that come with WebSockets in Java EE 7 thanks to JSR 356 (https://jcp.org/en/jsr/detail?id=356) and HTML5.

In this article, you will learn the following topics:

How WebSockets work
How to create a WebSocket endpoint in Java EE 7
How to create an HTML5/AngularJS client that will accept push notifications from an application deployed on WildFly

(For more resources related to this topic, see here.)

An overview of WebSockets

A WebSocket session between the client and server is built upon a standard TCP connection. The WebSocket protocol has its own control frames (mainly to create and sustain the connection), codified by the Internet Engineering Task Force in RFC 6455 (http://tools.ietf.org/html/rfc6455), but peers are not obliged to use any specific format to exchange application data. You may use plaintext, XML, JSON, or anything else to transmit your data. As you probably remember, this is quite different from SOAP-based WebServices, which had bloated specifications of the exchange protocol. The same goes for RESTful architectures; we no longer have the predefined verb methods from HTTP (GET, PUT, POST, and DELETE), status codes, and the whole semantics of an HTTP request. This liberty means that WebSockets are pretty low level compared to the technologies that we used up to this point, but thanks to this, the communication overhead is minimal. The protocol is less verbose than SOAP or RESTful HTTP, which allows us to achieve higher performance. This, however, comes with a price.
We usually like to use the features of higher-level protocols (such as horizontal scaling and rich URL semantics), and with WebSockets, we would need to write them by hand. For standard CRUD-like operations, it would be easier to use a REST endpoint than create everything from scratch.

What do we get from WebSockets compared to standard HTTP communication? First of all, a direct connection between two peers. Normally, when you connect to a web server (which can, for instance, handle a REST endpoint), every subsequent call is a new TCP connection, and your machine is treated like it is a different one every time you make a request. You can, of course, simulate a stateful behavior (so that the server would recognize your machine between different requests) using cookies, and increase the performance by reusing the same connection in a short period of time for a specific client, but basically, it is a workaround to overcome the limitations of the HTTP protocol.

Once you establish a WebSocket connection between a server and client, you can use the same session (and underlying TCP connection) during the whole communication. Both sides are aware of it, and can send data independently in a full-duplex manner (both sides can send and receive data simultaneously). Using plain HTTP, there is no way for the server to spontaneously start sending data to the client without any request from its side. What's more, the server is aware of all of its connected WebSocket clients, and can even send data between them!

The current solutions that try to simulate real-time data delivery over the HTTP protocol can put a lot of stress on the web server. Polling (asking the server about updates), long polling (delaying the completion of a request to the moment when an update is ready), and streaming (a Comet-based solution with a constantly open HTTP response) are all ways to hack the protocol to do things that it wasn't designed for, and they have their own limitations. Thanks to the elimination of unnecessary checks, WebSockets can heavily reduce the number of HTTP requests that have to be handled by the web server. The updates are delivered to the user with a smaller latency because we only need one round-trip through the network to get the desired information (it is pushed by the server immediately).

All of these features make WebSockets a great addition to the Java EE platform, which fills the gaps needed to easily finish specific tasks, such as sending updates, notifications, and orchestrating multiple client interactions. Despite these advantages, WebSockets are not intended to replace REST or SOAP WebServices. They do not scale so well horizontally (they are hard to distribute because of their stateful nature), and they lack most of the features that are utilized in web applications. URL semantics, complex security, compression, and many other features are still better realized using other technologies.

How do WebSockets work

To initiate a WebSocket session, the client must send an HTTP request with an Upgrade: websocket header field. This informs the server that the peer client has asked the server to switch to the WebSocket protocol. You may notice that the same happens in WildFly for Remote EJBs; the initial connection is made using an HTTP request, and is later switched to the remote protocol thanks to the Upgrade mechanism. The standard Upgrade header field can be used to handle any protocol, other than HTTP, which is accepted by both sides (the client and server).
In WildFly, this makes it possible to reuse the HTTP port (80/8080) for other protocols and, therefore, minimize the number of ports that need to be configured.

If the server can understand the WebSocket protocol, the client and server then proceed with the handshaking phase. They negotiate the version of the protocol, exchange security keys, and if everything goes well, the peers can go to the data transfer phase. From now on, the communication is only done using the WebSocket protocol. It is not possible to exchange any HTTP frames using the current connection. The whole life cycle of a connection can be summarized in the following diagram:

A sample HTTP request from a JavaScript application to a WildFly server would look similar to this:

GET /ticket-agency-websockets/tickets HTTP/1.1
Upgrade: websocket
Connection: Upgrade
Host: localhost:8080
Origin: http://localhost:8080
Pragma: no-cache
Cache-Control: no-cache
Sec-WebSocket-Key: TrjgyVjzLK4Lt5s8GzlFhA==
Sec-WebSocket-Version: 13
Sec-WebSocket-Extensions: permessage-deflate; client_max_window_bits, x-webkit-deflate-frame
User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.116 Safari/537.36
Cookie: [45 bytes were stripped]

We can see that the client requests an upgrade connection with WebSocket as the target protocol on the URL /ticket-agency-websockets/tickets. It additionally passes information about the requested version and key. If the server supports the requested protocol and all the required data is passed by the client, then it would respond with the following frame:

HTTP/1.1 101 Switching Protocols
X-Powered-By: Undertow 1
Server: Wildfly 8
Origin: http://localhost:8080
Upgrade: WebSocket
Sec-WebSocket-Accept: ZEAab1TcSQCmv8RsLHg4RL/TpHw=
Date: Sun, 13 Apr 2014 17:04:00 GMT
Connection: Upgrade
Sec-WebSocket-Location: ws://localhost:8080/ticket-agency-websockets/tickets
Content-Length: 0

The status code of the response is 101 (switching protocols) and we can see that the server is now going to start using the WebSocket protocol. The TCP connection initially used for the HTTP request is now the base of the WebSocket session and can be used for transmissions. If the client tries to access a URL that is only handled by another protocol, then the server can ask the client to do an upgrade request. The server uses the 426 (upgrade required) status code in such cases.

The initial connection creation has some overhead (because of the HTTP frames that are exchanged between the peers), but after it is completed, new messages have only 2 bytes of additional headers. This means that when we have a large number of small messages, WebSocket will be an order of magnitude faster than REST protocols simply because there is less data to transmit!

If you are wondering about the browser support of WebSockets, you can look it up at http://caniuse.com/websockets. All new versions of major browsers currently support WebSockets; the total coverage is estimated (at the time of writing) at 74 percent. You can see it in the following screenshot:

After this theoretical introduction, we are ready to jump into action. We can now create our first WebSocket endpoint!
Creating our first endpoint

Let's start with a simple example:

package com.packtpub.wflydevelopment.chapter8.boundary;

import javax.websocket.EndpointConfig;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;
import java.io.IOException;

@ServerEndpoint("/hello")
public class HelloEndpoint {

    @OnOpen
    public void open(Session session, EndpointConfig conf) throws IOException {
        session.getBasicRemote().sendText("Hi!");
    }
}

The Java EE 7 specification has taken developer friendliness into account, which can be clearly seen in the given example. In order to define your WebSocket endpoint, you just need a few annotations on a Plain Old Java Object (POJO). The first annotation, @ServerEndpoint("/hello"), defines the path to your endpoint.

It's a good time to discuss the endpoint's full address. We placed this sample in the application named ticket-agency-websockets. During the deployment of the application, you can spot information in the WildFly log about endpoint creation, as shown in the following command line:

02:21:35,182 INFO [io.undertow.websockets.jsr] (MSC service thread 1-7) UT026003: Adding annotated server endpoint class com.packtpub.wflydevelopment.chapter8.boundary.HelloEndpoint for path /hello
02:21:35,401 INFO [org.jboss.resteasy.spi.ResteasyDeployment] (MSC service thread 1-7) Deploying javax.ws.rs.core.Application: class com.packtpub.wflydevelopment.chapter8.webservice.JaxRsActivator$Proxy$_$$_WeldClientProxy
02:21:35,437 INFO [org.wildfly.extension.undertow] (MSC service thread 1-7) JBAS017534: Registered web context: /ticket-agency-websockets

The full URL of the endpoint is ws://localhost:8080/ticket-agency-websockets/hello, which is just a concatenation of the server and application address with the endpoint path, on the appropriate protocol.

The second annotation used, @OnOpen, defines the endpoint behavior when the connection from the client is opened. It's not the only behavior-related annotation of the WebSocket endpoint. Let's look at the following list:

@OnOpen: Connection is open. With this annotation, we can use the Session and EndpointConfig parameters. The first parameter represents the connection to the user and allows further communication. The second one provides some client-related information.
@OnMessage: This annotation is executed when a message from the client is being received. In such a method, you can just have Session and, for example, a String parameter, where the String parameter represents the received message.
@OnError: There are bad times when some errors occur. With this annotation, you can retrieve a Throwable object apart from the standard Session.
@OnClose: When the connection is closed, it is possible to get some data concerning this event in the form of a CloseReason type object.

There is one more interesting line in our HelloEndpoint. Using the Session object, it is possible to communicate with the client. This clearly shows that in WebSockets, two-directional communication is easily possible. In this example, we decided to respond to a connected user synchronously (getBasicRemote()) with just a text message Hi! (sendText(String)). Of course, it's also possible to communicate asynchronously and send, for example, binary messages using your own bandwidth-saving binary protocol. We will present some of these processes in the next example.

Expanding our client application

It's time to show how you can leverage the WebSocket features in real life.
We created the ticket booking application based on the REST API and AngularJS framework. It was clearly missing one important feature: the application did not show information concerning ticket purchases of other users. This is a perfect use case for WebSockets!

Since we're just adding a feature to our previous app, we will describe the changes we will introduce to it. In this example, we would like to be able to inform all current users about other purchases. This means that we have to store information about active sessions. Let's start with the registry type object, which will serve this purpose. We can use a Singleton session bean for this task, as shown in the following code:

@Singleton
public class SessionRegistry {

    private final Set<Session> sessions = new HashSet<>();

    @Lock(LockType.READ)
    public Set<Session> getAll() {
        return Collections.unmodifiableSet(sessions);
    }

    @Lock(LockType.WRITE)
    public void add(Session session) {
        sessions.add(session);
    }

    @Lock(LockType.WRITE)
    public void remove(Session session) {
        sessions.remove(session);
    }
}

We could use Collections.synchronizedSet from the standard Java libraries, but it's a great chance to remember what we described earlier about container-based concurrency. In SessionRegistry, we defined some basic methods to add, get, and remove sessions. For the sake of collection thread safety during retrieval, we return an unmodifiable view.

We defined the registry, so now we can move to the endpoint definition. We will need a POJO, which will use our newly defined registry, as shown:

@ServerEndpoint("/tickets")
public class TicketEndpoint {

    @Inject
    private SessionRegistry sessionRegistry;

    @OnOpen
    public void open(Session session, EndpointConfig conf) {
        sessionRegistry.add(session);
    }

    @OnClose
    public void close(Session session, CloseReason reason) {
        sessionRegistry.remove(session);
    }

    public void send(@Observes Seat seat) {
        sessionRegistry.getAll().forEach(session -> session.getAsyncRemote().sendText(toJson(seat)));
    }

    private String toJson(Seat seat) {
        final JsonObject jsonObject = Json.createObjectBuilder()
                .add("id", seat.getId())
                .add("booked", seat.isBooked())
                .build();
        return jsonObject.toString();
    }
}

Our endpoint is defined at the /tickets address. We injected our SessionRegistry into the endpoint. During @OnOpen, we add sessions to the registry, and during @OnClose, we just remove them. Message sending is performed on the CDI event (the @Observes annotation), which is already fired in our code during TheatreBox.buyTicket(int).

In our send method, we retrieve all sessions from the SessionRegistry, and for each of them, we asynchronously send information about booked seats. We don't really need information about all the Seat fields to realize this feature. That's the reason why we don't use the automatic JSON serialization here. Instead, we decided to use a minimalistic JSON object, which provides only the required data. To do this, we used the new Java API for JSON Processing (JSR-353). Using a fluent-like API, we're able to create a JSON object and add two fields to it. Then, we just convert the JSON to a String, which is sent in a text message.

Because in our example we send messages in response to a CDI event, we don't have (in the event handler) an out-of-the-box reference to any of the sessions. We have to use our sessionRegistry object to access the active ones.
However, if we would like to do the same thing but, for example, in the @OnMessage method, then it is possible to get all active sessions just by executing the session.getOpenSessions() method. These are all the changes required to perform on the backend side. Now, we have to modify our AngularJS frontend to leverage the added feature. The good news is that JavaScript already includes classes that can be used to perform WebSocket communication! There are a few lines of code we have to add inside the module defined in the seat.js file, which are as follows: var ws = new WebSocket("ws://localhost:8080/ticket-agency-websockets/tickets"); ws.onmessage = function (message) {    var receivedData = message.data;    var bookedSeat = JSON.parse(receivedData);    $scope.$apply(function () {        for (var i = 0; i < $scope.seats.length; i++) {           if ($scope.seats[i].id === bookedSeat.id) {                $scope.seats[i].booked = bookedSeat.booked;                break;            }        }    }); }; The code is very simple. We just create the WebSocket object using the URL to our endpoint, and then we define the onmessage function in that object. During the function execution, the received message is automatically parsed from the JSON to JavaScript object. Then, in $scope.$apply, we just iterate through our seats, and if the ID matches, we update the booked state. We have to use $scope.$apply because we are touching an Angular object from outside the Angular world (the onmessage function). Modifications performed on $scope.seats are automatically visible on the website. With this, we can just open our ticket booking website in two browser sessions, and see that when one user buys a ticket, the second users sees almost instantly that the seat state is changed to booked. We can enhance our application a little to inform users if the WebSocket connection is really working. Let's just define onopen and onclose functions for this purpose: ws.onopen = function (event) {    $scope.$apply(function () {        $scope.alerts.push({            type: 'info',            msg: 'Push connection from server is working'        });    }); }; ws.onclose = function (event) {    $scope.$apply(function () {        $scope.alerts.push({            type: 'warning',            msg: 'Error on push connection from server '        });    }); }; To inform users about a connection's state, we push different types of alerts. Of course, again we're touching the Angular world from the outside, so we have to perform all operations on Angular from the $scope.$apply function. Running the described code results in the notification, which is visible in the following screenshot: However, if the server fails after opening the website, you might get an error as shown in the following screenshot: Transforming POJOs to JSON In our current example, we transformed our Seat object to JSON manually. Normally, we don't want to do it this way; there are many libraries that will do the transformation for us. One of them is GSON from Google. Additionally, we can register an encoder/decoder class for a WebSocket endpoint that will do the transformation automatically. Let's look at how we can refactor our current solution to use an encoder. First of all, we must add GSON to our classpath. 
The required Maven dependency is as follows: <dependency>    <groupId>com.google.code.gson</groupId>    <artifactId>gson</artifactId>    <version>2.3</version> </dependency> Next, we need to provide an implementation of the javax.websocket.Encoder.Text interface. There are also versions of the javax.websocket.Encoder.Text interface for binary and streamed data (for both binary and text formats). A corresponding hierarchy of interfaces is also available for decoders (javax.websocket.Decoder). Our implementation is rather simple. This is shown in the following code snippet: public class JSONEncoder implements Encoder.Text<Object> {    private Gson gson;    @Override    public void init(EndpointConfig config) {        gson = new Gson(); [1]    }    @Override    public void destroy() {        // do nothing    }    @Override    public String encode(Object object) throws EncodeException {        return gson.toJson(object); [2]    } } First, we create an instance of GSON in the init method; this action will be executed when the endpoint is created. Next, in the encode method, which is called every time, we send an object through an endpoint. We use JSON to create JSON from an object. This is quite concise when we think how reusable this little class is. If you want more control on the JSON generation process, you can use the GsonBuilder class to configure the GSON object before creation of the GsonBuilder class. We have the encoder in place. Now it's time to alter our endpoint: @ServerEndpoint(value = "/tickets", encoders={JSONEncoder.class})[1] public class TicketEndpoint {    @Inject    private SessionRegistry sessionRegistry;    @OnOpen    public void open(Session session, EndpointConfig conf) {        sessionRegistry.add(session);    }    @OnClose    public void close(Session session, CloseReason reason) {        sessionRegistry.remove(session);    }    public void send(@Observes Seat seat) {        sessionRegistry.getAll().forEach(session -> session.getAsyncRemote().sendObject(seat)); [2]    } } The first change is done on the @ServerEndpoint annotation. We have to define a list of supported encoders; we simply pass our JSONEncoder.class wrapped in an array. Additionally, we have to pass the endpoint name using the value attribute. Earlier, we used the sendText method to pass a string containing a manually created JSON. Now, we want to send an object and let the encoder handle the JSON generation; therefore, we'll use the getAsyncRemote().sendObject() method. That's all! Our endpoint is ready to be used. It will work the same as the earlier version, but now our objects will be fully serialized to JSON, so they will contain every field, not only IDs and be booked. After deploying the server, you can connect to the WebSocket endpoint using one of the Chrome extensions, for instance, the Dark WebSocket terminal from the Chrome store (use the ws://localhost:8080/ticket-agency-websockets/tickets address). When you book tickets using the web application, the WebSocket terminal should show something similar to the output shown in the following screenshot: Of course, it is possible to use different formats other than JSON. If you want to achieve better performance (when it comes to the serialization time and payload size), you may want to try out binary serializers such as Kryo (https://github.com/EsotericSoftware/kryo). They may not be supported by JavaScript, but may come in handy if you would like to use WebSockets for other clients also. 
Tyrus (https://tyrus.java.net/) is a reference implementation of the WebSocket standard for Java; you can use it in your standalone desktop applications. In that case, besides the encoder (which is used to send messages), you would also need to create a decoder, which can automatically transform incoming messages. An alternative to WebSockets The example we presented in this article is possible to be implemented using an older, lesser-known technology named Server-Sent Events (SSE). SSE allows for one-way communication from the server to client over HTTP. It is much simpler than WebSockets but has a built-in support for things such as automatic reconnection and event identifiers. WebSockets are definitely more powerful, but are not the only way to pass events, so when you need to implement some notifications from the server side, remember about SSE. Another option is to explore the mechanisms oriented around the Comet techniques. Multiple implementations are available and most of them use different methods of transportation to achieve their goals. A comprehensive comparison is available at http://cometdaily.com/maturity.html. Summary In this article, we managed to introduce the new low-level type of communication. We presented how it works underneath and compares to SOAP and REST introduced earlier. We also discussed how the new approach changes the development of web applications. Our ticket booking application was further enhanced to show users the changing state of the seats using push-like notifications. The new additions required very little code changes in our existing project when we take into account how much we are able to achieve with them. The fluent integration of WebSockets from Java EE 7 with the AngularJS application is another great showcase of flexibility, which comes with the new version of the Java EE platform. Resources for Article: Further resources on this subject: Various subsystem configurations [Article] Running our first web application [Article] Creating Java EE Applications [Article]

Using PhpStorm in a Team

Packt
26 Dec 2014
11 min read
In this article by Mukund Chaudhary and Ankur Kumar, authors of the book PhpStorm Cookbook, we will cover the following recipes:
- Getting a VCS server
- Creating a VCS repository
- Connecting PhpStorm to a VCS repository
- Storing a PhpStorm project in a VCS repository
(For more resources related to this topic, see here.)
Getting a VCS server
The first action that you have to undertake is to decide which VCS you are going to use. There are a number of systems available, such as Git and Subversion (commonly known as SVN), both of which are free and open source software that you can download and install on your development server. There is also an older system named Concurrent Versions System (CVS). Both SVN and CVS are meant to provide a code versioning service to you. SVN is newer and supposedly faster than CVS, and since SVN is the newer system, this text will concentrate on the features of Subversion only.
Getting ready
So, finally that moment has arrived when you will start off working in a team by getting a VCS system for you and your team. The installation of SVN on the development system can be done in two ways: easy and difficult. The difficult way can be skipped without consideration, because that is for developers who want to contribute to the Subversion system itself. Since you are dealing with PhpStorm, you only need to remember the easier way because you have a lot more to do.
How to do it...
The installation step is very easy. There is the aptitude utility available with Debian-based systems, and there is the Yum utility available with Red Hat-based systems. Perform the following steps:
1. Issue the command apt-get install subversion. The operating system's package manager will do the remaining work for you. In a very short time, after flooding the command-line console with messages, you will have the Subversion system installed.
2. To check whether the installation was successful, issue the command whereis svn. If the command prints a path, it means that you installed Subversion successfully.
If you do not want to bear the load of installing Subversion on your development system, you can use commercial third-party servers. But that is more of a layman's approach to solving problems, and no PhpStorm cookbook author will recommend that you do that. You are a software engineer; you should not let go so easily.
How it works...
When you install the version control system, you actually install a server that provides the version control service to a version control client. The Subversion service listens for incoming connections from remote clients on port 3690 by default.
There's more...
If you want to install the older companion, CVS, you can do that in a similar way, as shown in the following steps:
1. Download the archive for the CVS server software.
2. Unpack it from the archive using your favorite unpacking software. You can then move it to another convenient location, since you will not need to disturb this folder in the future.
3. Move into the directory; this is where your compilation starts. Run # ./configure to create the make targets.
4. Having made the targets, enter # make install to complete the installation procedure.
Due to it being older software, you might have to compile it from the source code as the only alternative.
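Put together, the source build usually looks like the following hedged sketch on a Unix-like system; the archive name is a placeholder for whatever CVS release you actually downloaded, and the exact steps may differ between releases:
# A minimal sketch of a GNU-style source build (archive name is hypothetical)
tar xzf cvs-x.y.z.tar.gz      # unpack the downloaded archive
cd cvs-x.y.z                  # move into the source directory
./configure                   # create the make targets
make                          # compile the sources
sudo make install             # install the binaries system-wide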
Creating a VCS repository
More often than not, a PHP programmer is expected to know some system concepts, because it is often required to change settings for the PHP interpreter. The changes could be in the form of, say, changing the execution time or adding/removing modules, and so on. In order to start working in a team, you are going to get your hands dirty with system actions.
Getting ready
You will have to create a new repository on the development server so that PhpStorm can act as a client and get connected. Here, it is important to note the difference between an SVN client and an SVN server—an SVN client can be any of these: a standalone client or an embedded client such as an IDE. The SVN server, on the other hand, is a single item: a continuously running process on a server of your choice.
How to do it...
You need to be careful while performing this activity, as a single mistake can ruin your efforts. Perform the following steps:
1. There is a command, svnadmin, that you need to know. Using this command, you can create a new directory on the server that will contain the code base. You should be careful when selecting this directory, as it will appear in your SVN URL for the rest of the repository's life. The command should be executed as:
svnadmin create /path/to/your/repo/
2. Having created a new repository on the server, you need to make certain settings for the server. This is just a normal phenomenon, because every server requires a configuration. The SVN server configuration is located under /path/to/your/repo/conf/ in the file svnserve.conf. Inside the file, you need to make three changes; add these lines at the bottom of the file:
anon-access = none
auth-access = write
password-db = passwd
3. There has to be a password file to authorize the list of users who will be allowed to use the repository. The password file in this case will be named passwd (the default filename). The contents of the file are a number of lines, each containing a username and the corresponding password in the form username = password. Since these files are scanned by the server according to a particular algorithm, you don't have the freedom to leave deliberate spaces in the file—there will be error messages displayed in those cases.
4. Having made the appropriate settings, you can now run the SVN service so that an SVN client can access it. Issue the command svnserve -d to do that.
5. It is always good practice to keep checking whether what you do is correct. To validate a proper installation, issue the command svn ls svn://user@host/path/to/subversion/repo/. The output will be as shown in the following screenshot:
How it works...
The svnadmin command is used to perform admin tasks on the Subversion server. The create option creates a new folder on the server that acts as the repository for access from Subversion clients. The configuration file is created by default at the time of repository creation. The lines that are added to the file are configuration directives that control the behavior of the Subversion server. Thus, the settings mentioned prevent anonymous access and restrict write operations to the users whose access details are mentioned in the password file. The command svnserve again needs to be run on the server side, and it starts the instance of the server. The -d switch mentions that the server should be run as a daemon (system process). This also means that your server will continue running until you manually stop it or the entire system goes down.
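For concreteness, here is a hedged sketch of what the two files described above might look like; the section headers follow the stock file layout, the usernames and passwords are placeholders, and your repository path may differ:
# /path/to/your/repo/conf/svnserve.conf (excerpt)
[general]
anon-access = none
auth-access = write
password-db = passwd

# /path/to/your/repo/conf/passwd
[users]
alice = alicespassword
bob = bobspassword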
Again, you can skip this section if you have opted for a third-party version control service provider.
Connecting PhpStorm to a VCS repository
The real utility of software comes when you use it. So, having installed the version control system, you need to be prepared to use it.
Getting ready
SVN being client-server software, having installed the server, you now need a client. You will not have to search for a good SVN client, though; one has been factory-provided to you inside PhpStorm. The PhpStorm SVN client provides you with features that accelerate your development task by giving you detailed information about the changes made to the code. So, go ahead and connect PhpStorm to the Subversion repository you created.
How to do it...
In order to connect PhpStorm to the Subversion repository, perform the following steps:
1. Activate the Subversion view, which is available at View | Tool Windows | Svn Repositories.
2. Having activated the Subversion view, you now need to add the repository location to PhpStorm. To do that, use the + symbol in the top-left corner of the view you have opened, as shown in the following screenshot:
3. Upon selecting the Add option, PhpStorm asks you about the location of the repository. You need to provide the full location of the repository. Once you provide the location, you will be able to see the repository in the same Subversion view in which you pressed the Add button.
Here, you should always keep in mind the correct protocol to use. This depends on the way you installed the Subversion system on the development machine:
- If you used the default installation by installing from the installer utility (apt-get or aptitude), you need to specify svn://.
- If you have configured SVN to be accessible via SSH, you need to specify svn+ssh://.
- If you have explicitly configured SVN to be used with the Apache web server, you need to specify http://.
- If you configured SVN with Apache over the secure protocol, you need to specify https://.
Storing a PhpStorm project in a VCS repository
Here comes the actual start of the teamwork. Even if you and your other team members have connected to the repository, what advantage does it serve? What is the purpose solved by merely connecting to the version control repository? Correct. The actual thing is the code that you work on. It is the code that earns you your bread.
Getting ready
You should now store a project in the Subversion repository so that the other team members can work on it and add more features to your code. It is time to add a project to version control. It is not that you need to start a new project from scratch to add to the repository; any project, any work that you have done and wish to have the team work on, can be added to the repository. Since the most relevant project in the current context is the cooking project, you can try adding that. There you go.
How to do it...
In order to add a project to the repository, perform the following steps:
1. Use the menu item provided at VCS | Import into version control | Share project (subversion). PhpStorm will ask you a question, as shown in the following screenshot:
2. Select the correct hierarchy to define the share target—the correct location where your project will be saved.
3. If you wish to create the tags and branches in the code base, you need to select the checkbox for the same. It is good practice to provide comments for the commits that you make. The reason behind this becomes apparent when you sit down to create a release document. It also makes each change more understandable to the other team members.
4. PhpStorm then asks you which format you want the working copy to be in. This is related to the version of the version control software. You just need to smile, select the latest version number, and proceed, as shown in the following screenshot:
5. Having done that, PhpStorm will now ask you to enter your credentials. You need to enter the same credentials that you saved in the configuration file (see the Creating a VCS repository recipe) or the credentials that your service provider gave you. You can ask PhpStorm to save the credentials for you, as shown in the following screenshot:
How it works...
Here it is worth understanding what is going on behind the curtains. When you do any Subversion-related task in PhpStorm, there is an inbuilt SVN client that executes the commands for you. Thus, when you add a project to version control, the code base is given a version number. This makes the version control system remember the state of the code base. In other words, when you add the code base to version control, you add a checkpoint that you can revisit at any point in the future, for as long as the code base is under the same version control system. Interesting phenomenon, isn't it?
There's more...
If you have installed the version control software yourself and did not make the setting to store the password in encrypted text, PhpStorm will show you a warning about it, as shown in the following screenshot:
Summary
We got to know about version control systems, the step-by-step process of creating a VCS repository, and how to connect PhpStorm to a VCS repository.
Resources for Article:
Further resources on this subject:
FuelPHP [article]
A look into the high-level programming operations for the PHP language [article]
PHP Web 2.0 Mashup Projects: Your Own Video Jukebox: Part 1 [article]


Adding WebSockets

Packt
22 Dec 2014
22 min read
In this article by Michal Cmil, Michal Matloka, and Francesco Marchioni, authors of the book Java EE 7 Development with WildFly, we will explore the new possibilities that WebSockets provide to a developer. In our ticket booking applications, we already used a wide variety of approaches to inform the clients about events occurring on the server side. These include the following:
- JSF polling
- Java Message Service (JMS) messages
- REST requests
- Remote EJB requests
All of them, besides JMS, were based on the assumption that the client is responsible for asking the server about the state of the application. In some cases, such as checking whether someone else has booked a ticket during our interaction with the application, this is a wasteful strategy; the server is in the position to inform clients when it is needed. What's more, it feels like the developer must hack the HTTP protocol to get a notification from a server to the client. This is a requirement that has to be implemented in most web applications, and it therefore deserves a standardized solution that developers can apply in multiple projects without much effort. WebSockets are changing the game for developers. They replace the request-response paradigm, in which the client always initiates the communication, with a two-point bidirectional messaging system. After the initial connection, both sides can send independent messages to each other as long as the session is alive. This means that we can easily create web applications that will automatically refresh their state with up-to-date data from the server. You have probably already seen this kind of behavior in Google Docs or live broadcasts on news sites. Now we can achieve the same effect in a simpler and more efficient way than in earlier versions of Java Enterprise Edition. In this article, we will try to leverage these new, exciting features that come with WebSockets in Java EE 7 thanks to JSR 356 (https://jcp.org/en/jsr/detail?id=356) and HTML5. In this article, you will learn the following topics:
- How WebSockets work
- How to create a WebSocket endpoint in Java EE 7
- How to create an HTML5/AngularJS client that will accept push notifications from an application deployed on WildFly
(For more resources related to this topic, see here.)
An overview of WebSockets
A WebSocket session between the client and server is built upon a standard TCP connection. Although the WebSocket protocol has its own control frames (mainly to create and sustain the connection), defined by the Internet Engineering Task Force in RFC 6455 (http://tools.ietf.org/html/rfc6455), the peers are not obliged to use any specific format to exchange application data. You may use plaintext, XML, JSON, or anything else to transmit your data. As you probably remember, this is quite different from SOAP-based WebServices, which had bloated specifications of the exchange protocol. The same goes for RESTful architectures; we no longer have the predefined verb methods from HTTP (GET, PUT, POST, and DELETE), status codes, or the whole semantics of an HTTP request. This liberty means that WebSockets are pretty low level compared to the technologies that we have used up to this point, but thanks to this, the communication overhead is minimal. The protocol is less verbose than SOAP or RESTful HTTP, which allows us to achieve higher performance. This, however, comes with a price.
We usually like to use the features of higher-level protocols (such as horizontal scaling and rich URL semantics), and with WebSockets, we would need to write them by hand. For standard CRUD-like operations, it would be easier to use a REST endpoint than to create everything from scratch. What do we get from WebSockets compared to standard HTTP communication? First of all, a direct connection between two peers. Normally, when you connect to a web server (which can, for instance, handle a REST endpoint), every subsequent call is a new TCP connection, and your machine is treated as a different one every time you make a request. You can, of course, simulate stateful behavior (so that the server will recognize your machine between different requests) using cookies, and increase performance by reusing the same connection for a specific client over a short period of time, but basically, these are workarounds to overcome the limitations of the HTTP protocol. Once you establish a WebSocket connection between a server and client, you can use the same session (and underlying TCP connection) during the whole communication. Both sides are aware of it and can send data independently in a full-duplex manner (both sides can send and receive data simultaneously). Using plain HTTP, there is no way for the server to spontaneously start sending data to the client without any request from its side. What's more, the server is aware of all of its connected WebSocket clients, and can even send data between them! The current solutions that try to simulate real-time data delivery over the HTTP protocol can put a lot of stress on the web server. Polling (asking the server about updates), long polling (delaying the completion of a request until the moment an update is ready), and streaming (a Comet-based solution with a constantly open HTTP response) are all ways to hack the protocol into doing things it wasn't designed for, and each has its own limitations. Thanks to the elimination of unnecessary checks, WebSockets can heavily reduce the number of HTTP requests that have to be handled by the web server. The updates are delivered to the user with lower latency because we only need one round-trip through the network to get the desired information (it is pushed by the server immediately). All of these features make WebSockets a great addition to the Java EE platform, filling the gaps needed to easily complete specific tasks, such as sending updates and notifications and orchestrating multiple client interactions. Despite these advantages, WebSockets are not intended to replace REST or SOAP WebServices. They do not scale so well horizontally (they are hard to distribute because of their stateful nature), and they lack most of the features that are utilized in web applications. URL semantics, complex security, compression, and many other features are still better realized using other technologies.
How WebSockets work
To initiate a WebSocket session, the client must send an HTTP request with an Upgrade: websocket header field. This informs the server that the client is asking it to switch to the WebSocket protocol. You may notice that the same happens in WildFly for Remote EJBs; the initial connection is made using an HTTP request, and is later switched to the remote protocol thanks to the Upgrade mechanism. The standard Upgrade header field can be used to switch to any protocol other than HTTP that is accepted by both sides (the client and server).
In WildFly, this allows you to reuse the HTTP port (80/8080) for other protocols and therefore minimize the number of ports that have to be configured. If the server can "understand" the WebSocket protocol, the client and server then proceed with the handshaking phase. They negotiate the version of the protocol, exchange security keys, and if everything goes well, the peers move on to the data transfer phase. From now on, the communication is done using only the WebSocket protocol; it is not possible to exchange any HTTP frames over the current connection. The whole life cycle of a connection can be summarized in the following diagram:
A sample HTTP request from a JavaScript application to a WildFly server would look similar to this:
GET /ticket-agency-websockets/tickets HTTP/1.1
Upgrade: websocket
Connection: Upgrade
Host: localhost:8080
Origin: http://localhost:8080
Pragma: no-cache
Cache-Control: no-cache
Sec-WebSocket-Key: TrjgyVjzLK4Lt5s8GzlFhA==
Sec-WebSocket-Version: 13
Sec-WebSocket-Extensions: permessage-deflate; client_max_window_bits, x-webkit-deflate-frame
User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.116 Safari/537.36
Cookie: [45 bytes were stripped]
We can see that the client requests an upgrade connection with WebSocket as the target protocol on the URL /ticket-agency-websockets/tickets. It additionally passes information about the requested version and key. If the server supports the requested protocol and all the required data is passed by the client, then it responds with the following frame:
HTTP/1.1 101 Switching Protocols
X-Powered-By: Undertow 1
Server: Wildfly 8
Origin: http://localhost:8080
Upgrade: WebSocket
Sec-WebSocket-Accept: ZEAab1TcSQCmv8RsLHg4RL/TpHw=
Date: Sun, 13 Apr 2014 17:04:00 GMT
Connection: Upgrade
Sec-WebSocket-Location: ws://localhost:8080/ticket-agency-websockets/tickets
Content-Length: 0
The status code of the response is 101 (switching protocols), and we can see that the server is now going to start using the WebSocket protocol. The TCP connection initially used for the HTTP request is now the base of the WebSocket session and can be used for transmissions. If the client tries to access a URL that is only handled by another protocol, the server can ask the client to do an upgrade request. The server uses the 426 (upgrade required) status code in such cases. The initial connection creation has some overhead (because of the HTTP frames that are exchanged between the peers), but after it is completed, new messages have only 2 bytes of additional headers. This means that when we have a large number of small messages, WebSocket will be an order of magnitude faster than REST protocols, simply because there is less data to transmit! If you are wondering about browser support for WebSockets, you can look it up at http://caniuse.com/websockets. All new versions of major browsers currently support WebSockets; the total coverage is estimated (at the time of writing) at 74 percent. You can see this in the following screenshot:
After this theoretical introduction, we are ready to jump into action. We can now create our first WebSocket endpoint!
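On the client side, all of this handshaking is hidden behind a single constructor call. A minimal browser-side sketch, using the same URL as the sample request above:
// Opening a WebSocket from JavaScript triggers the whole Upgrade handshake for us
var socket = new WebSocket("ws://localhost:8080/ticket-agency-websockets/tickets");
socket.onopen = function () {
    console.log("Handshake done; the connection now speaks only the WebSocket protocol");
};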
Creating our first endpoint
Let's start with a simple example:
package com.packtpub.wflydevelopment.chapter8.boundary;

import javax.websocket.EndpointConfig;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;
import java.io.IOException;

@ServerEndpoint("/hello")
public class HelloEndpoint {
    @OnOpen
    public void open(Session session, EndpointConfig conf) throws IOException {
        session.getBasicRemote().sendText("Hi!");
    }
}
The Java EE 7 specification has taken developer friendliness into account, which can be clearly seen in the given example. In order to define your WebSocket endpoint, you just need a few annotations on a Plain Old Java Object (POJO). The first annotation, @ServerEndpoint("/hello"), defines a path to your endpoint. It's a good time to discuss the endpoint's full address. We placed this sample in the application named ticket-agency-websockets. During the deployment of the application, you can spot information about endpoint creation in the WildFly log, as shown in the following output:
02:21:35,182 INFO [io.undertow.websockets.jsr] (MSC service thread 1-7) UT026003: Adding annotated server endpoint class com.packtpub.wflydevelopment.chapter8.boundary.FirstEndpoint for path /hello
02:21:35,401 INFO [org.jboss.resteasy.spi.ResteasyDeployment] (MSC service thread 1-7) Deploying javax.ws.rs.core.Application: class com.packtpub.wflydevelopment.chapter8.webservice.JaxRsActivator$Proxy$_$$_WeldClientProxy
02:21:35,437 INFO [org.wildfly.extension.undertow] (MSC service thread 1-7) JBAS017534: Registered web context: /ticket-agency-websockets
The full URL of the endpoint is ws://localhost:8080/ticket-agency-websockets/hello, which is just a concatenation of the server and application address with the endpoint path on the appropriate protocol. The second annotation, @OnOpen, defines the endpoint's behavior when a connection from a client is opened. It's not the only behavior-related annotation of a WebSocket endpoint. Let's look at the following annotations:
- @OnOpen: The connection is open. With this annotation, we can use the Session and EndpointConfig parameters. The first parameter represents the connection to the user and allows further communication. The second one provides some client-related information.
- @OnMessage: This annotation is executed when a message from the client is being received. In such a method, you can have Session and, for example, a String parameter, where the String parameter represents the received message.
- @OnError: There are bad times when an error occurs. With this annotation, you can retrieve a Throwable object apart from the standard Session.
- @OnClose: When the connection is closed, it is possible to get some data concerning this event in the form of a CloseReason type object.
There is one more interesting line in our HelloEndpoint. Using the Session object, it is possible to communicate with the client. This clearly shows that in WebSockets, two-directional communication is easily possible. In this example, we decided to respond to a connected user synchronously (getBasicRemote()) with just a text message, Hi! (sendText(String)). Of course, it's also possible to communicate asynchronously and, for example, send binary messages using your own bandwidth-saving binary protocol. We will present some of these processes in the next example.
Expanding our client application
It's time to show how you can leverage the WebSocket features in real life.
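Before we dive back into the ticket application, here is a short, hedged sketch — an assumed echo endpoint, not part of the book's project — showing the four lifecycle annotations from the list above working together:
import javax.websocket.CloseReason;
import javax.websocket.EndpointConfig;
import javax.websocket.OnClose;
import javax.websocket.OnError;
import javax.websocket.OnMessage;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;
import java.io.IOException;

@ServerEndpoint("/echo")
public class EchoEndpoint {

    @OnOpen
    public void open(Session session, EndpointConfig conf) throws IOException {
        session.getBasicRemote().sendText("Connected");
    }

    @OnMessage
    public void message(Session session, String message) throws IOException {
        // Echo the received text back to the same client
        session.getBasicRemote().sendText("Echo: " + message);
    }

    @OnError
    public void error(Session session, Throwable throwable) {
        // Log or react to the failure; the session may already be broken at this point
    }

    @OnClose
    public void close(Session session, CloseReason reason) {
        // Clean up any per-session state here
    }
}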
We created the ticket booking application based on the REST API and the AngularJS framework, but it was clearly missing one important feature: the application did not show information concerning ticket purchases of other users. This is a perfect use case for WebSockets! Since we're just adding a feature to our previous app, we will only describe the changes we will introduce to it. In this example, we would like to be able to inform all current users about other purchases. This means that we have to store information about active sessions. Let's start with the registry type object, which will serve this purpose. We can use a Singleton session bean for this task, as shown in the following code:
@Singleton
public class SessionRegistry {

    private final Set<Session> sessions = new HashSet<>();

    @Lock(LockType.READ)
    public Set<Session> getAll() {
        return Collections.unmodifiableSet(sessions);
    }

    @Lock(LockType.WRITE)
    public void add(Session session) {
        sessions.add(session);
    }

    @Lock(LockType.WRITE)
    public void remove(Session session) {
        sessions.remove(session);
    }
}
We could use Collections.synchronizedSet from the standard Java libraries, but it's a great chance to remember what we described earlier about container-based concurrency. In SessionRegistry, we defined some basic methods to add, get, and remove sessions. For the sake of collection thread safety during retrieval, we return an unmodifiable view. We defined the registry, so now we can move to the endpoint definition. We will need a POJO, which will use our newly defined registry as shown:
@ServerEndpoint("/tickets")
public class TicketEndpoint {

    @Inject
    private SessionRegistry sessionRegistry;

    @OnOpen
    public void open(Session session, EndpointConfig conf) {
        sessionRegistry.add(session);
    }

    @OnClose
    public void close(Session session, CloseReason reason) {
        sessionRegistry.remove(session);
    }

    public void send(@Observes Seat seat) {
        sessionRegistry.getAll().forEach(session -> session.getAsyncRemote().sendText(toJson(seat)));
    }

    private String toJson(Seat seat) {
        final JsonObject jsonObject = Json.createObjectBuilder()
                .add("id", seat.getId())
                .add("booked", seat.isBooked())
                .build();
        return jsonObject.toString();
    }
}
Our endpoint is defined at the /tickets address. We injected a SessionRegistry into our endpoint. During @OnOpen, we add the Session to the registry, and during @OnClose, we just remove it. Message sending is performed on a CDI event (the @Observes annotation), which is already fired in our code during TheatreBox.buyTicket(int). In our send method, we retrieve all sessions from the SessionRegistry, and for each of them, we asynchronously send information about the booked seat. We don't really need information about all the Seat fields to realize this feature. That's the reason why we don't use automatic JSON serialization here. Instead, we decided to use a minimalistic JSON object, which provides only the required data. To do this, we used the new Java API for JSON Processing (JSR-353). Using a fluent-like API, we're able to create a JSON object and add two fields to it. Then, we just convert the JSON to a String, which is sent in a text message. Because in our example we send messages in response to a CDI event, we don't have (in the event handler) an out-of-the-box reference to any of the sessions; we have to use our sessionRegistry object to access the active ones. However, if we would like to do the same thing but, for example, in the @OnMessage method, then it is possible to get all active sessions just by executing the session.getOpenSessions() method. These are all the changes required on the backend side. Now, we have to modify our AngularJS frontend to leverage the added feature.
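Before switching to the frontend, it may help to see where the CDI event comes from. The following is a hedged sketch of how TheatreBox.buyTicket(int) might fire it; the book's actual implementation may differ, and the helper names used here (findSeat, setBooked) are assumptions:
import javax.ejb.Singleton;
import javax.enterprise.event.Event;
import javax.inject.Inject;

@Singleton
public class TheatreBox {

    @Inject
    private Event<Seat> seatEvent; // CDI event source for booked seats

    public void buyTicket(int seatId) {
        Seat seat = findSeat(seatId); // hypothetical lookup in the seat store
        seat.setBooked(true);         // assumed mutator backing Seat.isBooked()
        // Firing the event delivers the Seat to every @Observes Seat observer,
        // including TicketEndpoint.send(@Observes Seat seat)
        seatEvent.fire(seat);
    }

    private Seat findSeat(int seatId) {
        // Placeholder only; the real application keeps seats in its own store
        throw new UnsupportedOperationException("sketch only");
    }
}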
The good news is that JavaScript already includes classes that can be used to perform WebSocket communication! There are a few lines of code we have to add inside the module defined in the seat.js file, which are as follows:
var ws = new WebSocket("ws://localhost:8080/ticket-agency-websockets/tickets");
ws.onmessage = function (message) {
    var receivedData = message.data;
    var bookedSeat = JSON.parse(receivedData);
    $scope.$apply(function () {
        for (var i = 0; i < $scope.seats.length; i++) {
            if ($scope.seats[i].id === bookedSeat.id) {
                $scope.seats[i].booked = bookedSeat.booked;
                break;
            }
        }
    });
};
The code is very simple. We just create the WebSocket object using the URL of our endpoint, and then we define the onmessage function in that object. During the function execution, the received message is parsed from JSON into a JavaScript object. Then, in $scope.$apply, we just iterate through our seats, and if an ID matches, we update the booked state. We have to use $scope.$apply because we are touching an Angular object from outside the Angular world (the onmessage function). Modifications performed on $scope.seats are automatically visible on the website. With this, we can just open our ticket booking website in two browser sessions and see that when one user buys a ticket, the second user sees almost instantly that the seat state has changed to booked. We can enhance our application a little to inform users whether the WebSocket connection is really working. Let's just define the onopen and onclose functions for this purpose:
ws.onopen = function (event) {
    $scope.$apply(function () {
        $scope.alerts.push({
            type: 'info',
            msg: 'Push connection from server is working'
        });
    });
};
ws.onclose = function (event) {
    $scope.$apply(function () {
        $scope.alerts.push({
            type: 'warning',
            msg: 'Error on push connection from server '
        });
    });
};
To inform users about the connection's state, we push different types of alerts. Of course, again we're touching the Angular world from the outside, so we have to perform all operations on Angular from the $scope.$apply function. Running the described code results in the notification visible in the following screenshot:
However, if the server fails after opening the website, you might get an error as shown in the following screenshot:
Transforming POJOs to JSON
In our current example, we transformed our Seat object to JSON manually. Normally, we don't want to do it this way; there are many libraries that will do the transformation for us. One of them is GSON from Google. Additionally, we can register an encoder/decoder class for a WebSocket endpoint that will do the transformation automatically. Let's look at how we can refactor our current solution to use an encoder. First of all, we must add GSON to our classpath. The required Maven dependency is as follows:
<dependency>
    <groupId>com.google.code.gson</groupId>
    <artifactId>gson</artifactId>
    <version>2.3</version>
</dependency>
Next, we need to provide an implementation of the javax.websocket.Encoder.Text interface. There are also binary and streaming variants of the javax.websocket.Encoder interface (for both binary and text formats). A corresponding hierarchy of interfaces is also available for decoders (javax.websocket.Decoder). Our implementation is rather simple.
This is shown in the following code snippet:
public class JSONEncoder implements Encoder.Text<Object> {

    private Gson gson;

    @Override
    public void init(EndpointConfig config) {
        gson = new Gson(); [1]
    }

    @Override
    public void destroy() {
        // do nothing
    }

    @Override
    public String encode(Object object) throws EncodeException {
        return gson.toJson(object); [2]
    }
}
First, we create an instance of Gson in the init method; this action will be executed when the endpoint is created [1]. Next, in the encode method, which is called every time we send an object through the endpoint, we use Gson to create the JSON representation of the object [2]. This is quite concise when we think how reusable this little class is. If you want more control over the JSON generation process, you can use the GsonBuilder class to configure the Gson object before its creation. We have the encoder in place. Now it's time to alter our endpoint:
@ServerEndpoint(value = "/tickets", encoders={JSONEncoder.class})[1]
public class TicketEndpoint {

    @Inject
    private SessionRegistry sessionRegistry;

    @OnOpen
    public void open(Session session, EndpointConfig conf) {
        sessionRegistry.add(session);
    }

    @OnClose
    public void close(Session session, CloseReason reason) {
        sessionRegistry.remove(session);
    }

    public void send(@Observes Seat seat) {
        sessionRegistry.getAll().forEach(session -> session.getAsyncRemote().sendObject(seat)); [2]
    }
}
The first change is made on the @ServerEndpoint annotation. We have to define a list of supported encoders; we simply pass our JSONEncoder.class wrapped in an array [1]. Additionally, we have to pass the endpoint name using the value attribute. Earlier, we used the sendText method to pass a string containing manually created JSON. Now, we want to send an object and let the encoder handle the JSON generation; therefore, we use the getAsyncRemote().sendObject() method [2]. And that's all. Our endpoint is ready to be used. It will work the same as the earlier version, but now our objects will be fully serialized to JSON, so they will contain every field, not only id and booked. After deploying the server, you can connect to the WebSocket endpoint using one of the Chrome extensions, for instance, the Dark WebSocket terminal from the Chrome store (use the ws://localhost:8080/ticket-agency-websockets/tickets address). When you book tickets using the web application, the WebSocket terminal should show something similar to the output shown in the following screenshot:
Of course, it is possible to use formats other than JSON. If you want to achieve better performance (when it comes to serialization time and payload size), you may want to try out binary serializers such as Kryo (https://github.com/EsotericSoftware/kryo). They may not be supported by JavaScript, but they may come in handy if you would like to use WebSockets for other clients too. Tyrus (https://tyrus.java.net/) is the reference implementation of the WebSocket standard for Java; you can use it in your standalone desktop applications. In that case, besides the encoder (which is used to send messages), you would also need to create a decoder, which can automatically transform incoming messages.
An alternative to WebSockets
The example we presented in this article could also be implemented using an older, lesser-known technology named Server-Sent Events (SSE). SSE allows one-way communication from the server to the client over HTTP.
It is much simpler than WebSockets but has built-in support for things such as automatic reconnection and event identifiers. WebSockets are definitely more powerful, but they are not the only way to push events, so when you need to implement some notifications from the server side, keep SSE in mind. Another option is to explore the mechanisms oriented around the Comet techniques. Multiple implementations are available, and most of them use different methods of transportation to achieve their goals. A comprehensive comparison is available at http://cometdaily.com/maturity.html.
Summary
In this article, we introduced a new, low-level type of communication. We presented how it works underneath and how it compares to the SOAP and REST approaches introduced earlier. We also discussed how the new approach changes the development of web applications. Our ticket booking application was further enhanced to show users the changing state of the seats using push-like notifications. The new additions required very few code changes in our existing project when we take into account how much we are able to achieve with them. The fluent integration of WebSockets from Java EE 7 with the AngularJS application is another great showcase of the flexibility that comes with the new version of the Java EE platform.
Resources for Article:
Further resources on this subject:
Using the WebRTC Data API [Article]
Implementing Stacks using JavaScript [Article]
Applying WebRTC for Education and E-learning [Article]


Building a Remote-controlled TV with Node-Webkit

Roberto González
04 Dec 2014
14 min read
Node-webkit is one of the most promising technologies to come out in the last few years. It lets you ship a native desktop app for Windows, Mac, and Linux using just HTML, CSS, and some JavaScript — the exact same languages you use to build any web app. You basically get your very own frameless WebKit to build your app, which is then supercharged with NodeJS, giving you access to some powerful libraries that are not available in a typical browser. As a demo, we are going to build a remote-controlled YouTube app. This involves creating a native app that displays YouTube videos on your computer, as well as a mobile client that will let you search for and select the videos you want to watch straight from your couch. You can download the finished project from https://github.com/Aerolab/youtube-tv. You need to follow the first part of this guide (Getting started) to set up the environment and then run run.sh (on Mac) or run.bat (on Windows) to start the app.
Getting started
First of all, you need to install Node.JS (a JavaScript platform), which you can download from http://nodejs.org/download/. The installer comes bundled with NPM (the Node.JS Package Manager), which lets you install everything you need for this project. Since we are going to be building two apps (a desktop app and a mobile app), it's better if we get the boring HTML+CSS part out of the way, so we can concentrate on the JavaScript part of the equation. Download the project files from https://github.com/Aerolab/youtube-tv/blob/master/assets/basics.zip and put them in a new folder. You can name the project's folder youtube-tv or whatever you want. The folder should look like this:
- index.html   // This is the starting point for our desktop app
- css          // Our desktop app styles
- js           // This is where the magic happens
- remote       // This is where the magic happens (Part 2)
- libraries    // FFMPEG libraries, which give you H.264 video support in Node-Webkit
- player       // Our youtube player
- Gruntfile.js // Build scripts
- run.bat      // run.bat runs the app on Windows
- run.sh       // sh run.sh runs the app on Mac
Now open the Terminal (on Mac or Linux) or a new command prompt (on Windows) right in that folder. We'll install a couple of dependencies we need for this project, so type these commands to install node-gyp and grunt-cli. Each one will take a few seconds to download and install.
On Mac or Linux:
sudo npm install node-gyp -g
sudo npm install grunt-cli -g
On Windows:
npm install node-gyp -g
npm install grunt-cli -g
Leave the Terminal open. We'll be using it again in a bit.
All Node.JS apps start with a package.json file (our manifest), which holds most of the settings for your project, including which dependencies you are using. Go ahead and create your own package.json file (right inside the project folder) with the following contents. Feel free to change anything you like, such as the project name, the icon, or anything else. Check out the documentation at https://github.com/rogerwang/node-webkit/wiki/Manifest-format:
{
  "//": "The // keys in package.json are comments.",

  "//": "Your project's name. Go ahead and change it!",
  "name": "Remote",
  "//": "A simple description of what the app does.",
  "description": "An example of node-webkit",
  "//": "This is the first html the app will load. Just leave this this way",
  "main": "app://host/index.html",
  "//": "The version number. 0.0.1 is a good start :D",
  "version": "0.0.1",

  "//": "This is used by Node-Webkit to set up your app.",
  "window": {
    "//": "The Window Title for the app",
    "title": "Remote",
    "//": "The Icon for the app",
    "icon": "css/images/icon.png",
    "//": "Do you want the File/Edit/Whatever toolbar?",
    "toolbar": false,
    "//": "Do you want a standard window around your app (a title bar and some borders)?",
    "frame": true,
    "//": "Can you resize the window?",
    "resizable": true
  },
  "webkit": {
    "plugin": false,
    "user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_5) AppleWebKit/537.36 (KHTML, like Gecko) Safari/537.36"
  },

  "//": "These are the libraries we'll be using:",
  "//": "Express is a web server, which will handle the files for the remote",
  "//": "Socket.io lets you handle events in real time, which we'll use with the remote as well.",
  "dependencies": {
    "express": "^4.9.5",
    "socket.io": "^1.1.0"
  },

  "//": "And these are just task handlers to make things easier",
  "devDependencies": {
    "grunt": "^0.4.5",
    "grunt-contrib-copy": "^0.6.0",
    "grunt-node-webkit-builder": "^0.1.21"
  }
}
You'll also find Gruntfile.js, which takes care of downloading all of the node-webkit assets and building the app once we are ready to ship. Feel free to take a look at it, but it's mostly boilerplate code. Once you've set everything up, go back to the Terminal and install everything you need by typing:
npm install
grunt nodewebkitbuild
You may run into some issues when doing this on Mac or Linux. In that case, try using sudo npm install and sudo grunt nodewebkitbuild. npm install installs all of the dependencies you mentioned in package.json, both the regular dependencies and the development ones, like grunt and grunt-node-webkit-builder, which downloads the Windows and Mac versions of node-webkit, sets them up so they can play videos, and builds the app. Wait a bit for everything to install properly and we're ready to get started. Note that if you are using Windows, you might get a scary error related to Visual C++ when running npm install. Just ignore it.
Building the desktop app
All web apps (or websites for that matter) start with an index.html file. We are going to be creating just that to get our app to run:
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8" />
  <title>Youtube TV</title>
  <link href='http://fonts.googleapis.com/css?family=Roboto:500,400' rel='stylesheet' type='text/css' />
  <link href="css/normalize.css" rel="stylesheet" type="text/css" />
  <link href="css/styles.css" rel="stylesheet" type="text/css" />
</head>
<body>
  <div id="serverInfo"><h1>Youtube TV</h1></div>
  <div id="videoPlayer"></div>
  <script src="js/jquery-1.11.1.min.js"></script>
  <script src="js/youtube.js"></script>
  <script src="js/app.js"></script>
</body>
</html>
As you may have noticed, we are using three scripts for our app: jQuery (pretty well known at this point), a Youtube video player, and finally app.js, which contains our app's logic. Let's dive into that! First of all, we need to create the basic elements for our remote control. The easiest way of doing this is to create a basic web server and serve a small web app that can search YouTube, select a video, and have some play/pause controls so we don't have any good reasons to get up from the couch. Open js/app.js and type the following:
// Show the Developer Tools. And yes, Node-Webkit has developer tools built in!
// Uncomment it to open it automatically:
//require('nw.gui').Window.get().showDevTools();

// Express is a web server, which will allow us to create a small web app with which to control the player
var express = require('express');
var app = express();
var server = require('http').Server(app);
var io = require('socket.io')(server);

// We'll be opening up our web server on port 8080 (which doesn't require root privileges)
// You can access this server at http://127.0.0.1:8080
var serverPort = 8080;
server.listen(serverPort);

// All the static files (css, js, html) for the remote will be served using Express.
// These assets are in the /remote folder
app.use('/', express.static('remote'));
With those seven lines of code (not counting comments) we just got a neat web server working on port 8080. If you were paying attention to the code, you may have noticed that we required something called socket.io. This lets us use WebSockets with minimal effort, which means we can communicate with, from, and to our remote instantly. You can learn more about socket.io at http://socket.io/. Let's set that up next in app.js:
// Socket.io handles the communication between the remote and our app in real time,
// so we can instantly send commands from a computer to our remote and back
io.on('connection', function (socket) {

    // When a remote connects to the app, let it know immediately the current status of the video (play/pause)
    socket.emit('statusChange', Youtube.status);

    // This is what happens when we receive the watchVideo command (picking a video from the list)
    socket.on('watchVideo', function (video) {
        // video contains a bit of info about our video (id, title, thumbnail)
        // Order our Youtube Player to watch that video
        Youtube.watchVideo(video);
    });

    // These are playback controls. They receive the "play" and "pause" events from the remote
    socket.on('play', function () {
        Youtube.playVideo();
    });
    socket.on('pause', function () {
        Youtube.pauseVideo();
    });

});

// Notify all the remotes when the playback status changes (play/pause)
// This is done with io.emit, which sends the same message to all the remotes
Youtube.onStatusChange = function (status) {
    io.emit('statusChange', status);
};
That's the desktop part done! In a few dozen lines of code we got a web server running at http://127.0.0.1:8080 that can receive commands from a remote to watch a specific video, as well as handle some basic playback controls (play and pause). We are also notifying the remotes of the status of the player as soon as they connect, so they can update their UI with the correct buttons (if it's playing, show the pause button, and vice versa). Now we just need to build the remote.
Building the remote control
The server is just half of the equation. We also need to add the corresponding logic on the remote control, so it's able to communicate with our app.
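Before building the full remote UI, you can sanity-check the server with a tiny standalone client. This is a hedged sketch; it assumes the app above is running on port 8080 and that you install the socket.io-client package separately:
// npm install socket.io-client
var client = require('socket.io-client')('http://127.0.0.1:8080');

client.on('connect', function () {
    console.log('Connected to the desktop app');
    // Ask the app to play whatever video is currently loaded
    client.emit('play');
});

// The app pushes its playback status to every connected remote
client.on('statusChange', function (status) {
    console.log('Player status is now:', status);
});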
In remote/index.html, add the following HTML:

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8"/>
    <title>TV Remote</title>

    <meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1"/>

    <link rel="stylesheet" href="/css/normalize.css"/>
    <link rel="stylesheet" href="/css/styles.css"/>
</head>
<body>

    <div class="controls">
        <div class="search">
            <input id="searchQuery" type="search" value="" placeholder="Search on Youtube..."/>
        </div>
        <div class="playback">
            <button class="play">&gt;</button>
            <button class="pause">||</button>
        </div>
    </div>

    <div id="results" class="video-list">
    </div>

    <div class="__templates" style="display:none;">
        <article class="video">
            <figure><img src="" alt=""/></figure>

            <div class="info">
                <h2></h2>
            </div>
        </article>
    </div>

    <script src="/socket.io/socket.io.js"></script>
    <script src="/js/jquery-1.11.1.min.js"></script>

    <script src="/js/search.js"></script>
    <script src="/js/remote.js"></script>

</body>
</html>

Again, we have a few libraries: Socket.io is served automatically by our desktop app at /socket.io/socket.io.js, and it manages the communication with the server. jQuery is somehow always there, search.js manages the integration with the Youtube API (you can take a look if you want), and remote.js handles the logic for the remote.

The remote itself is pretty simple. It can look for videos on Youtube, and when we click on a video it connects with the app, telling it to play that video with socket.emit. Let's dive into remote/js/remote.js to make this thing work:

// First of all, connect to the server (our desktop app)
var socket = io.connect();

// Search Youtube when the user stops typing. This gives us an automatic search.
var searchTimeout = null;
$('#searchQuery').on('keyup', function (event) {
    clearTimeout(searchTimeout);
    searchTimeout = setTimeout(function () {
        searchYoutube($('#searchQuery').val());
    }, 500);
});

// When we click on a video, watch it on the app
$('#results').on('click', '.video', function (event) {
    // Send an event to notify the server we want to watch this video
    socket.emit('watchVideo', $(this).data());
});

// When the server tells us that the player changed status (play/pause), alter the playback controls
socket.on('statusChange', function (status) {
    if (status === 'play') {
        $('.playback .pause').show();
        $('.playback .play').hide();
    } else if (status === 'pause' || status === 'stop') {
        $('.playback .pause').hide();
        $('.playback .play').show();
    }
});

// Notify the app when we hit the play button
$('.playback .play').on('click', function (event) {
    socket.emit('play');
});

// Notify the app when we hit the pause button
$('.playback .pause').on('click', function (event) {
    socket.emit('pause');
});

This is very similar to our server, except we are using socket.emit a lot more often to send commands back to our desktop app, telling it which videos to play and handling our basic play/pause controls.

The only thing left to do is make the app run. Ready? Go to the terminal again and type:

If you are on a Mac: sh run.sh
If you are on Windows: run.bat

If everything worked properly, you should see the app running, and if you open a web browser and point it to http://127.0.0.1:8080, the remote client will come up. Search for a video, pick anything you like, and it'll play in the app. This also works if you point any other device on the same network to your computer's IP, which brings me to the next (and last) point.

Finishing touches

There is one small improvement we can make: print out the computer's IP to make it easier to connect to the app from any other device on the same Wi-Fi network (like a smartphone).
In js/app.js, add the following code to find out the IP and update our UI so it's the first thing we see when we open the app:

// Find the local IP
function getLocalIP(callback) {
    require('dns').lookup(require('os').hostname(), function (err, add, fam) {
        // Only call back if a callback function was actually provided
        if (typeof callback === 'function') {
            callback(add);
        }
    });
}

// To make things easier, find out the machine's IP and communicate it
getLocalIP(function (ip) {
    $('#serverInfo h1').html('Go to<br/><strong>http://' + ip + ':' + serverPort +
        '</strong><br/>to open the remote');
});

The next time you run the app, the first thing you'll see is your computer's IP, so you just need to type that URL into your smartphone to open the remote and control the player from any computer, tablet, or smartphone (as long as they are on the same Wi-Fi network).

That's it! You can start expanding on this to improve the app. Why not open the app fullscreen by default? Why not get rid of the horrible default frame and create your own? You can actually designate any div as a window handle with CSS (using -webkit-app-region: drag), so you can drag the window by that div and create your own custom title bar.

Summary

While the app has a lot of interlocking parts, it's a good first project to find out what you can achieve with node-webkit in just a few minutes. I hope you enjoyed this post!

About the author

Roberto González is the co-founder of Aerolab, "an awesome place where we really push the barriers to create amazing, well-coded designs for the best digital products". He can be reached at @robertcode.

Dealing with Upstream Proxies

Packt
27 Nov 2014
6 min read
This article is written by Akash Mahajan, the author of Burp Suite Essentials. We know that setting up Mozilla Firefox with the FoxyProxy Standard add-on to create a selective, pattern-based forwarding process allows us to ensure that only white-listed traffic from our browser reaches Burp. This is something that Burp allows us to set with its configuration options itself. Think of it like this: less traffic reaching Burp ensures that Burp is dealing with legitimate traffic and that its filters are working to keep us within our scope.

(For more resources related to this topic, see here.)

As a security professional testing web applications, scope is a term you hear and read about everywhere. Many times, we are expected to test only parts of an application, and usually, the scope is limited by domain, subdomain, folder name, and even certain filenames. Burp gives a nice, simple-to-use interface to add, edit, and remove targets from the scope.

Dealing with upstream proxies and SOCKS proxies

Sometimes, the application that we need to test lies inside some corporate network. The client gives access to a specific IP address that is white-listed in the corporate firewall. At other times, we work at the client's location, where an internal proxy is required to reach the staging site for testing. In all such cases and more, we need to be able to add an additional proxy that Burp can send data to before it reaches our target. In some cases, this proxy can be the one that the browser requires to reach the intranet or even the Internet. Since we would like to intercept all the browser traffic and Burp has become the proxy for the browser, we need to be able to chain proxies by configuring that same proxy in Burp.

Types of proxies supported by Burp

We can configure additional proxies by navigating to Options | Connections. If you look carefully, the upstream proxy rule editor resembles the FoxyProxy add-on proxy window. That is not surprising, as both of them operate with URL patterns. We can carefully add the target as the destination that will require a proxy to be reached. Most standard proxies that support authentication are supported in Burp. Of these, the NTLM flavors are regularly found in networks with a Microsoft Active Directory infrastructure. The usage is straightforward: add the destination and the other details, which should be provided to you by the network administrators.

Working with SOCKS proxies

SOCKS proxies are another common form of proxy in use. The most popular SOCKS-based proxy is Tor, which allows your entire browser traffic, including DNS lookups, to occur at the proxy end. Since the SOCKS proxy protocol works by taking all the traffic through it, the destination server can see the IP address of the SOCKS proxy. You can give this a whirl by running the Tor browser bundle from http://www.torproject.org/projects/torbrowser.html.en. Once the Tor browser bundle is running successfully, just add the following values in the SOCKS proxy settings of Burp. Make sure you check Use SOCKS proxy after adding the correct values. Have a look at the following screenshot:

Using SSH tunneling as a SOCKS proxy

Using SSH tunneling as a SOCKS proxy is quite useful when we want to give a white-listed IP address to a firewall administrator to access an application. So, the scenario here requires you to have access to a GNU/Linux server with a static IP address, which you can connect to using Secure Shell Server (SSH).
In the Mac OS X and GNU/Linux shell, the following command will start a local SOCKS proxy (replace user@your.server.com with your own login and server address):

ssh -D 12345 user@your.server.com

Once you are successfully logged in to your server, leave it running so that Burp can keep using it. Now add localhost as the SOCKS proxy host and 12345 as the SOCKS proxy port, and you are good to go.

In Windows, if we use a command-line SSH client that comes with GNU, the process remains the same. Otherwise, if you are a PuTTY fan, let's see how we can configure the same thing in it. In PuTTY, follow these steps to get the SSH tunnel working, which will be our SOCKS proxy:

1. Start PuTTY and click on SSH and then on Tunnels.
2. Here, add a newly forwarded port. Give it the value of 12345.
3. Under Destination, there is a bunch of radio buttons; choose Auto and Dynamic, and then click on the Add button.
4. Once this is set, connect to the server.

Add the values localhost and 12345 in the Host and Port fields, respectively, in the Burp options for the SOCKS proxy. You can verify that your traffic is going through the SOCKS proxy by visiting any site that gives you your external IP address. I personally use my own web page for that, http://akashm.com/ip.php; you might want to try http://icanhazip.com or http://whatismyip.com.

Burp allows maximum connectivity with upstream and SOCKS proxies to make our job easier. By adding URL patterns, we can choose which upstream proxy each request is sent through. SOCKS proxies, due to their nature, take all the traffic and send it to another computer, so we can't choose which URLs to use them for. But this allows a simple workflow to test applications that are behind corporate firewalls and need to white-list our static IP before allowing access.

Setting up Burp to be a proxy server for other devices

So far, we have run Burp on our computer. This is good enough when we want to intercept the traffic of browsers running on our computer. But what if we would like to intercept traffic from our television, or from our iOS or Android devices? Currently, in the default configuration, Burp has started one listener on an internal interface on port number 8080. We can start multiple listeners on different ports and interfaces. We can do this in the Options subtab under the Proxy tab. Note that this is different from the main Options tab. We can add more than one proxy listener at the same time by following these steps:

1. Click on the Add button under Proxy Listeners.
2. Enter a port number. It can be the same 8080, but if that confuses you, you can give the number 8081.
3. Specify an interface and choose your LAN IP address.
4. Once you click on Ok, click on Running, and now you have started an external listener for Burp.

You can add the LAN IP address and the port number you chose as the proxy server on your mobile device, and all HTTP traffic will get intercepted by Burp. Have a look at the following screenshot:

Summary

In this article, you learned how to use the SOCKS proxy server, especially in an SSH tunnel kind of scenario. You also learned how simple it is to create multiple listeners for Burp, which allows other devices in the network to send their HTTP traffic to the Burp interception proxy.

Resources for Article:

Further resources on this subject:
Quick start – Using Burp Proxy [article]
Nginx proxy module [article]
Using Nginx as a Reverse Proxy [article]

Deployment and Post Deployment

Packt
17 Nov 2014
30 min read
In this article by Shalabh Aggarwal, the author of Flask Framework Cookbook, we will talk about various application-deployment techniques, followed by some monitoring tools that are used post-deployment.

(For more resources related to this topic, see here.)

Deployment of an application and managing the application post-deployment is as important as developing it. There can be various ways of deploying an application, where choosing the best way depends on the requirements. Deploying an application correctly is very important from the points of view of security and performance. There are multiple ways of monitoring an application after deployment, of which some are paid and others are free to use. Using them again depends on requirements and the features offered by them. Each of the tools and techniques has its own set of features. For example, adding too much monitoring to an application can prove to be an extra overhead for the application and the developers as well. Similarly, missing out on monitoring can lead to undetected user errors and overall user dissatisfaction. Hence, we should choose the tools wisely, and they will ease our lives to the maximum. Among the post-deployment monitoring tools, we will discuss Pingdom and New Relic. Sentry is another tool that will prove to be the most beneficial of all from a developer's perspective.

Deploying with Apache

First, we will learn how to deploy a Flask application with Apache, which is, unarguably, the most popular HTTP server. For Python web applications, we will use mod_wsgi, which implements a simple Apache module that can host any Python application that supports the WSGI interface. Remember that mod_wsgi is not the same as Apache and needs to be installed separately.

Getting ready

We will start with our catalog application and make appropriate changes to it to make it deployable using the Apache HTTP server. First, we should make our application installable so that our application and all its libraries are on the Python load path. This can be done using a setup.py script. There will be a few changes to the script as per this application. The major changes are mentioned here:

packages=[
    'my_app',
    'my_app.catalog',
],
include_package_data=True,
zip_safe=False,

First, we mentioned all the packages that need to be installed as part of our application. Each of these needs to have an __init__.py file. The zip_safe flag tells the installer to not install this application as a ZIP file. The include_package_data statement reads from a MANIFEST.in file in the same folder and includes any package data mentioned there. Our MANIFEST.in file looks like:

recursive-include my_app/templates *
recursive-include my_app/static *
recursive-include my_app/translations *

Now, just install the application using the following command:

$ python setup.py install

Installing mod_wsgi is usually OS-specific. Installing it on a Debian-based distribution should be as easy as just using the packaging tool, that is, apt or aptitude. For details, refer to https://code.google.com/p/modwsgi/wiki/InstallationInstructions and https://github.com/GrahamDumpleton/mod_wsgi.
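For reference, a complete setup.py assembled around those changes might look something like the following sketch; the name, version, and install_requires values are placeholders to adapt to your project, not part of the original recipe:

from setuptools import setup

setup(
    name='flask-catalog',    # placeholder project name
    version='0.1',           # placeholder version
    # All packages that make up the application; each needs an __init__.py
    packages=[
        'my_app',
        'my_app.catalog',
    ],
    # Pull in the templates, static files, and translations listed in MANIFEST.in
    include_package_data=True,
    # Tell the installer not to install the application as a zipped egg
    zip_safe=False,
    # Placeholder; list your application's real dependencies here
    install_requires=[
        'Flask',
    ],
)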
How to do it…

We need to create some more files, the first one being app.wsgi. This loads our application as a WSGI application:

activate_this = '<Path to virtualenv>/bin/activate_this.py'
execfile(activate_this, dict(__file__=activate_this))

from my_app import app as application
import sys, logging
logging.basicConfig(stream=sys.stderr)

As we perform all our installations inside virtualenv, we need to activate the environment before our application is loaded. In the case of system-wide installations, the first two statements are not needed. Then, we need to import our app object as application, which is used as the application being served. The last two lines are optional, as they just stream the output to the standard logger, which is disabled by mod_wsgi by default. The app object needs to be imported as application, because mod_wsgi expects the application keyword.

Next comes a config file that will be used by the Apache HTTP server to serve our application correctly from specific locations. The file is named apache_wsgi.conf:

<VirtualHost *>

    WSGIScriptAlias / <Path to application>/flask_catalog_deployment/app.wsgi

    <Directory <Path to application>/flask_catalog_deployment>
        Order allow,deny
        Allow from all
    </Directory>

</VirtualHost>

The preceding code is the Apache configuration, which tells the HTTP server about the various directories where the application has to be loaded from. The final step is to add the apache_wsgi.conf file to apache2/httpd.conf so that our application is loaded when the server runs:

Include <Path to application>/flask_catalog_deployment/apache_wsgi.conf

How it works…

Let's restart the Apache server service using the following command:

$ sudo apachectl restart

Open up http://127.0.0.1/ in the browser to see the application's home page. Any errors coming up can be seen at /var/log/apache2/error_log (this path can differ depending on the OS).

There's more…

After all this, it is possible that the product images uploaded as part of the product creation do not work. For this, we should make a small modification to our application's configuration:

app.config['UPLOAD_FOLDER'] = '<Some static absolute path>/flask_test_uploads'

We opted for a static path because we do not want it to change every time the application is modified or installed. Now, we will include the path chosen in the preceding code in apache_wsgi.conf:

Alias /static/uploads/ "<Some static absolute path>/flask_test_uploads/"
<Directory "<Some static absolute path>/flask_test_uploads">
    Order allow,deny
    Options Indexes
    Allow from all
    IndexOptions FancyIndexing
</Directory>

After this, install the application and restart apachectl.

See also

http://httpd.apache.org/
https://code.google.com/p/modwsgi/
http://wsgi.readthedocs.org/en/latest/
https://pythonhosted.org/setuptools/setuptools.html#setting-the-zip-safe-flag
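A small portability note on app.wsgi before moving on: execfile exists only in Python 2. Under a Python 3 build of mod_wsgi, the same activation can be written with exec, as in this sketch (it assumes the environment was created with the virtualenv tool, which ships bin/activate_this.py; the standard library's venv module does not provide this file):

# app.wsgi, Python 3 variant (a sketch, not part of the original recipe)
activate_this = '<Path to virtualenv>/bin/activate_this.py'
with open(activate_this) as f:
    exec(f.read(), dict(__file__=activate_this))

from my_app import app as application
import sys, logging
logging.basicConfig(stream=sys.stderr)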
Deploying with uWSGI and Nginx

For those who are already aware of the usefulness of uWSGI and Nginx, there is not much that can be explained. uWSGI is a protocol as well as an application server and provides a complete stack to build hosting services. Nginx is a reverse proxy and HTTP server that is very lightweight and capable of handling virtually unlimited requests. Nginx works seamlessly with uWSGI and provides many under-the-hood optimizations for better performance.

Getting ready

We will use our application from the last recipe, Deploying with Apache, and use the same app.wsgi, setup.py, and MANIFEST.in files. Also, other changes made to the application's configuration in the last recipe will apply to this recipe as well. Disable any other HTTP servers that might be running, such as Apache and so on.

How to do it…

First, we need to install uWSGI and Nginx. On Debian-based distributions such as Ubuntu, they can be easily installed using the following commands:

# sudo apt-get install nginx
# sudo apt-get install uwsgi

You can also install uWSGI inside a virtualenv using the pip install uwsgi command. Again, these are OS-specific, so refer to the respective documentation as per the OS used.

Make sure that you have an apps-enabled folder for uWSGI, where we will keep our application-specific uWSGI configuration files, and a sites-enabled folder for Nginx, where we will keep our site-specific configuration files. Usually, these are already present in most installations in the /etc/ folder. If not, refer to the OS-specific documentation to figure out the same.

Next, we will create a file named uwsgi.ini in our application:

[uwsgi]
http-socket = :9090
plugin = python
wsgi-file = <Path to application>/flask_catalog_deployment/app.wsgi
processes = 3

To test whether uWSGI is working as expected, run the following command:

$ uwsgi --ini uwsgi.ini

The preceding file and command are equivalent to running the following command:

$ uwsgi --http-socket :9090 --plugin python --wsgi-file app.wsgi

Now, point your browser to http://127.0.0.1:9090/; this should open up the home page of the application.

Create a soft link of this file to the apps-enabled folder mentioned earlier using the following command:

$ ln -s <path/to/uwsgi.ini> <path/to/apps-enabled>

Before moving ahead, edit the preceding file to replace http-socket with socket. This changes the protocol from HTTP to uWSGI (read more about it at http://uwsgi-docs.readthedocs.org/en/latest/Protocol.html). Now, create a new file called nginx-wsgi.conf. This contains the Nginx configuration needed to serve our application and the static content:

location / {
    include uwsgi_params;
    uwsgi_pass 127.0.0.1:9090;
}
location /static/uploads/ {
    alias <Some static absolute path>/flask_test_uploads/;
}

In the preceding code block, uwsgi_pass specifies the uWSGI server that needs to be mapped to the specified location. Create a soft link of this file to the sites-enabled folder mentioned earlier using the following command:

$ ln -s <path/to/nginx-wsgi.conf> <path/to/sites-enabled>

Edit the nginx.conf file (usually found at /etc/nginx/nginx.conf) to add the following line inside the first server block before the last }:

include <path/to/sites-enabled>/*;

After all of this, reload the Nginx server using the following command:

$ sudo nginx -s reload

Point your browser to http://127.0.0.1/ to see the application that is served via Nginx and uWSGI.

The preceding instructions can vary depending on the OS being used, and different versions of the same OS can also impact the paths and commands used. Different versions of these packages can also have some variations in usage. Refer to the documentation links provided in the next section.

See also

Refer to http://uwsgi-docs.readthedocs.org/en/latest/ for more information on uWSGI.
Refer to http://nginx.com/ for more information on Nginx.
There is a good article by DigitalOcean on this. I advise you to go through it to have a better understanding of the topic. It is available at https://www.digitalocean.com/community/tutorials/how-to-deploy-python-wsgi-applications-using-uwsgi-web-server-with-nginx.
To get an insight into the difference between Apache and Nginx, I think the article by Anturis at https://anturis.com/blog/nginx-vs-apache/ is pretty good.

Deploying with Gunicorn and Supervisor

Gunicorn is a WSGI HTTP server for Unix. It is very simple to implement, ultra light, and fairly speedy. Its simplicity lies in its broad compatibility with various web frameworks. Supervisor is a monitoring tool that controls various child processes and handles the starting/restarting of these child processes when they exit abruptly for some reason. It can be extended to control the processes via the XML-RPC API over remote locations without logging in to the server (we won't discuss this here as it is out of the scope of this book). One thing to remember is that these tools can be used along with the other tools mentioned in the previous recipes, such as using Nginx as a proxy server. This is left to you to try on your own.

Getting ready

We will start with the installation of both packages, that is, gunicorn and supervisor. Both can be directly installed using pip:

$ pip install gunicorn
$ pip install supervisor

How to do it…

To check whether the gunicorn package works as expected, just run the following command from inside our application folder:

$ gunicorn -w 4 -b 127.0.0.1:8000 my_app:app

After this, point your browser to http://127.0.0.1:8000/ to see the application's home page.

Now, we need to do the same using Supervisor so that this runs as a daemon and is controlled by Supervisor itself rather than by human intervention. First of all, we need a Supervisor configuration file. This can be achieved by running the following command from virtualenv. Supervisor, by default, looks for an etc folder that has a file named supervisord.conf. In system-wide installations, this folder is /etc/; in virtualenv, it will look for an etc folder in virtualenv and then fall back to /etc/:

$ echo_supervisord_conf > etc/supervisord.conf

The echo_supervisord_conf program is provided by Supervisor; it prints a sample config file to the location specified. This command will create a file named supervisord.conf in the etc folder. Add the following block to this file:

[program:flask_catalog]
command=<path/to/virtualenv>/bin/gunicorn -w 4 -b 127.0.0.1:8000 my_app:app
directory=<path/to/virtualenv>/flask_catalog_deployment
user=someuser ; Relevant user
autostart=true
autorestart=true
stdout_logfile=/tmp/app.log
stderr_logfile=/tmp/error.log

Note that one should never run applications as a root user. Doing so is a huge security flaw in itself, as a compromised or crashing application can harm the OS itself.

How it works…

Now, run the following commands:

$ supervisord
$ supervisorctl status
flask_catalog   RUNNING   pid 40466, uptime 0:00:03

The first command invokes the supervisord server, and the next one gives the status of all the child processes.

The tools discussed in this recipe can be coupled with Nginx to serve as a reverse proxy server. I suggest that you try it by yourself. Every time you make a change to your application and then wish to restart Gunicorn in order for it to reflect the changes, run the following command:

$ supervisorctl restart all

You can also restart specific processes instead of restarting everything:

$ supervisorctl restart flask_catalog
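As a side note, Gunicorn can also read its settings from a Python file passed via the -c flag, which keeps the Supervisor command line short. A minimal sketch follows (the filename gunicorn_conf.py is hypothetical; bind, workers, and errorlog are standard Gunicorn setting names mirroring the flags used above):

# gunicorn_conf.py: equivalent to "gunicorn -w 4 -b 127.0.0.1:8000"
bind = '127.0.0.1:8000'
workers = 4
errorlog = '/tmp/gunicorn_error.log'

The command in the [program:flask_catalog] block then shrinks to <path/to/virtualenv>/bin/gunicorn -c gunicorn_conf.py my_app:app.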
See also

http://gunicorn-docs.readthedocs.org/en/latest/index.html
http://supervisord.org/index.html

Deploying with Tornado

Tornado is a complete web framework and a standalone web server in itself. Here, we will use Flask to create our application, which is basically a combination of URL routing and templating, and leave the server part to Tornado. Tornado is built to hold thousands of simultaneous standing connections and makes applications very scalable.

Tornado has limitations while working with WSGI applications. So, choose wisely! Read more at http://www.tornadoweb.org/en/stable/wsgi.html#running-wsgi-apps-on-tornado-servers.

Getting ready

Installing Tornado can simply be done using pip:

$ pip install tornado

How to do it…

Next, create a file named tornado_server.py and put the following code in it:

from tornado.wsgi import WSGIContainer
from tornado.httpserver import HTTPServer
from tornado.ioloop import IOLoop
from my_app import app

http_server = HTTPServer(WSGIContainer(app))
http_server.listen(5000)
IOLoop.instance().start()

Here, we created a WSGI container for our application; this container is then used to create an HTTP server, and the application is hosted on port 5000.

How it works…

Run the Python file created in the previous section using the following command:

$ python tornado_server.py

Point your browser to http://127.0.0.1:5000/ to see the home page being served.

We can couple Tornado with Nginx (as a reverse proxy to serve static content) and Supervisor (as a process manager) for the best results. It is left for you to try this on your own.

Using Fabric for deployment

Fabric is a command-line tool in Python; it streamlines the use of SSH for application deployment or system-administration tasks. As it allows the execution of shell commands on remote servers, the overall process of deployment is simplified, as the whole process can now be condensed into a Python file, which can be run whenever needed. Therefore, it saves the pain of logging in to the server and manually running commands every time an update has to be made.

Getting ready

Installing Fabric can simply be done using pip:

$ pip install fabric

We will use the application from the Deploying with Gunicorn and Supervisor recipe. We will create a Fabric file to perform the same process on the remote server. For simplicity, let's assume that the remote server has already been set up, that all the required packages have been installed, and that a virtualenv environment has also been created.

How to do it…

First, we need to create a file called fabfile.py in our application, preferably at the application's root directory, that is, along with the setup.py and run.py files. Fabric, by default, expects this filename. If we use a different filename, then it will have to be explicitly specified while executing.

A basic Fabric file will look like:

from fabric.api import sudo, cd, prefix, run

def deploy_app():
    "Deploy to the server specified"
    root_path = '/usr/local/my_env'

    with cd(root_path):
        with prefix("source %s/bin/activate" % root_path):
            with cd('flask_catalog_deployment'):
                run('git pull')
                run('python setup.py install')

            sudo('bin/supervisorctl restart all')

Here, we first moved into our virtualenv, activated it, and then moved into our application. Then, the code is pulled from the Git repository, and the updated application code is installed using setup.py install. After this, we restarted the supervisor processes so that the updated application is now served by the server.

Most of the commands used here are self-explanatory, except prefix, which wraps all the succeeding commands in its block with the command provided. This means that the command to activate virtualenv will run first, and then all the commands in the with block will execute with virtualenv activated. The virtualenv will be deactivated as soon as control goes out of the with block.
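Because a fabfile is ordinary Python, small maintenance tasks can live alongside the deploy task. For instance, the following sketch (the task name tail_logs is hypothetical) reuses the log path from the Supervisor configuration earlier to inspect the application log after a deploy; it is invoked the same way as deploy_app, which is shown next:

from fabric.api import run

def tail_logs():
    "Show the most recent application log lines from the server"
    # /tmp/app.log matches stdout_logfile in the Supervisor config
    run('tail -n 50 /tmp/app.log')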
How it works…

To run this file, we need to provide the remote server where the script will be executed. So, the command will look something like:

$ fab -H my.remote.server deploy_app

Here, we specified the address of the remote host where we wish to deploy and the name of the method to be called from the fab script.

There's more…

We can also specify the remote host inside our fab script; this can be a good idea if the deployment server remains the same most of the time. To do this, add the following code to the fab script:

from fabric.api import settings

def deploy_app_to_server():
    "Deploy to the server hardcoded"
    with settings(host_string='my.remote.server'):
        deploy_app()

Here, we have hardcoded the host and then called the method we created earlier to start the deployment process.

S3 storage for file uploads

Amazon explains S3 as storage for the Internet that is designed to make web-scale computing easier for developers. S3 provides a very simple interface via web services; this makes storage and retrieval of any amount of data very simple at any time from anywhere on the Internet. Until now, in our catalog application, we saw that there were issues in managing the product images uploaded as a part of the creation process. The whole headache will go away if the images are stored somewhere globally and are easily accessible from anywhere. We will use S3 for this purpose.

Getting ready

Amazon offers boto, a complete Python library that interfaces with Amazon Web Services via web services. Almost all of the AWS features can be controlled using boto. It can be installed using pip:

$ pip install boto

How to do it…

Now, we should make some changes to our existing catalog application to accommodate support for file uploads and retrieval from S3.

First, we need to store the AWS-specific configuration to allow boto to make calls to S3. Add the following statements to the application's configuration file, that is, my_app/__init__.py:

app.config['AWS_ACCESS_KEY'] = 'Amazon Access Key'
app.config['AWS_SECRET_KEY'] = 'Amazon Secret Key'
app.config['AWS_BUCKET'] = 'flask-cookbook'

Next, we need to change our views.py file:

from boto.s3.connection import S3Connection

This is the import that we need from boto. Next, replace the following two lines in create_product():

filename = secure_filename(image.filename)
image.save(os.path.join(app.config['UPLOAD_FOLDER'], filename))

Replace these two lines with:

filename = image.filename
conn = S3Connection(
    app.config['AWS_ACCESS_KEY'], app.config['AWS_SECRET_KEY']
)
bucket = conn.create_bucket(app.config['AWS_BUCKET'])
key = bucket.new_key(filename)
key.set_contents_from_file(image)
key.make_public()
key.set_metadata(
    'Content-Type', 'image/' + filename.split('.')[-1].lower()
)

The last change will go to our product.html template, where we need to change the image src path. Replace the original img src statement with the following statement:

<img src="{{ 'https://s3.amazonaws.com/' + config['AWS_BUCKET'] + '/' + product.image_path }}"/>

How it works…

Now, run the application as usual and create a product. When the created product is rendered, the product image will take a bit of time to come up, as it is now being served from S3 (and not from a local machine). If this happens, then the integration with S3 has been done successfully.
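As create_product() grows, the boto calls can be factored into a small helper. The following sketch (the function name upload_image_to_s3 is hypothetical) wraps exactly the calls shown above and returns the filename to store on the product:

from boto.s3.connection import S3Connection

def upload_image_to_s3(image, app):
    """Upload an uploaded file object to the configured S3 bucket and make it public."""
    conn = S3Connection(
        app.config['AWS_ACCESS_KEY'], app.config['AWS_SECRET_KEY']
    )
    bucket = conn.create_bucket(app.config['AWS_BUCKET'])
    key = bucket.new_key(image.filename)
    key.set_contents_from_file(image)
    key.make_public()
    key.set_metadata(
        'Content-Type', 'image/' + image.filename.split('.')[-1].lower()
    )
    return image.filename

With this in place, the view body reduces to filename = upload_image_to_s3(image, app).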
Deploying with Heroku

Heroku is a cloud application platform that provides an easy and quick way to build and deploy web applications. Heroku manages the servers, deployment, and related operations, while developers spend their time on developing applications. Deploying with Heroku is pretty simple with the help of the Heroku toolbelt, which is a bundle of tools that make deployment with Heroku a cakewalk.

Getting ready

We will proceed with the application from the previous recipe, which has S3 support for uploads.

As mentioned earlier, the first step will be to download the Heroku toolbelt, which can be downloaded as per the OS from https://toolbelt.heroku.com/. Once the toolbelt is installed, a certain set of commands will be available in the terminal; we will see them later in this recipe. It is advised that you perform Heroku deployment from a fresh virtualenv where only the packages required by our application are installed and nothing else. This will make the deployment process faster and easier.

Now, run the following command to log in to your Heroku account and sync your machine's SSH key with the server:

$ heroku login
Enter your Heroku credentials.
Email: <your e-mail address>
Password (typing will be hidden):
Authentication successful.

You will be prompted to create a new SSH key if one does not exist. Proceed accordingly.

Remember! Before all this, you need to have a Heroku account, available at https://www.heroku.com/.

How to do it…

Now, we already have an application that needs to be deployed to Heroku. First, Heroku needs to know the command that it needs to run while deploying the application. This is done in a file named Procfile:

web: gunicorn -w 4 my_app:app

Here, we tell Heroku to run this command to run our web application.

There are a lot of other configurations and commands that can go into Procfile. For more details, read the Heroku documentation.

Heroku needs to know the dependencies that need to be installed in order to successfully install and run our application. This is done via the requirements.txt file:

Flask==0.10.1
Flask-Restless==0.14.0
Flask-SQLAlchemy==1.0
Flask-WTF==0.10.0
Jinja2==2.7.3
MarkupSafe==0.23
SQLAlchemy==0.9.7
WTForms==2.0.1
Werkzeug==0.9.6
boto==2.32.1
gunicorn==19.1.1
itsdangerous==0.24
mimerender==0.5.4
python-dateutil==2.2
python-geoip==1.2
python-geoip-geolite2==2014.0207
python-mimeparse==0.1.4
six==1.7.3
wsgiref==0.1.2

This file contains all the dependencies of our application, the dependencies of these dependencies, and so on. An easy way to generate this file is to use the pip freeze command:

$ pip freeze > requirements.txt

This will create/update the requirements.txt file with all the packages installed in virtualenv.

Now, we need to create a Git repo of our application. For this, we will run the following commands:

$ git init
$ git add .
$ git commit -m "First Commit"

Now, we have a Git repo with all our files added. Make sure that you have a .gitignore file in your repo, or at a global level, to prevent temporary files such as .pyc from being added to the repo.

Now, we need to create a Heroku application and push our application to Heroku:

$ heroku create
Creating damp-tor-6795...
done, stack is cedar
http://damp-tor-6795.herokuapp.com/ | git@heroku.com:damp-tor-6795.git
Git remote heroku added

$ git push heroku master

After the last command, a whole lot of output will get printed in the terminal; this indicates all the packages being installed and, finally, the application being launched.

How it works…

After the preceding commands have successfully finished, just open up the URL provided by Heroku at the end of the deployment in a browser, or run the following command:

$ heroku open

This will open up the application's home page. Try creating a new product with an image and see the image being served from Amazon S3.

To see the logs of the application, run the following command:

$ heroku logs

There's more…

There is a glitch with the deployment we just did. Every time we update the deployment via the git push command, the SQLite database gets overwritten. The solution to this is to use the Postgres setup provided by Heroku itself. I urge you to try this by yourself.
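As a starting point, once the Heroku Postgres add-on is attached, Heroku exposes the database location in the DATABASE_URL environment variable. A sketch of the configuration change, assuming Flask-SQLAlchemy (which is already in our requirements.txt); the SQLite fallback URI is a placeholder for local development:

import os

# Use Heroku's Postgres when available; fall back to local SQLite otherwise
app.config['SQLALCHEMY_DATABASE_URI'] = os.environ.get(
    'DATABASE_URL', 'sqlite:///catalog.db'  # placeholder local URI
)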
Deploying with AWS Elastic Beanstalk

In the last recipe, we saw how deployment to servers becomes easy with Heroku. Similarly, Amazon has a service named Elastic Beanstalk, which allows developers to deploy their applications to Amazon EC2 instances as easily as possible. With just a few configuration options, a Flask application can be deployed to AWS using Elastic Beanstalk in a couple of minutes.

Getting ready

We will start with our catalog application from the previous recipe, Deploying with Heroku. The only file that remains the same from that recipe is requirements.txt. The rest of the files that were added as a part of that recipe can be ignored or discarded for this recipe.

Now, the first thing that we need to do is download the AWS Elastic Beanstalk command-line tool library from the Amazon website (http://aws.amazon.com/code/6752709412171743). This will download a ZIP file that needs to be unzipped and placed in a suitable place, preferably your workspace home. The path of this tool should be added to the PATH environment variable so that the commands are available throughout. This can be done via the export command as shown:

$ export PATH=$PATH:<path to unzipped EB CLI package>/eb/linux/python2.7/

This can also be added to the ~/.profile or ~/.bash_profile file using:

export PATH=$PATH:<path to unzipped EB CLI package>/eb/linux/python2.7/

How to do it…

There are a few conventions that need to be followed in order to deploy using Beanstalk. Beanstalk assumes that there will be a file called application.py, which contains the application object (in our case, the app object). Beanstalk treats this file as the WSGI file, and this is used for deployment.

In the Deploying with Apache recipe, we had a file named app.wsgi where we imported our app object as application, because apache/mod_wsgi needed it to be so. The same thing happens here too, because Amazon, by default, deploys using Apache behind the scenes.

The contents of this application.py file can be just a few lines, as shown here:

from my_app import app as application
import sys, logging
logging.basicConfig(stream=sys.stderr)

Now, create a Git repo in the application and commit with all the files added:

$ git init
$ git add .
$ git commit -m "First Commit"

Make sure that you have a .gitignore file in your repo, or at a global level, to prevent temporary files such as .pyc from being added to the repo.

Now, we need to deploy to Elastic Beanstalk. Run the following command to do this:

$ eb init

The preceding command initializes the process for the configuration of your Elastic Beanstalk instance. It will ask for the AWS credentials, followed by a lot of other configuration options needed for the creation of the EC2 instance, which can be selected as needed. For more help on these options, refer to http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_Python_flask.html.

After this is done, run the following command to trigger the creation of the servers, followed by the deployment of the application:

$ eb start

Behind the scenes, the preceding command creates the EC2 instance (a volume), assigns an elastic IP, and then runs the following command to push our application to the newly created server for deployment:

$ git aws.push

This will take a few minutes to complete. When done, you can check the status of your application using the following command:

$ eb status --verbose

Whenever you need to update your application, just commit your changes using git and push the application as follows:

$ git aws.push

How it works…

When the deployment process finishes, it gives out the application URL. Point your browser to it to see the application being served.

Yet, you will find a small glitch with the application. The static content, that is, the CSS and JS code, is not being served. This is because the static path is not correctly comprehended by Beanstalk. This can be fixed simply by modifying the application's configuration on your application's monitoring/configuration page in the AWS management console. See the following screenshots to understand this better:

Click on the Configuration menu item in the left-hand side menu.

Notice the highlighted box in the preceding screenshot. This is what we need to change as per our application. Open Software Settings.

Change the virtual path for /static/, as shown in the preceding screenshot.

After this change is made, the environment created by Elastic Beanstalk will be updated automatically, although it will take a bit of time. When done, check the application again to see the static content also being served correctly.

Application monitoring with Pingdom

Pingdom is a website-monitoring tool that has the USP of notifying you as soon as your website goes down. The basic idea behind this tool is to constantly ping the website at a specific interval, say, 30 seconds. If a ping fails, it will notify you via an e-mail, SMS, tweet, or push notification to its mobile apps, informing you that your site is down. It will keep on pinging at a faster rate until the site is back up again. There are other monitoring features too, but we will limit ourselves to uptime checks in this book.

Getting ready

As Pingdom is a SaaS service, the first step will be to sign up for an account. Pingdom offers a free trial of 1 month in case you just want to try it out. The website for the service is https://www.pingdom.com.

We will use the application deployed to AWS in the Deploying with AWS Elastic Beanstalk recipe to check for uptime. Here, Pingdom will send an e-mail in case the application goes down and will send an e-mail again when it is back up.

How to do it…

After successful registration, create a check for uptime. Have a look at the following screenshot:

As you can see, I have already added a check for the AWS instance. To create a new check, click on the ADD NEW button. Fill in the details asked for by the form that comes up.
How it works…

After the check is successfully created, try to break the application by consciously making a mistake somewhere in the code and then deploying to AWS. As soon as the faulty application is deployed, you will get an e-mail notifying you of this. This e-mail will look like:

Once the application is fixed and put back up again, the next e-mail should look like:

You can also check how long the application has been up, as well as the downtime instances, from the Pingdom administration panel.

Application performance management and monitoring with New Relic

New Relic is an analytics software that provides near real-time operational and business analytics related to your application. It provides deep analytics on the behavior of the application from various aspects. It does the job of a profiler while also eliminating the need to maintain extra moving parts in the application. It actually works in a scenario where our application sends data to New Relic rather than New Relic asking for statistics from our application.

Getting ready

We will use the application from the last recipe, which is deployed to AWS.

The first step will be to sign up with New Relic for an account. Follow the simple signup process, and upon completion and e-mail verification, it will lead to your dashboard. Here, you will have your license key available, which we will use later to connect our application to this account. The dashboard should look like the following screenshot:

Here, click on the large button named Reveal your license key.

How to do it…

Once we have the license key, we need to install the newrelic Python library:

$ pip install newrelic

Now, we need to generate a file called newrelic.ini, which will contain details regarding the license key, the name of our application, and so on. This can be done using the following command:

$ newrelic-admin generate-config LICENSE-KEY newrelic.ini

In the preceding command, replace LICENSE-KEY with the actual license key of your account. Now, we have a new file called newrelic.ini. Open and edit the file to set the application name and anything else as needed.

To check whether the newrelic.ini file is working successfully, run the following command:

$ newrelic-admin validate-config newrelic.ini

This will tell us whether the validation was successful or not. If not, then check the license key and its validity.

Now, add the following lines at the top of the application's configuration file, that is, my_app/__init__.py in our case. Make sure that you add these lines before anything else is imported:

import newrelic.agent
newrelic.agent.initialize('newrelic.ini')

Now, we need to update the requirements.txt file. So, run the following command:

$ pip freeze > requirements.txt

After this, commit the changes and deploy the application to AWS using the following command:

$ git aws.push

How it works…

Once the application is successfully updated on AWS, it will start sending statistics to New Relic, and the dashboard will have a new application added to it. Open the application-specific page, and a whole lot of statistics will come up. It will also show which calls have taken the most time and how the application is performing. You will also see multiple tabs that correspond to different types of monitoring, covering all the aspects.
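If editing my_app/__init__.py is not convenient for a particular deployment, the agent can instead wrap the WSGI entry point. The following is a sketch for the Beanstalk application.py, using newrelic.agent.wsgi_application from the same package; treat the exact wrapping call as an assumption to verify against the New Relic documentation for your agent version:

import newrelic.agent
newrelic.agent.initialize('newrelic.ini')

from my_app import app

# Wrap the Flask WSGI callable so that the agent instruments each request
application = newrelic.agent.wsgi_application()(app)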
Summary

In this article, we have seen the various techniques used to deploy and monitor Flask applications.

Resources for Article:

Further resources on this subject:
Understanding the Python regex engine [Article]
Exploring Model View Controller [Article]
Plotting Charts with Images and Maps [Article]