
How-To Tutorials - Web Development

1797 Articles

Frontend SOA: Taming the beast of frontend web development

Wesley Cho
08 May 2015
6 min read
Frontend web development is a difficult domain for creating scalable applications. There are many challenges when it comes to architecture, such as how best to organize HTML, CSS, and JavaScript files, or how to create build tooling that allows an optimal development and production environment. In addition, complexity has increased measurably: templating and routing have become the concern of frontend web engineers as a result of the push towards single-page applications (SPAs). A wealth of frameworks can be found listed on todomvc.com. AngularJS is one that rose to prominence almost two years ago on the back of declarative HTML, strong testability, and two-way data binding, but even now it is seeing some churn, due both to Angular 2.0 breaking backwards compatibility completely and to the rise of React, Facebook's new view layer, which brings the idea of a virtual DOM for performance optimization not previously seen in frontend web architecture. Angular 2.0 itself is also looking like a juggernaut, with decoupled components that hew closer to pure JavaScript, and it already boasts performance gains of roughly 5x compared to Angular 1.x.

With this much churn, frontend web apps have become difficult to architect for the long term. This requires us to take a step back and think about the direction of browsers.

The future of browsers

We know that ECMAScript 6 (ES6) is already making headway into browsers. ES6 greatly changes how JavaScript is structured, with a proper module system, and adds a lot of syntactic sugar. Web Components are also going to change how we build our views. Instead of:

```css
.home-view { ... }
```

we will be writing:

```html
<template id="home-view">
  <style> … </style>
  <my-navbar></my-navbar>
  <my-content></my-content>
  <script> … </script>
</template>

<home-view></home-view>

<script>
  var proto = Object.create(HTMLElement.prototype);

  proto.createdCallback = function () {
    // createShadowRoot() attaches a (v0) shadow root to the element
    var root = this.createShadowRoot();
    var template = document.querySelector('#home-view');
    var clone = document.importNode(template.content, true);
    root.appendChild(clone);
  };

  document.registerElement('home-view', {prototype: proto});
</script>
```

This is drastically different from how we build components now. In addition, libraries and frameworks are already being built with this in mind. Angular 2 is using annotations provided by Traceur, Google's ES6 + ES7 to ES5 transpiler, to provide syntactic sugar for creating one-way bindings to the DOM and to DOM events. React and Ember also have plans to integrate Web Components into their workflows, and Aurelia is already structured to take advantage of them when they land. What can we do to future-proof ourselves for when these technologies drop?

Solution

For starters, it is important to realize that creating HTML and CSS is relatively cheap compared to managing a complex JavaScript codebase built on top of a framework or library. Frontend web development is seeing architecture pains that have already been solved in other domains, except that it has the additional challenge of integrating UI into that structure. This suggests that the solution is to create a frontend service-oriented architecture (SOA) where most of the heavy logic is offloaded to pure JavaScript, with only utility library additions (such as Underscore/Lodash). This would allow us to choose view layers with relative ease, and to move fast in case a particular view library or framework turns out not to meet requirements.
It also prevents the endemic problem of having to rewrite whole codebases because a library or framework has to be swapped out. For example, consider this sample Angular controller (a similarly contrived example can be created using other pieces of tech as well):

```javascript
angular.module('DemoApp')
  .controller('DemoCtrl', function ($scope, $http) {
    $scope.getItems = function () {
      $http.get('/items/')
        .then(function (response) {
          $scope.items = response.data.items;
          $scope.$emit('items:received', $scope.items);
        });
    };
  });
```

This sample controller has a method, getItems, that fetches items, updates the model, and then emits the information so that parent views have access to that change. This is ugly because it hardcodes the application's structural hierarchy and mixes it with server query logic, which is a separate concern. In addition, it mixes the usage of Angular's internals into the application code, tying pure abstract logic heavily to the framework's internals. It is not at all uncommon to see developers make these simple architecture mistakes.

With the proper module system that ES6 brings, this simplifies to (items.js):

```javascript
// items.js
export class Items {
  // Uses the native fetch API to retrieve the items and parse the payload
  static getAll() {
    return fetch('/items')
      .then(function (response) {
        return response.json();
      });
  }
}
```

And demoCtrl.js:

```javascript
import {BaseCtrl} from './baseCtrl.js';
import {Items} from './items';

export class DemoCtrl extends BaseCtrl {
  constructor() {
    super();
  }

  getItems() {
    let self = this;
    return Items.getAll()
      .then(function (items) {
        self.items = items;
        return items;
      });
  }
}
```

And main.js:

```javascript
import {Items} from './items';
import {DemoCtrl} from './DemoCtrl';

angular.module('DemoApp', [])
  .factory('items', function () { return Items; })
  .controller('DemoCtrl', DemoCtrl);
```

If you want to use anything from $scope, you can modify the usage of DemoCtrl straight in the controller definition and just instantiate it inside the function. With promises, which are also available natively in ES6, you can chain on them in the Angular implementation of DemoCtrl.

The kicker about this approach is that it can also be done today in ES5, and it is not limited to Angular; it applies equally well to any other library or framework, such as Backbone, Ember, and React! It also allows you to churn out very testable code.

I recommend this as a best practice for architecting complex frontend web apps. The only caveat is when other aspects of engineering prevent it from being a possibility, such as the business constraints of time and the people resources available. This approach allows us to tame the beast of maintaining and scaling frontend web apps while still being able to adapt quickly to the constantly changing landscape.
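The testability claim is easy to demonstrate: because Items is plain JavaScript, a unit test needs no Angular harness at all. Here is a minimal sketch, assuming a Jasmine-style browser test runner, that stubs the global fetch so no server is involved:

```javascript
import {Items} from './items';

describe('Items', function () {
  it('resolves with the parsed JSON payload', function (done) {
    // Stub the global fetch that Items.getAll() calls internally.
    window.fetch = function () {
      return Promise.resolve({
        json: function () { return {items: [1, 2, 3]}; }
      });
    };

    Items.getAll().then(function (data) {
      expect(data.items).toEqual([1, 2, 3]);
      done();
    });
  });
});
```

Nothing in this test knows about Angular, which is exactly the point of the SOA split: the view layer can churn while the logic and its tests stay put.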
About this author

Wesley Cho is a senior frontend engineer at Jiff (http://www.jiff.com/). He has contributed features and bug fixes to numerous libraries in the Angular ecosystem, including AngularJS, Ionic, UI Bootstrap, and UI Router, reported many issues, and authored several libraries of his own.


AngularJS Web Application Development Cookbook

Packt
08 May 2015
2 min read
Architect performant applications and implement best practices in AngularJS. Packed with easy-to-follow recipes, this practical guide will show you how to unleash the full might of the AngularJS framework. Skip straight to practical solutions and quick, functional answers to your problems without hand-holding or slogging through the basics.

(For more resources related to this topic, see here.)

Some highlights include:

- Architecting recursive directives
- Extensively customizing your search filter
- Custom routing attributes
- Animating ngRepeat
- Animating ngInclude, ngView, and ngIf
- Animating ngSwitch
- Animating ngClass and class attributes
- Animating ngShow and ngHide

The goal of this text is to have you walk away from reading about an AngularJS concept armed with a solid understanding of how it works, insight into the best ways to wield it in real-world applications, and annotated code examples to get you started.

Why you should buy this book

It is a collection of recipes demonstrating optimal organization, scalable architecture, and best practices for use in small and large-scale production applications. Each recipe contains complete, functioning examples and detailed explanations of how and why they are organized and built that way, as well as alternative design choices for different situations.

The author of this book is a full stack developer at DoorDash (YC S13), where he joined as the first engineer. He led their adoption of AngularJS, and he also focuses on the infrastructural, predictive, and data projects within the company. Matt has a degree in Computer Engineering from the University of Illinois at Urbana-Champaign. He is the author of the video series Learning AngularJS, available through O'Reilly Media. Previously, he worked as an engineer at several educational technology start-ups.

Almost every example in this book has been added to JSFiddle, with the links provided in the book. This allows you to merely visit a URL in order to test and modify the code with no setup of any kind, on any major browser and on any major operating system.

Resources for Article:

Further resources on this subject: Working with Live Data and AngularJS [article], Angular Zen [article], AngularJS Project [article]


NodeJS: Building a Maintainable Codebase

Benjamin Reed
06 May 2015
8 min read
NodeJS has become the most anticipated web development technology since Ruby on Rails. This is not an introduction to Node; first, you must realize that NodeJS is not a direct competitor to Rails or Django. Instead, Node is a collection of libraries that allow JavaScript to run on the V8 runtime. Node powers many tools, and some of those tools have nothing to do with scaling web applications; for instance, GitHub's Atom editor is built on top of Node. It is Node's web application frameworks, like Express, that are the competitors. This article applies to all environments using Node.

Second, Node is designed around the asynchronous ideology, but not all of the operations in Node are asynchronous. Many libraries offer synchronous and asynchronous options, and a Node developer must decipher the best operation for his or her needs.

Third, you should have a solid understanding of the concept of a callback in Node.

Over the course of two weeks, a team attempted to refactor a Rails app into an Express application. We loved the concepts behind Node, and we truly believed that all we needed was a barebones framework. We transferred our controller logic over to Express routes in a weekend. As a member of a beginning team, I will analyze some of the pitfalls that we came across; hopefully, this will help you identify strategies for tackling Node with your team.

First, attempt to structure callbacks and avoid anonymous functions. As we added more and more logic, we added more and more callbacks. Everything was beautifully asynchronous, and our code would run successfully. However, we soon found ourselves debugging an anonymous function nested inside other anonymous functions; in other words, the codebase was incredibly difficult to follow. Anyone starting out with Node could potentially produce this novice "spaghetti code." Here's a simple example of nested callbacks:

```javascript
router.put('/:id', function (req, res) {
  console.log("attempt to update bathroom");

  models.User.find({ where: {id: req.param('id')} })
    .success(function (user) {
      var raw_cell = req.param('cell') ? req.param('cell') : user.cell;
      var raw_email = req.param('email') ? req.param('email') : user.email;
      var raw_username = req.param('username') ? req.param('username') : user.username;
      var raw_digest = req.param('digest') ? req.param('digest') : user.digest;

      user.cell = raw_cell;
      user.email = raw_email;
      user.username = raw_username;
      user.digest = raw_digest;
      user.updated_on = new Date();

      user.save()
        .success(function () {
          res.json(user);
        })
        .error(function () {
          res.json({"status": "error"});
        });
    })
    .error(function () {
      res.json({"status": "error"});
    });
});
```

Notice that there are many success and error callbacks. Locating a specific callback is not difficult if the whitespace is perfect or the developer can count closing brackets back up to the destination. However, this is pretty nasty for any newcomer, and the illegibility will only increase as the application becomes more complex. A developer may get this response:

```json
{"status": "error"}
```

Where did this response come from? Did the ORM fail to update the object? Did it fail to find the object in the first place? A developer could add descriptions to the JSON in the chained error callbacks, but there has to be a better way. Let's extract some of the callbacks into separate methods:

```javascript
router.put('/:id', function (req, res) {
  var id = req.param('id');
  var query = { where: {id: id} };

  // search for the user
  models.User.find(query)
    .success(function (user) {
      // parse request parameters
      var raw_cell = req.param('cell') ? req.param('cell') : user.cell;
      var raw_email = req.param('email') ? req.param('email') : user.email;
      var raw_username = req.param('username') ? req.param('username') : user.username;

      // set user attributes
      user.cell = raw_cell;
      user.email = raw_email;
      user.username = raw_username;
      user.updated_on = new Date();

      // attempt to save the user
      user.save()
        .success(SuccessHandler.userSaved(res, user))
        .error(ErrorHandler.userNotSaved(res, id));
    })
    .error(ErrorHandler.userNotFound(res, id));
});

// Each handler returns the actual callback, so that a call like
// ErrorHandler.userNotFound(res, id) hands a function to error()
// instead of firing immediately.
var ErrorHandler = {
  userNotFound: function (res, user_id) {
    return function () {
      res.json({
        "status": "error",
        "description": "The user with the specified id could not be found.",
        "user_id": user_id
      });
    };
  },
  userNotSaved: function (res, user_id) {
    return function () {
      res.json({
        "status": "error",
        "description": "The update to the user with the specified id could not be completed.",
        "user_id": user_id
      });
    };
  }
};

var SuccessHandler = {
  userSaved: function (res, user) {
    return function () {
      res.json(user);
    };
  }
};
```

This cleaned up our minimal sample considerably: the route now reads top to bottom, and the handlers are independent and reusable. However, our code is still cluttered by chains of success and error callbacks. One could make these global mutable variables, or perhaps we can consider another approach. Futures, also known as promises, are becoming more prominent; Twitter has adopted them in Scala. They are definitely something to consider.
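To make that concrete, here is a sketch of the same route flattened with promises. It assumes an ORM whose find() and save() return then-able promises (as newer versions of Sequelize do) instead of the success()/error() style used above:

```javascript
router.put('/:id', function (req, res) {
  var id = req.param('id');

  models.User.find({ where: {id: id} })
    .then(function (user) {
      user.cell = req.param('cell') || user.cell;
      user.email = req.param('email') || user.email;
      user.username = req.param('username') || user.username;
      user.updated_on = new Date();
      return user.save();
    })
    .then(function (user) {
      res.json(user);
    })
    .catch(function (err) {
      // A single catch collects failures from anywhere in the chain.
      res.json({"status": "error", "description": String(err), "user_id": id});
    });
});
```

The nesting disappears because each step returns a promise to the next, and error handling collapses into one place.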
Next, do what makes your team comfortable and productive, but at the same time, do not compromise the integrity of the project. There are numerous posts that encourage certain styles over others, and there are also extensive posts on the subject of CoffeeScript. If you aren't aware, CoffeeScript is a language with some added syntactic flavor that compiles to JavaScript. Our team was primarily Ruby developers, and it definitely appealed to us. When we migrated some of the project over to CoffeeScript, we found that our code was a lot shorter and appeared more legible. GitHub uses CoffeeScript for the Atom text editor to this day, and the Rails community has openly embraced it. The majority of Node module documentation uses JavaScript, so CoffeeScript developers will have to become acquainted with translation. There are some problems with CoffeeScript being ES6-ready, and there are some modules that are clearly not meant to be used from CoffeeScript. Still, CoffeeScript is an open source project with what appears to be a good backbone and a stable community. If your developers are more comfortable with it, use it.

When it comes to open source projects, everyone tends to trust them. In their purest form, open source projects are absolutely beautiful: they make the lives of all developers better, and nobody has to reimplement the wheel unless they choose to. Obviously, both Node and CoffeeScript are open source. However, the community is very new, and it is dangerous to assume that any package you find on NPM is stable. For us, the problem occurred when we searched for an ORM. We truly missed ActiveRecord, and we assumed that other projects would work similarly. We tried several solutions, and none of them interacted the way we wanted. Besides expressing our entire schema in a JavaScript format, we found relations to be a bit of a hack. Settling on one, we ran our server, and our database was cleared out. That's fine in development, but we struggled to find a way to take it to production. We needed more documentation. Also, the module was not designed with CoffeeScript in mind; we practically needed to revert to JavaScript.
In contrast, the Node community has openly embraced some NoSQL databases, such as MongoDB; they are definitely worth considering. Either way, make sure that your team's dependencies are very well documented: there should be written documentation for each exposed object, function, and so on.

To sum everything up, this article comes down to two fundamental lessons from any computer science class: write modular code and document everything. Do your research on Node and find a style that is legible for your team and any newcomers. A NodeJS project can only be maintained if the developers using the framework recognize the importance of the project in the future. If your code is messy now, it will only become messier. If you cannot find necessary information in a module's documentation, you will probably miss other information when there is a problem in production. Don't take shortcuts: a Node application can only be as good as its developers and dependencies.

About the Author

Benjamin Reed began Computer Science classes at a nearby university in Nashville during his sophomore year of high school. Since then, he has become an advocate for open source and is now pursuing degrees in Computer Science and Mathematics full time. The Ruby community has intrigued him, and he openly expresses support for the Rails framework. When asked, he says that studying Rails has led him to some of the best practices and, ultimately, has made him a better programmer. iOS development is one of his hobbies, and he enjoys scouting out new projects on GitHub. On GitHub, he's appropriately named @codeblooded; on Twitter, he's @benreedDev.


Using Firebase: Learn how and why to use Firebase

Packt
04 May 2015
8 min read
In this article by Manoj Waikar, author of the book Data-oriented Development with AngularJS, we will learn a brief description of various types of persistence mechanisms, local versus hosted databases, what Firebase is, why to use it, and the different use cases where Firebase can be useful.

(For more resources related to this topic, see here.)

We can write web applications by using the frameworks of our choice, be it server-side MVC frameworks, client-side MVC frameworks, or some combination of these. We can also use a persistence store (a database) of our choice, be it an RDBMS or a more modern NoSQL store. However, making our applications real time (meaning that if you are viewing a page and data related to that page gets updated, then the page should be updated, or at least you should get a notification to refresh the page) is not a trivial task, and we have to start thinking about push notifications and whatnot. This is not the case with Firebase.

Persistence

One of the very early decisions a developer or a team has to make when building any production-quality application is the choice of a persistent storage mechanism. Until a few years ago, this choice, more often than not, boiled down to a relational database such as Oracle, SQL Server, or PostgreSQL. However, the rise of NoSQL solutions, such as the document-oriented databases MongoDB (http://www.mongodb.org/) and CouchDB (http://couchdb.apache.org/), the key-value stores Redis (http://redis.io/) and Riak (http://basho.com/riak/), and the graph database Neo4j (http://www.neo4j.org/), has widened the choice for us. Please check the Wikipedia page on NoSQL (http://en.wikipedia.org/wiki/NoSQL) solutions for a detailed list of various NoSQL solutions, including their classification and performance characteristics.

There is one more buzzword that everyone must have heard of by now: Cloud, the short form for cloud computing. Cloud computing briefly means that shared resources (or software) are provided to consumers on a paid/free basis over a network (typically, the Internet). So, we now have the luxury of choosing our preferred RDBMS or NoSQL database as a hosted solution. Consequently, we have one more choice to make: whether to install the database locally (on our own machine or inside the corporate network) or use a hosted solution (in the cloud). As with everything else, there are pros and cons to each approach. The pros of a local database are fast access and a one-time buying cost (if it's not an open source database); the cons include the initial setup time, and if you have to evaluate another database, you'll have to install that one as well. The pros of a hosted solution are ease of use and minimal initial setup time; the cons are the need for a reliable Internet connection, cost (again, if it's not a free option), and so on. Considering these pros and cons, it's a safe bet to use a hosted solution while you are still evaluating different databases, and to decide between a local or a hosted solution only when you've finally zeroed in on your database of choice.

What is Firebase?

So, where does Firebase fit into all of this? Firebase is a NoSQL database that stores data as simple JSON documents. We can, therefore, compare it to other document-oriented databases such as CouchDB (which also stores data as JSON) or MongoDB (which stores data in the BSON, which stands for binary JSON, format).

Although Firebase is a database with a RESTful API, it's also a real-time database, which means that the data is synchronized between different clients and with the backend server almost instantaneously. This implies that if the underlying data is changed by one of the clients, it gets streamed in real time to every connected client; hence, all the other clients automatically get updated with the newest set of data (without anyone having to refresh these clients manually).
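As a quick illustration of what that means in code, here is a minimal sketch using the 2015-era Firebase JavaScript SDK (the app URL below is made up):

```javascript
// Every client connected to the same location sees writes almost instantly.
var ref = new Firebase('https://my-app.firebaseio.com/messages');

// One client pushes a new message...
ref.push({name: 'alice', text: 'hello'});

// ...and every client listening on that location is notified in real time.
ref.on('child_added', function (snapshot) {
  var msg = snapshot.val();
  console.log(msg.name + ': ' + msg.text);
});
```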
So, to summarize, Firebase is an API and a cloud service that gives us a real-time and scalable (NoSQL) backend. It has libraries for most server-side languages/frameworks, such as Node.js, Java, Python, PHP, Ruby, and Clojure: official libraries for Node.js and Java, and unofficial third-party libraries for Python, Ruby, and PHP. It also has libraries for most of the leading client-side frameworks, such as AngularJS, Backbone, Ember, and React, and for mobile platforms such as iOS and Android.

Firebase – benefits and why to use it

Firebase offers us the following benefits:

- It is a cloud service (a hosted solution), so there isn't any setup involved.
- Data is stored as native JSON, so what you store is what you see (on the frontend, fetched through a REST API): WYSIWYS.
- Data is safe, because Firebase requires 2048-bit SSL encryption for all data transfers.
- Data is replicated and backed up to multiple secure locations, so there is minimal chance of data loss.
- When data changes, apps update instantly across devices.
- Our apps can work offline; as soon as we get connectivity, the data is synchronized instantly.
- Firebase gives us lightning-fast data synchronization, so, combined with AngularJS, it gives us three-way data binding between the HTML, the JavaScript, and our backend (data). With two-way data binding, whenever our (JavaScript) model changes, the view (HTML) updates itself, and vice versa; with three-way data binding, even when the data in our database changes, our JavaScript model gets updated, and consequently, the view gets updated as well.
- Last but not least, it has libraries for the most popular server-side languages/frameworks (such as Node.js, Ruby, Java, and Python) as well as the popular client-side frameworks (such as Backbone, Ember, and React), including AngularJS. The Firebase binding for AngularJS is called AngularFire (https://www.firebase.com/docs/web/libraries/angular/).
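A minimal sketch of that three-way binding, assuming AngularFire 1.x and the same made-up app URL as before:

```javascript
angular.module('demoApp', ['firebase'])
  .controller('ProfileCtrl', function ($scope, $firebaseObject) {
    var ref = new Firebase('https://my-app.firebaseio.com/users/alice');

    // $bindTo wires up the three-way binding: edits in the view update
    // $scope.profile and are saved back to Firebase, while remote changes
    // flow into the scope (and therefore into the view).
    $firebaseObject(ref).$bindTo($scope, 'profile');
  });
```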
Firebase use cases

Now that you've read how Firebase makes it easy to write applications that update in real time, you might still be wondering what kinds of applications are best suited for use with Firebase. Because, as often happens in the enterprise world, either you are not at liberty to choose all the components of your stack, or you might have an existing application and you just have to add some new features to it. So, let's study the three main scenarios where Firebase can be a good fit for your needs.

Apps with Firebase as the only backend

This scenario is feasible if:

- You are writing a brand-new application or rewriting an existing one from scratch
- You don't have to integrate with legacy systems or other third-party services
- Your app doesn't need to do heavy data processing, and it doesn't have complex user authentication requirements

In such scenarios, Firebase is the only backend store you'll need, and all dynamic content and user data can be stored in and retrieved from it.

Existing apps with some features powered by Firebase

This scenario is feasible if you already have a site and want to add some real-time capabilities to it without touching other parts of the system. For example, you have a working website and just want to add chat capabilities, or maybe you want to add a comment feed that updates in real time, or you have to show some real-time notifications to your users. In this case, the clients can connect to your existing server (for existing features) and to Firebase (for the newly added real-time capabilities). So, you can use Firebase together with your existing server.

Both client and server code powered by Firebase

In some use cases, there might be computationally intensive code that can't be run on the client. In situations like these, Firebase can act as an intermediary between the server and your clients: the server talks to the clients by manipulating data in Firebase. The server can connect to Firebase using either the Node.js library (for Node.js-based server-side applications) or the REST API (for other server-side languages). Similarly, the server can listen to the data changes made by the clients and respond appropriately. For example, the client can place tasks in a queue that the server will process later. One or more servers can then pick these tasks from the queue, do the required processing (as per their availability), and place the results back in Firebase so that the clients can read them.
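Here is a hedged sketch of that queue pattern with the same-era SDK; the paths and the doHeavyProcessing helper are made up for illustration:

```javascript
var queueRef = new Firebase('https://my-app.firebaseio.com/queue');

// Client: push a task, then wait for its result to appear.
var taskRef = queueRef.child('tasks').push({type: 'thumbnail', src: 'photo.png'});
queueRef.child('results/' + taskRef.key()).on('value', function (snapshot) {
  if (snapshot.val() !== null) {
    console.log('task finished:', snapshot.val());
  }
});

// Server (Node.js): process tasks as they arrive and write results back.
queueRef.child('tasks').on('child_added', function (snapshot) {
  doHeavyProcessing(snapshot.val(), function (result) {
    queueRef.child('results/' + snapshot.key()).set(result);
    snapshot.ref().remove(); // the task has been consumed
  });
});
```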
Firebase is the API for your product

You might not have realized it yet (but you will once you see some examples) that as soon as we start saving data in Firebase, a REST API keeps building side by side for free, because of the way data is stored as a JSON tree and is addressable at different URLs. Think for a moment: if you had a relational database as your persistence store, you would need to write REST APIs specially (which are obviously preferable to old RPC-style web services), using the framework available for your programming language, to let external teams or customers get access to the data. Then, if you wanted to support different platforms, you would need to provide libraries for all those platforms, whereas Firebase already provides real-time SDKs for JavaScript, Objective-C, and Java. So, Firebase is not just a real-time persistence store; it doubles up as an API layer too.

Summary

In this article, we learned what Firebase is, why to use it, and the different use cases where Firebase can be useful.

Resources for Article:

Further resources on this subject: AngularJS Performance [article], An introduction to testing AngularJS directives [article], Our App and Tool Stack [article]


Less with External Applications and Frameworks

Packt
30 Apr 2015
11 min read
In this article by Bass Jobsen, author of the book Less Web Development Essentials - Second Edition, we will cover the following topics:

- WordPress and Less
- Using Less with the Play framework, AngularJS, Meteor, and Rails

(For more resources related to this topic, see here.)

WordPress and Less

Nowadays, WordPress is not only used for weblogs; it can also be used as a content management system for building a website. The WordPress system, written in PHP, has been split into the core system, plugins, and themes. Plugins add additional functionality to the system, and themes handle the look and feel of a website built with WordPress. They work independently of each other: a theme does not depend on plugins. WordPress themes define the global CSS for a website, but every plugin can also add its own CSS code. WordPress theme developers can use Less to compile the CSS code of the themes and the plugins.

Using the Sage theme by Roots with Less

Sage is a WordPress starter theme that you can use to build your own theme. The theme is based on HTML5 Boilerplate (http://html5boilerplate.com/) and Bootstrap. Visit the Sage theme website at https://roots.io/sage/. Sage can also be completely built using Gulp; more information about how to use Gulp and Bower for WordPress development can be found at https://roots.io/sage/docs/theme-development/. After downloading Sage, the Less files can be found in assets/styles/. These files include Bootstrap's Less files. The assets/styles/main.less file imports the main Bootstrap Less file, bootstrap.less. Now, you can edit main.less to customize your theme; you will have to rebuild the Sage theme after making changes. You can use all of Bootstrap's variables to customize your build.

JBST with a built-in Less compiler

JBST is also a WordPress starter theme. JBST is intended to be used with so-called child themes; more information about WordPress child themes can be found at https://codex.wordpress.org/Child_Themes. After installing JBST, you will find a Less compiler under Appearance in your Dashboard pane, as shown in the following screenshot:

JBST's built-in Less compiler in the WordPress Dashboard

The built-in Less compiler can be used to fully customize your website using Less. Bootstrap also forms the skeleton of JBST, and the default settings are gathered from the a11y bootstrap theme mentioned earlier. JBST's Less compiler can be used in the following different ways.

First, the compiler accepts any custom-written Less (and CSS) code. For instance, to change the color of the h1 elements, you should simply edit and recompile the code as follows:

```less
h1 { color: red; }
```

Secondly, you can edit Bootstrap's variables and (re)use Bootstrap's mixins. To set the background color of the navbar component and add a custom button, you can use the following code block in the Less compiler:

```less
@navbar-default-bg: blue;

.btn-colored {
  .button-variant(blue; red; green);
}
```

Thirdly, you can set JBST's built-in Less variables, as follows:

```less
@footer_bg_color: black;
```

Lastly, JBST has its own set of mixins. To set a custom font, you can edit the code as shown here:

```less
.include-custom-font(@family: arial, @font-path, @path: @custom-font-dir, @weight: normal, @style: normal);
```

In the preceding code, the parameters set the font name (@family) and the path to the font files (@path/@font-path); the @weight and @style parameters set the font's properties.
For more information, visit https://github.com/bassjobsen/Boilerplate-JBST-Child-Theme. More Less code blocks can also be added to a special file (wpless2css/wpless2css.less or less/custom.less); this file gives you the option to add, for example, a library of prebuilt mixins. After adding the library in this file, the mixins can also be used with the built-in compiler.

The Semantic UI WordPress theme

The Semantic UI, as discussed earlier, offers its own WordPress plugin. The plugin can be downloaded from https://github.com/ProjectCleverWeb/Semantic-UI-WordPress. After installing and activating this theme, you can use your website directly with the Semantic UI. With the default settings, your website will look like the following screenshot:

Website built with the Semantic UI WordPress theme

WordPress plugins and Less

As discussed earlier, WordPress plugins have their own CSS. This CSS will be added to the page like a normal style sheet, as shown here:

```html
<link rel='stylesheet' id='plugin-name'
  href='//domain/wp-content/plugin-name/plugin-name.css?ver=2.1.2'
  type='text/css' media='all' />
```

Unless a plugin provides the Less files for its CSS code, it will not be easy to manage its styles with Less.

The WP Less to CSS plugin

The WP Less to CSS plugin, which can be found at http://wordpress.org/plugins/wp-less-to-css/, offers the possibility of styling your WordPress website with Less. As seen earlier, you can enter the Less code along with the built-in compiler of JBST. This code will then be compiled into the website's CSS. This plugin compiles Less with the PHP Less compiler, Less.php.

Using Less with the Play framework

The Play framework helps you build lightweight and scalable web applications by using Java or Scala. It will be interesting to learn how to integrate Less with the workflow of the Play framework. You can install the Play framework from https://www.playframework.com/. To learn more about the Play framework, you can also read Learning Play! Framework 2, Andy Petrella, Packt Publishing; visit https://www.packtpub.com/web-development/learning-play-framework-2. To run the Play framework, you need JDK 6 or later. The easiest way to install the Play framework is by using the Typesafe activator tool. After installing the activator tool, you can run the following command:

```
> activator new my-first-app play-scala
```

The preceding command will install a new app in the my-first-app directory. Using the play-java option instead of the play-scala option in the preceding command will lead to the installation of a Java-based app. Later on, you can add Scala code to a Java app or Java code to a Scala app. After installing a new app with the activator command, you can run it by using the following commands:

```
cd my-first-app
activator run
```

Now, you can find your app at http://localhost:9000. To enable Less compilation, you should simply add the sbt-less plugin to your plugins.sbt file as follows:

```
addSbtPlugin("com.typesafe.sbt" % "sbt-less" % "1.0.6")
```

After enabling the plugin, you can edit the build.sbt file to configure Less. You should save the Less files in app/assets/stylesheets/. Note that each file in app/assets/stylesheets/ will compile into a separate CSS file.
The CSS files will be saved in public/stylesheets/ and should be referenced in your templates with the HTML code shown here:

```html
<link rel="stylesheet" href="@routes.Assets.at("stylesheets/main.css")">
```

In case you are using a library with more files imported into the main file, you can define filters in the build.sbt file. The filters for these so-called partial source files can look like the following code:

```
includeFilter in (Assets, LessKeys.less) := "*.less"
excludeFilter in (Assets, LessKeys.less) := "_*.less"
```

The preceding filters ensure that files starting with an underscore are not compiled into CSS.

Using Bootstrap with the Play framework

Bootstrap is a CSS framework whose Less code includes many files, so keeping your code up to date by using partials, as described in the preceding section, will not work well. Alternatively, you can use WebJars with Play for this purpose. To enable the Bootstrap WebJar, you should add the code shown here to your build.sbt file:

```
libraryDependencies += "org.webjars" % "bootstrap" % "3.3.2"
```

When using the Bootstrap WebJar, you can import Bootstrap into your project as follows:

```less
@import "lib/bootstrap/less/bootstrap.less";
```

AngularJS and Less

AngularJS is a structural framework for dynamic web apps. It extends the HTML syntax, which enables you to create dynamic web views. Of course, you can use AngularJS with Less. You can read more about AngularJS at https://angularjs.org/. The HTML code shown here gives an example of what repeating HTML elements with AngularJS looks like:

```html
<!doctype html>
<html ng-app>
<head>
  <title>My Angular App</title>
</head>
<body ng-app>
  <ul>
    <li ng-repeat="item in [1,2,3]">{{ item }}</li>
  </ul>
  <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.3.12/angular.min.js"></script>
</body>
</html>
```

This code should make your page look like the following screenshot:

Repeating the HTML elements with AngularJS

The ngBoilerplate system

The ngBoilerplate system is an easy way to start a project with AngularJS. The project comes with a directory structure for your application and a Grunt build process, including a Less task and other useful libraries. To start your project, you should simply run the following commands on your console:

```
> git clone git://github.com/ngbp/ngbp
> cd ngbp
> sudo npm -g install grunt-cli karma bower
> npm install
> bower install
> grunt watch
```

And then, open ///path/to/ngbp/build/index.html in your browser. After installing ngBoilerplate, you can write the Less code in src/less/main.less. By default, only src/less/main.less will be compiled into CSS; other libraries and other code should be imported into this file.

Meteor and Less

Meteor is a complete open-source platform for building web and mobile apps in pure JavaScript. Meteor focuses on fast development. You can publish your apps for free on Meteor's servers. Meteor is available for Linux and OS X, and you can also install it on Windows. Installing Meteor is as simple as running the following command on your console:

```
> curl https://install.meteor.com | /bin/sh
```

You should install the Less package for compiling the CSS code of the app with Less. You can install the Less package by running the command shown here:

```
> meteor add less
```

Note that the Less package compiles every file with the .less extension into CSS; for each file with the .less extension, a separate CSS file is created.
When you use partial Less files that should only be imported (with the @import directive) and not compiled into CSS themselves, you should give these partials the .import.less extension. When using CSS frameworks or libraries with many partials, renaming the files by adding the .import.less extension will hinder you in updating your code; also, running postprocess tasks for the CSS code is not always possible. Many packages for Meteor are available at https://atmospherejs.com/, and some of these packages can help you solve the issue with partials mentioned earlier. To use Bootstrap, you can use the meteor-bootstrap package, which can be found at https://github.com/Nemo64/meteor-bootstrap; it requires the installation of the Less package. Other packages provide you with postprocess tasks, such as autoprefixing your code.

Ruby on Rails and Less

Ruby on Rails, or Rails for short, is a web application development framework written in the Ruby language. Those who want to start developing with Ruby on Rails can read the Getting Started with Rails guide, which can be found at http://guides.rubyonrails.org/getting_started.html. In this section, you can read how to integrate Less into a Ruby on Rails app. After installing the tools and components required for starting with Rails, you can launch a new application by running the following command on your console:

```
> rails new blog
```

Now, you should integrate Less with Rails. You can use less-rails (https://github.com/metaskills/less-rails) to bring Less to Rails. Open the Gemfile, comment out the sass-rails gem, and add the less-rails gem, as shown here:

```ruby
#gem 'sass-rails', '~> 5.0'
gem 'less-rails' # Less
gem 'therubyracer' # Ruby
```

Then, create a controller called welcome with an action called index by running the following command:

```
> bin/rails generate controller welcome index
```

The preceding command will generate app/views/welcome/index.html.erb. Open app/views/welcome/index.html.erb and make sure that it contains the HTML code shown here:

```html
<h1>Welcome#index</h1>
<p>Find me in app/views/welcome/index.html.erb</p>
```

The next step is to create a file, app/assets/stylesheets/welcome.css.less, with the following Less code:

```less
@color: red;
h1 { color: @color; }
```

Now, start a web server with the following command:

```
> bin/rails server
```

Finally, you can visit the application at http://localhost:3000/. The application should look like the example shown here:

The Rails app
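Whichever framework you pick, the heavy lifting is ultimately done by a Less compiler. In Node.js-based setups, you can also invoke less.js directly; the following is a minimal sketch (the input string is just an example):

```javascript
// Compile a Less string programmatically with the npm 'less' package.
var less = require('less');

less.render('@color: red; h1 { color: @color; }', {}, function (err, output) {
  if (err) {
    console.error(err);
    return;
  }
  console.log(output.css); // h1 { color: red; }
});
```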
Summary

In this article, you learned how to use Less with WordPress, Play, Meteor, AngularJS, and Ruby on Rails.

Resources for Article:

Further resources on this subject: Media Queries with Less [article], Bootstrap 3 and other applications [article], Getting Started with Bootstrap [article]


Angular 2.0

Packt
30 Apr 2015
12 min read
Angular 2.0 was officially announced at the ng-conference in October 2014. Angular 2.0 will not be a minor update to the previous version; it is a complete rewrite of the entire framework and will include major changes. In this article by Mohammad Wadood Majid, coauthor of the book Mastering AngularJS for .NET Developers, we will learn about the following topics:

- Why Angular 2.0
- Design and features of Angular 2.0
- AtScript
- Routing solution
- Dependency injection
- Annotations
- Instance scope
- Child injector
- Data binding and templating

(For more resources related to this topic, see here.)

Why Angular 2.0

AngularJS is one of the most popular open source frameworks available for client-side web application development. Over the last few years, AngularJS's adoption and community support have been remarkable. The current AngularJS Version 1.3 is stable and used by many developers; there are over 1600 applications inside Google that use AngularJS 1.2 or 1.3.

In the last few years, the Web has changed significantly. In the past, for example, it was very difficult to build a cross-browser application; today's browsers are more consistent in their DOM implementations, and the Web will continue to change. Angular 2.0 will address the following concerns:

- Mobile: Angular 2.0 will focus on mobile application development.
- Modular: Various modules will be removed from the core of AngularJS, which will result in better performance. Angular 2.0 will give us the ability to pick only the module parts we need.
- Modern: Angular 2.0 will include ECMAScript 6 (ES6). ECMAScript is a scripting language standard developed by Ecma International; it is widely used in client-side scripting languages, such as JavaScript, JScript, and ActionScript, on the Web.
- Performance: AngularJS was developed around five years ago, and it was not originally aimed at developers; it was a tool targeting designers who needed to quickly build persistent HTML forms. Over time, it has been used to build more complex applications, and the Angular 1.x team worked over the years to make changes to the current design, allowing it to stay relevant for modern web applications. However, there are limits to how far the current AngularJS framework can be improved; a number of these limits are related to the performance of the current binding and template infrastructure. In order to fix these problems, a new infrastructure is required.

These days, modern browsers already support some of the features of ES6, but the final implementation, currently in progress, will be available in 2015. With the new features, developers will be able to describe their own views (the template element) and package them for distribution to other developers (HTML imports). When all these new features are available in all browsers in 2015, developers will be able to create as many reusable components as required to solve common problems. However, most frameworks, such as AngularJS 1.x, are not prepared for this: the data binding of the AngularJS 1.x framework works on the assumption of a small number of known HTML elements. In order to take advantage of the new components, an implementation in Angular is required.

Design and features of AngularJS 2.0

The current AngularJS framework design is an amalgamation of the changing Web and the general computing landscape; however, it still needs some changes. The current Angular 1.x framework cannot work with the new web components, as it lends itself to mobile applications and pushes its own module and class API against the standards.
To answer these issues, the AngularJS team is coming up with the AngularJS 2.0 framework: a reimagining of AngularJS 1.x for the modern web browser. The following are the changes in Angular 2.0.

AtScript

AtScript is a language used to develop AngularJS 2.0; it is a superset of ES6. It is handled by the Traceur compiler (Google's ES6 + ES7 to ES5 transpiler) and uses TypeScript's syntax to generate runtime type assertions instead of compile-time checks. However, developers will still be able to use plain JavaScript (ES5) instead of AtScript to write AngularJS 2.0 applications. The following is an example of AtScript code:

```javascript
import {Component} from 'angular';
import {Server} from './server';

@Component({selector: 'test'})
export class MyNewComponent {
  constructor(server: Server) {
    this.server = server;
  }
}
```

In the preceding code, the import and the class come from ES6. The constructor function has a server parameter that specifies a type; in AtScript, this type is used to generate a runtime type assertion, and the reference is stored so that the dependency injection framework can use it. The @Component annotation is a metadata annotation. When we decorate some code with @Component, the compiler generates code that instantiates the annotation and stores it in a known location, so that it can be accessed by the AngularJS 2.0 framework.

Routing solution

In AngularJS 1.x, routing was designed to handle a few simple cases; as the framework grew, more features were added to it. AngularJS 2.0 includes the following basic routing features, but will still be extensible:

- JSON route configuration
- Optional convention over configuration
- Static, parameterized, and splat route patterns
- URL resolver
- Query string support
- Push state or hash change
- Navigation model
- Document title updates
- 404 route handling
- Location service
- History manipulation
- Child router
- Screen activation: canActivate, activate, deactivate

Dependency injection

The main feature of AngularJS 1.x was Dependency Injection (DI). It is very easy to use DI and follow the divide-and-conquer approach to software development: complex problems can be abstracted apart, and applications developed this way can be assembled at runtime through DI. However, there are a few issues with DI in the AngularJS 1.x framework. First, the DI implementation interacted badly with minification: DI depended on parsing parameter names from functions, and whenever those names were changed, they no longer matched the services, controllers, and other components. Second, it lacked features that are common in advanced server-side DI frameworks, such as those available in .NET and Java; these two missing features are constraints on scope control and child injectors.

Annotations

With the use of AtScript in the AngularJS 2.0 framework, a way to associate metadata with any function was introduced. The metadata format of AtScript is robust in the face of minification and is easy to write by hand with ES5.

The instance scope

In the AngularJS 1.x framework, all instances in the DI container were singletons. The same is the case with AngularJS 2.0 by default; to get different behavior, we need to use services, providers, constants, and so on. The following code can be used to create a new instance each time one is requested from the DI container.
It becomes more useful if you create your own scope identifiers for use in combination with child injectors, as shown:

```javascript
@TransientScope
export class MyClass { … }
```

The child injector

The child injector is a major new feature in AngularJS 2.0. The child injector inherits from its parent, but it has the ability to override services at the child level. Using this new feature, we can mark certain types of objects in the application to be automatically overridden in various scopes. For example, when a new route has child-route capability, each child route creates its own child injector. This allows each route to inherit from parent routes or to override those services during different navigation scenarios.
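The mechanics are easy to picture in plain JavaScript. The following toy injector is purely illustrative (it is not Angular's actual API); it shows how a child injector falls back to its parent while overriding bindings locally:

```javascript
// A toy injector for illustration only; not Angular's real API.
function Injector(parent, bindings) {
  this.parent = parent || null;
  this.bindings = bindings || {};
}

Injector.prototype.get = function (name) {
  if (name in this.bindings) {
    return this.bindings[name];   // defined (or overridden) at this level
  }
  if (this.parent) {
    return this.parent.get(name); // fall back to the parent injector
  }
  throw new Error('No provider for ' + name);
};

var root = new Injector(null, {
  logger: function (msg) { console.log('[app] ' + msg); }
});

// A child route gets its own injector: it inherits everything from the
// root, but can override the logger within its own scope.
var child = new Injector(root, {
  logger: function (msg) { console.log('[route] ' + msg); }
});

root.get('logger')('uses the root binding');
child.get('logger')('uses the child override');
```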
Data binding and templating

Data binding and templates are considered a single unit while developing an application; in other words, data binding and templates go hand in hand when writing an application with the AngularJS framework. When we bind the DOM, the HTML is handed to the template compiler. The compiler goes across the HTML to find directives, binding expressions, event handlers, and so on. All of the data is extracted from the DOM into data structures that can be used to instantiate the template. During this phase, some processing is done on the data; for example, the binding expressions are parsed. Every node that contains instructions is tagged with a class to cache the result of the processing, so that the work does not need to be repeated.

Dynamic loading

Dynamic loading was missing in AngularJS 1.x: it is very hard to add new directives or controllers at runtime. However, dynamic loading has been added to Angular 2.0. Whenever a template is compiled, the compiler is provided not only with a template, but also with a component definition. The component definition contains the metadata of directives, filters, and so on. This ensures that the necessary dependencies are loaded before the template gets processed by the compiler.

Directives

Directives in the AngularJS framework are meant to extend HTML. In AngularJS 1.x, the Directive Definition Object (DDO) is used to create directives; in AngularJS 2.0, directives are made simpler. There are three types of directives in AngularJS 2.0:

- The component directive: This is a combination of a view and a controller used to create custom components. It can be used as an HTML element, as well as with a router that can map routes to components.
- The decorator directive: Use this directive to decorate an HTML element with additional behavior, such as ng-show.
- The template directive: This directive transforms HTML into a reusable template. The directive developer can control how the template is instantiated and inserted into the DOM, as with ng-if and ng-repeat.

The controller in AngularJS 2.0 is not a part of the component; rather, the component contains a view and a controller, where the view is HTML and the controller is JavaScript. In AngularJS 2.0, the developer creates a class with some annotations, as shown in the following code:

```javascript
@Component({
  selector: 'tab-container',
  directives: [NgRepeat]
})
export class TabContainer {
  constructor(panes: Query<Pane>) {
    this.panes = panes;
  }

  select(selectedPane: Pane) { … }
}
```

In the preceding code, the controller of the component is a class. The dependencies are injected automatically into the constructor because child injectors are used; the component can get access to any service up the DOM hierarchy, as well as services local to its element. As can be seen in the preceding code, Query is injected: this is a special collection that is automatically synchronized with the child elements and lets us know when anything is added or removed.

Templates

In the preceding section, we created a tab-container component using AngularJS 2.0. The following code shows how to use this component in the DOM:

```html
<template>
  <div class="border">
    <div class="tabs">
      <div [ng-repeat|pane]="panes" class="tab" (^click)="select(pane)">
        <img [src]="pane.icon"><span>${pane.name}</span>
      </div>
    </div>
    <content></content>
  </div>
</template>
```

As you can see in the image tag, <img [src]="pane.icon"><span>${pane.name}</span>, the src attribute is surrounded with [], which tells us that the attribute has a binding expression. When we see ${}, it means that there is an expression that should be interpolated into the content. These bindings are unidirectional, from the model or controller to the view. In the <div [ng-repeat|pane]="panes" class="tab" (^click)="select(pane)"> template code, it is noticeable that ng-repeat is a template directive; it is written with | and the word pane, where pane is the local variable. (^click) indicates an event handler, where ^ means that the handler is not attached directly to the DOM; rather, we let the event bubble up and it will be handled at the document level.

In the following code example, we will compare the code of the AngularJS 1.x framework and AngularJS 2.0; let's create a hello world example for this demonstration. The following code is used to write a form in the AngularJS 1.x framework:

```javascript
var module = angular.module("example", []);

module.controller("FormExample", function () {
  this.username = "World";
});
```

```html
<div ng-controller="FormExample as ctrl">
  <input ng-model="ctrl.username"> Hello {{ctrl.username}}!
</div>
```

The following code is used to write the same form in the AngularJS 2.0 framework:

```javascript
@Component({
  selector: 'form-example'
})
@Template({
  // we are binding the input element to the control object
  // defined in the component's class
  inline: '<input [control]="username">Hello {{username.value}}!',
  directives: [forms]
})
class FormExample {
  constructor() {
    this.username = new Control('World');
  }
}
```

In the preceding code example, TypeScript 1.5 is used, which supports the metadata annotations; however, the preceding code can also be written in ES5/ES6 JavaScript. More information on annotations can be found in the annotation guide at https://docs.google.com/document/d/1uhs-a41dp2z0NLs-QiXYY-rqLGhgjmTf4iwBad2myzY/edit#heading=h.qbaubqkoiqds. Here are some of the design considerations behind this approach:

- Form behavior cannot be unit tested without compiling the associated template, because certain parts of the application behavior are contained in the template.
- We want to enable dynamically generated, data-driven forms in AngularJS 2.0; although this is possible in AngularJS 1.x, it is not easy. The difficulty in reasoning about your template statically arises because the ng-model directive was built using a generic two-way data binding.
- An atomic form that can easily be validated or reverted to its original state is required, which is missing from AngularJS 1.x.
Although AngularJS 2.0 uses an extra level of indirection, it grants major benefits: the Control object decouples form behavior from the template, so that you can test it in isolation, and tests are simpler to write and faster to execute.

Summary

In this article, we introduced the Angular 2.0 framework. It is not a minor update to the previous version, but a complete rewrite of the entire framework, and it will include breaking changes. We also talked about several of the AngularJS 2.0 changes. AngularJS 2.0 will hopefully be released by the end of 2015.

Resources for Article:

Further resources on this subject: Setting Up The Rig [article], AngularJS Project [article], Working with Live Data and AngularJS [article]

How to integrate social media with your WordPress website

Packt
29 Apr 2015
6 min read
In this article by Karol Krol, the author of WordPress 4.x Complete, we will look at how we can integrate our website with social media. We will list some of the ways in which you can make your site social media friendly, and also see why you'd want to do that in the first place. Let's start with the why. In this day and age, social media is one of the main drivers of traffic for many sites. Even if you just want to share your content with friends and family, or you have some serious business plans regarding your site, you need to have at least some level of social media integration. Even if you install just simple social media share buttons, you will effectively encourage your visitors to pass on your content to their followers, thus expanding your reach and making your content more popular.

(For more resources related to this topic, see here.)

Making your blog social media friendly

There are a handful of ways to make your site social media friendly. The most common approaches are as follows:

- Social media share buttons, which allow your visitors to share your content with their friends and followers
- Social media API integrations, which make your content look better on social media (design-wise)
- Automatic content distribution to social media
- Social media metrics tracking

Let's discuss these one by one.

Setting up social media share buttons

There are hundreds of social media plugins available out there that allow you to display a basic set of social media buttons on your site. The one I advise you to use is called Social Share Starter (http://bit.ly/sss-plugin). Its main advantage is that it's optimized to work on new and low-traffic sites, and doesn't show any negative social proof when displaying the buttons and their share numbers.

Setting up social media API integrations

The next step worth taking to make your content appear more attractive on social media is to integrate it with some social media APIs, particularly Twitter's. What exactly this API is and how it works isn't very relevant for the WordPress discussion we're having here. So instead, let's just focus on what the outcome of integrating your site with this API is. Here's what a standard tweet mentioning a website usually looks like (please notice the overall design, not the text contents):

Here's a different tweet, mentioning an article from a site that has Twitter's (Twitter Cards) API enabled:

This looks much better. Luckily, having this level of Twitter integration is quite easy. All you need is a plugin called JM Twitter Cards (available at https://wordpress.org/plugins/jm-twitter-cards/). After installing and activating it, you will be guided through the process of setting everything up and approving your site with Twitter (a mandatory step).

Setting up automatic content distribution to social media

The idea behind automatic social media distribution of your content is that you don't have to remember to do it manually whenever you publish a new post. Instead of copying and pasting the URL address of your new post by hand to each individual social media platform, you can have this done automatically. This can be done in many ways, but let's discuss the two most usable ones, the Jetpack and Revive Old Post plugins.

The Jetpack plugin

The Jetpack plugin is available at https://wordpress.org/plugins/jetpack/. One of Jetpack's modules is called Publicize. You can activate it by navigating to the Jetpack | Settings section of the wp-admin.
After doing so, you will be able to go to Settings | Sharing and integrate your site with one of the six available social media platforms:

After going through the process of authorizing the plugin with each service, your site will be fully capable of posting each of your new posts to social media automatically.

The Revive Old Post plugin

The Revive Old Post plugin is available at https://revive.social/plugins/revive-old-post. While the Jetpack plugin takes the newest posts on your site and distributes them to your various social media accounts, the Revive Old Post plugin does the same with your archived posts, ultimately giving them a new life; hence the name. After downloading and activating this plugin, go to its section in the wp-admin (Revive Old Post). Then, switch to the Accounts tab. There, you can enable the plugin to work with your social media accounts by clicking on the authorization buttons:

Then, go to the General settings tab and handle the time intervals and other details of how you want the plugin to work with your social media accounts. When you're done, just click on the SAVE button. At this point, the plugin will start operating automatically and distribute random archived posts to your social media accounts. Note that it's probably a good idea not to share things too often if you don't want to anger your followers and make them unfollow you. For that reason, I wouldn't advise posting more than once a day.

Setting up social media metrics tracking

The final element in our social media integration puzzle is setting up some kind of tracking mechanism that tells us how popular our content is on social media (in terms of shares). Granted, you can do this manually by going to each of your posts and checking their share numbers individually (provided you have the Social Share Starter plugin installed). However, there's a quicker method, and it involves another plugin. This one is called Social Metrics Tracker and you can get it at https://wordpress.org/plugins/social-metrics-tracker/. In short, this plugin collects social share data from a number of platforms and then displays them to you in a single readable dashboard view. After you install and activate the plugin, you'll need to give it a couple of minutes to crawl through your social media accounts and get the data. Soon after that, you will be able to visit the plugin's dashboard by going to the Social Metrics section in the wp-admin:

For some web hosts and setups, this plugin might end up consuming too much of the server's resources. If this happens, consider activating it only occasionally to check your results and then deactivating it again. Doing this even once a week will still give you a great overview of how well your content is performing on social media. This concludes our short guide on how to integrate your WordPress site with social media. I'll admit that we're just scratching the surface here and that there's a lot more that can be done. There are new social media plugins being released literally every week. That being said, the methods described here are more than enough to make your WordPress site social media friendly and enable you to share your content effectively with your friends, family, and audience.

Summary

Here, we talked about social media integration, tools, and plugins that can make your life a lot easier as an online content publisher.
Resources for Article: Further resources on this subject: FAQs on WordPress 3 [article] Creating Blog Content in WordPress [article] Customizing WordPress Settings for SEO [article]

Recording Your First Test
Packt
24 Apr 2015
17 min read
JMeter comes with a built-in test script recorder, also referred to as a proxy server (http://en.wikipedia.org/wiki/Proxy_server), to aid you in recording test plans. The test script recorder, once configured, watches your actions as you perform operations on a website, creates test sample objects for them, and eventually stores them in your test plan, which is a JMX file. In addition, JMeter gives you the option to create test plans manually, but this is mostly impractical for recording nontrivial testing scenarios. You will save a whole lot of time using the proxy recorder, as you will see in a bit. So without further ado, in this article by Bayo Erinle, author of Performance Testing with JMeter - Second Edition, let's record our first test! For this, we will record the browsing of JMeter's own official website as a user normally would. For the proxy server to be able to watch your actions, it will need to be configured. This entails two steps:

1. Setting up the HTTP(S) Test Script Recorder within JMeter.
2. Setting the browser to use the proxy.

(For more resources related to this topic, see here.)

Configuring the JMeter HTTP(S) Test Script Recorder

The first step is to configure the proxy server in JMeter. To do this, we perform the following steps:

1. Start JMeter.
2. Add a thread group, as follows: Right-click on Test Plan and navigate to Add | Threads (User) | Thread Group.
3. Add the HTTP(S) Test Script Recorder element, as follows: Right-click on WorkBench and navigate to Add | Non-Test Elements | HTTP(S) Test Script Recorder.
4. Change the port to 7000 (1) (under Global Settings). You can use a different port if you choose to. What is important is to choose a port that is not currently used by an existing process on the machine. The default is 8080.
5. Under the Test plan content section, choose the option Test Plan > Thread Group (2) from the Target Controller drop-down. This allows the recorded actions to be targeted to the thread group we created in step 2.
6. Under the Test plan content section, choose the option Put each group in a new transaction controller (3) from the Grouping drop-down. This allows you to group a series of requests constituting a page load. We will see more on this topic later.
7. Click on Add suggested Excludes (under URL Patterns to Exclude). This instructs the proxy server to bypass recording requests for a series of elements that are not relevant to test execution. These include JavaScript files, stylesheets, and images. Thankfully, JMeter provides a handy button that excludes the commonly excluded elements.
8. Click on the Start button at the bottom of the HTTP(S) Test Script Recorder component.
9. Accept the Root CA certificate by clicking on the OK button.

With these settings, the proxy server will start on port 7000, monitor all requests going through that port, and record them to a test plan using the default recording controller. For details, refer to the following screenshot:

Configuring the JMeter HTTP(S) Test Script Recorder

In older versions of JMeter (before version 2.10), what is now the HTTP(S) Test Script Recorder was referred to as the HTTP Proxy Server. While we have configured the HTTP(S) Test Script Recorder manually, the newer versions of JMeter (version 2.10 and later) come with prebundled templates that make commonly performed tasks, such as this, a lot easier. Using the bundled recorder template, we can set up the script recorder with just a few button clicks. To do this, click on the Templates… (1) button right next to the New file button on the toolbar.
Then, in the Select Template drop-down, select Recording (2). Change the port to your desired port (for example, 7000) and click on the Create (3) button. Refer to the following screenshot:

Configuring the JMeter HTTP(S) Test Script Recorder through the Recording template

Setting up your browser to use the proxy server

There are several ways to set up the browser of your choice to use the proxy server. We'll go over two of the most common ways, starting with my personal favorite, which is using a browser extension.

Using a browser extension

Google Chrome and Firefox have vibrant browser plugin ecosystems that allow you to extend the capabilities of your browser with each plugin that you choose. For setting up a proxy, I really like FoxyProxy (http://getfoxyproxy.org/). It is a neat add-on to the browser that allows you to set up various proxy settings and toggle between them on the fly, without having to mess around with system settings on the machine. It really makes the work hassle free. Thankfully, FoxyProxy has a plugin for Internet Explorer, Chrome, and Firefox. If you are using any of these, you are lucky! Go ahead and grab it!

Changing the machine system settings

For those who would rather configure the proxy natively on their operating system, we have provided the following steps for Windows and Mac OS. On Windows OS, perform the following steps to configure a proxy:

1. Click on Start, then click on Control Panel.
2. Click on Network and Internet.
3. Click on Internet Options.
4. In the Internet Options dialog box, click on the Connections tab.
5. Click on the Local Area Network (LAN) Settings button.
6. To enable the use of a proxy server, select the checkbox for Use a proxy server for your LAN (These settings will not apply to dial-up or VPN connections), as shown in the following screenshot.
7. In the Address box, enter localhost as the IP address.
8. In the Port number text box, enter 7000 (to match the port you set up for your JMeter proxy earlier).
9. If you want to bypass the proxy server for local IP addresses, select the Bypass proxy server for local addresses checkbox.
10. Click on OK to complete the proxy configuration process.

Manually setting the proxy on Windows 7

On Mac OS, perform the following steps to configure a proxy:

1. Go to System Preferences.
2. Click on Network.
3. Click on the Advanced… button.
4. Go to the Proxies tab.
5. Select the Web Proxy (HTTP) checkbox.
6. Under Web Proxy Server, enter localhost. For the port, enter 7000 (to match the port you set up for your JMeter proxy earlier).
7. Do the same for Secure Web Proxy (HTTPS).
8. Click on OK.

Manually setting the proxy on Mac OS

For all other systems, please consult the related operating system documentation. Now that all of that is out of the way and the connections have been made, let's get to recording using the following steps:

1. Point your browser to http://jmeter.apache.org/.
2. Click on the Changes link under About.
3. Click on the User Manual link under Documentation.
4. Stop the HTTP(S) Test Script Recorder by clicking on the Stop button, so that it doesn't record any more activities.

If you have done everything correctly, your actions will have been recorded under the test plan. Refer to the following screenshot for details. Congratulations! You have just recorded your first test plan. Admittedly, we have just scratched the surface of recording test plans, but we are off to a good start.
Recording your first scenario

Running your first recorded scenario

We can go right ahead and replay or run our recorded scenario now, but before that, let's add a listener or two to give us feedback on the results of the execution. There is no limit to the number of listeners we can attach to a test plan, but we will often use only one or two. For our test plan, let's add three listeners for illustrative purposes: a Graph Results listener, a View Results Tree listener, and an Aggregate Report listener. Each listener gathers a different kind of metric that can help analyze performance test results, as follows:

1. Right-click on Test Plan and navigate to Add | Listener | View Results Tree.
2. Right-click on Test Plan and navigate to Add | Listener | Aggregate Report.
3. Right-click on Test Plan and navigate to Add | Listener | Graph Results.

Just so we can see more interesting data, let's change some settings at the thread group level, as follows:

1. Click on Thread Group.
2. Under Thread Properties, set the values as follows:
   - Number of Threads (users): 10
   - Ramp-Up Period (in seconds): 15
   - Loop Count: 30

This will set our test plan up to run for ten users, with all users starting their test within 15 seconds, and have each user perform the recorded scenario 30 times. Before we can proceed with test execution, save the test plan by clicking on the save icon. Once saved, click on the start icon (the green play icon on the menu) and watch the test run. As the test runs, you can click on the Graph Results listener (or either of the other two) and watch results gathering in real time. This is one of the many features of JMeter. From the Aggregate Report listener, we can deduce that there were 600 requests made to each of the changes and user manual links. Also, we can see that most users (the 90% Line) got very good responses, below 200 milliseconds, for both. In addition, we see what the throughput is per second for the various links, and that there were no errors during our test run.

Results as seen through the Aggregate Report listener

Looking at the View Results Tree listener, we can see exactly which changes link requests failed and the reasons for their failure. This can be valuable information for developers or system engineers in diagnosing the root cause of the errors.

Results as seen via the View Results Tree listener

The Graph Results listener also gives a pictorial representation of what is seen in the View Results Tree listener in the preceding screenshot. If you click on it as the test goes on, you will see the graph get drawn in real time as the requests come in. The graph is mostly self-explanatory, with lines representing the average, median, deviation, and throughput. The Average, Median, and Deviation lines show the average, median, and deviation of the number of samplers per minute, respectively, while the Throughput line shows the average rate of network packets delivered over the network for our test run in bits per minute. Please consult a reference, for example Wikipedia, for a detailed explanation of these terms. The graph is also interactive, and you can go ahead and uncheck/check any of the irrelevant/relevant data. For example, we mostly care about the average and throughput. Let's uncheck Data, Median, and Deviation, and you will see that only the data plots for Average and Throughput remain. Refer to the following screenshot for details. With our little recorded scenario, you saw some major components that constitute a JMeter test plan.
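As a side note, once the plan has been saved, you don't need the GUI to replay it. Assuming the plan was saved as first-test.jmx (a filename chosen here purely for illustration), the following command runs it in non-GUI mode and writes the results to a file that the listeners can load later:

jmeter -n -t first-test.jmx -l results.jtl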
Let's record another scenario, this time using another application that will allow us to enter form values.

Excilys Bank case study

We'll borrow a website created by the wonderful folks at Excilys, a company focused on delivering skills and services in IT (http://www.excilys.com/). It's a light banking web application created for illustrative purposes. Let's start a new test plan, set up the test script recorder like we did previously, and start recording.

Results as seen through the Graph Results listener

Let's start with the following steps:

1. Point your browser to http://excilysbank.aws.af.cm/public/login.html.
2. Enter the username and password in the login form, as follows:
   - Username: user1
   - Password: password1
3. Click on the PERSONNAL CHECKING link.
4. Click on the Transfers tab.
5. Click on My Accounts.
6. Click on the Joint Checking link.
7. Click on the Transfers tab.
8. Click on the Cards tab.
9. Click on the Operations tab.
10. Click on the Log out button.
11. Stop the proxy server by clicking on the Stop button.

This concludes our recorded scenario. At this point, we can add listeners for gathering the results of our execution and then replay the recorded scenario as we did earlier. If we do, we will be in for a surprise (that is, if we don't use the bundled recorder template): we will have several failed requests after login, since we have not included the component that manages the sessions and cookies needed to successfully replay this scenario. Thankfully, JMeter has such a component, and it is called HTTP Cookie Manager. This seemingly simple, yet powerful component helps maintain an active session through HTTP cookies once our client has established a connection with the server after login. It ensures that a cookie is stored upon successful authentication and passed around for subsequent requests, hence allowing those to go through. Each JMeter thread (that is, user) has its own cookie storage area. That is vital, since you won't want a user gaining access to the site under another user's identity. This becomes more apparent when we test websites requiring authentication and authorization (like the one we just recorded) for multiple users. Let's add this to our test plan by right-clicking on Test Plan and navigating to Add | Config Element | HTTP Cookie Manager. Once added, we can now successfully run our test plan. At this point, we can simulate more load by increasing the number of threads at the thread group level. Let's go ahead and do that. If executed, the test plan will now pass, but this is not realistic. We have essentially just emulated one user repeating the scenario several times. All threads use the credentials of user1, meaning that all threads log in to the system as user1. That is not what we want. To make the test realistic, what we want is each thread authenticating as a different user of the application. In reality, your bank creates a unique user for you, and only you or your spouse will be privileged to see your account details. Your neighbor down the street, if he used the same bank, won't get access to your account (at least we hope not!). So with that in mind, let's tweak the test to accommodate such a scenario.

Parameterizing the script

We begin by adding a CSV Data Set Config component (Test Plan | Add | Config Element | CSV Data Set Config) to our test plan. Since it is expensive to generate unique random values at runtime, due to high CPU and memory consumption, it is advisable to define the test data upfront.
The CSV Data Set Config component is used to read lines from a file and split them into variables that can then be used to feed input into the test plan. JMeter gives you a choice as to the placement of this component within the test plan. You would normally add the component at the level of the HTTP request that needs values fed from it. In our case, this will be the login HTTP request, where the username and password are entered. Another option is to add it at the thread group level, that is, as a direct child of the thread group. If a particular dataset applies to only one thread group, it makes sense to add it at this level. The third place where this component can be placed is at the Test Plan root level. If a dataset applies to all running threads, then it makes sense to add it at the root level. In our opinion, this also makes your test plans more readable and maintainable, as it is easier to see what is going on when inspecting or troubleshooting a test plan, since this component can easily be seen at the root level rather than being deeply nested at other levels. So for our scenario, let's add this at the Test Plan root level. You can always move components around using drag and drop, even after adding them to the test plan.

CSV Data Set Config

Once added, the Filename entry is all that is needed if you have included headers in the input file. For example, the input file could be defined as follows:

user, password, account_id
user1, password1, 1

If the Variable Names field is left blank, then JMeter will use the first line of the input file as the variable names for the parameters. In cases where headers are not included, the variable names can be entered here. The other interesting setting here is Sharing mode. This defaults to All threads, meaning all running threads will use the same set of data. So in a case where you have two threads running, Thread1 will use the first line as input data, while Thread2 will use the second line. If the number of running threads exceeds the number of entries in the input data, then entries will be reused from the top of the file, provided that Recycle on EOF is set to True (the default). The other options for the sharing mode are Current thread group and Current thread. Use the former for cases where the dataset is specific to a certain thread group, and the latter for cases where the dataset is specific to each thread. The other properties of the component are self-explanatory, and additional information can be found in JMeter's online user guide. Now that the component is added, we need to parameterize the login HTTP request with the variable names defined in our file (or in the Variable Names entry of the component) so that the values can be dynamically bound during test execution. We do this by changing the value of the username to ${user} and the password to ${password}, respectively, on the HTTP login request. The values between ${} match the headers defined in the input file or the values specified in the Variable Names entry of the CSV Data Set Config component.

Binding parameter values for HTTP requests

We can now run our test plan and it should work as earlier, only this time the values are dynamically bound through the configuration we have set up. So far, we have run the test for a single user. Let's increase the thread group properties and run for ten users, with a ramp-up of 30 seconds, for one iteration. Now let's rerun our test.
Examining the test results, we notice that some requests failed with a status code of 403 (http://en.wikipedia.org/wiki/HTTP_403), which is an access denied error. This is because we are trying to access an account that does not belong to the logged-in user. In our sample, all users made a request for the same account, which only one user (user1) is allowed to see. You can trace this by adding a View Results Tree listener to the test plan and rerunning the test. If you closely examine some of the HTTP requests in the Request tab of the View Results Tree listener, you'll notice requests such as the following:

/private/bank/account/ACC1/operations.html
/private/bank/account/ACC1/year/2013/month/1/page/0/operations.json
…

Observant readers will have noticed that our input data file also contains an account_id column. We can leverage this column to parameterize all requests containing account numbers, so that the right account is picked for each logged-in user. To do this, consider the following line of code:

/private/bank/account/ACC1/operations.html

Change this to the following line of code:

/private/bank/account/ACC${account_id}/operations.html

Now, consider the following line of code:

/private/bank/account/ACC1/year/2013/month/1/page/0/operations.json

Change this to the following line of code:

/private/bank/account/ACC${account_id}/year/2013/month/1/page/0/operations.json

Make similar changes to the rest of the requests. Once completed, we can rerun our test plan and, this time, things are logically correct and will work fine. You can verify that all works as expected after the test execution by examining the View Results Tree listener, clicking on some account request URLs, and changing the response display from text to HTML; you should see an account other than ACC1.

Summary

We have covered quite a lot in this article. You learned how to configure JMeter and our browsers to help record test plans. In addition, you learned about some built-in components that can help us feed data into our test plan and/or extract data from server responses.

Resources for Article:

Further resources on this subject: Execution of Test Plans [article] Performance Testing Fundamentals [article] Data Acquisition and Mapping [article]

Using Mock Objects to Test Interactions
Packt
23 Apr 2015
25 min read
In this article by Siddharta Govindaraj, author of the book Test-Driven Python Development, we will look at the Event class. The Event class is very simple: receivers can register with the event to be notified when the event occurs. When the event fires, all the receivers are notified of the event.

(For more resources related to this topic, see here.)

A more detailed description is as follows:

- Event classes have a connect method, which takes a method or function to be called when the event fires
- When the fire method is called, all the registered callbacks are called with the same parameters that are passed to the fire method

Writing tests for the connect method is fairly straightforward: we just need to check that the receivers are being stored properly. But how do we write the tests for the fire method? This method does not change any state or store any value that we can assert on. The main responsibility of this method is to call other methods. How do we test that this is being done correctly? This is where mock objects come into the picture. Unlike ordinary unit tests that assert on object state, mock objects are used to test that the interactions between multiple objects occur as they should.

Hand writing a simple mock

To start with, let us look at the code for the Event class so that we can understand what the tests need to do. The following code is in the file event.py in the source directory:

class Event:
    """A generic class that provides signal/slot functionality"""

    def __init__(self):
        self.listeners = []

    def connect(self, listener):
        self.listeners.append(listener)

    def fire(self, *args, **kwargs):
        for listener in self.listeners:
            listener(*args, **kwargs)

The way this code works is fairly simple. Classes that want to get notified of the event should call the connect method and pass a function. This will register the function for the event. Then, when the event is fired using the fire method, all the registered functions will be notified of the event. The following is a walk-through of how this class is used:

>>> def handle_event(num):
...     print("I got number {0}".format(num))
...
>>> event = Event()
>>> event.connect(handle_event)
>>> event.fire(3)
I got number 3
>>> event.fire(10)
I got number 10

As you can see, every time the fire method is called, all the functions that registered with the connect method get called with the given parameters. So, how do we test the fire method? The walk-through above gives a hint. What we need to do is to create a function, register it using the connect method, and then verify that the function got notified when the fire method was called. The following is one way to write such a test:

import unittest
from ..event import Event


class EventTest(unittest.TestCase):
    def test_a_listener_is_notified_when_an_event_is_raised(self):
        called = False
        def listener():
            nonlocal called
            called = True

        event = Event()
        event.connect(listener)
        event.fire()
        self.assertTrue(called)

Put this code into the test_event.py file in the tests folder and run the test. The test should pass. The following is what we are doing:

1. First, we create a variable named called and set it to False.
2. Next, we create a dummy function. When the function is called, it sets called to True.
3. Finally, we connect the dummy function to the event and fire the event.
If the dummy function was successfully called when the event was fired, then the called variable will have been changed to True, and we assert that the variable is indeed what we expected. The dummy function we created above is an example of a mock. A mock is simply an object that is substituted for a real object in the test case. The mock records some information, such as whether it was called and what parameters were passed, and we can then assert that the mock was called as expected. Talking about parameters, we should write a test that checks that the parameters are being passed correctly. The following is one such test:

    def test_a_listener_is_passed_right_parameters(self):
        params = ()
        def listener(*args, **kwargs):
            nonlocal params
            params = (args, kwargs)
        event = Event()
        event.connect(listener)
        event.fire(5, shape="square")
        self.assertEquals(((5, ), {"shape": "square"}), params)

This test is the same as the previous one, except that it saves the parameters, which are then used in the assert to verify that they were passed properly. At this point, we can see some repetition coming up in the way we set up the mock function and then save some information about the call. We can extract this code into a separate class, as follows:

class Mock:
    def __init__(self):
        self.called = False
        self.params = ()

    def __call__(self, *args, **kwargs):
        self.called = True
        self.params = (args, kwargs)

Once we do this, we can use our Mock class in our tests as follows:

class EventTest(unittest.TestCase):
    def test_a_listener_is_notified_when_an_event_is_raised(self):
        listener = Mock()
        event = Event()
        event.connect(listener)
        event.fire()
        self.assertTrue(listener.called)

    def test_a_listener_is_passed_right_parameters(self):
        listener = Mock()
        event = Event()
        event.connect(listener)
        event.fire(5, shape="square")
        self.assertEquals(((5, ), {"shape": "square"}), listener.params)

What we have just done is to create a simple mocking class that is quite lightweight and good for simple uses. However, there are often times when we need much more advanced functionality, such as mocking a series of calls or checking the order of specific calls. Fortunately, Python has us covered with the unittest.mock module that is supplied as a part of the standard library.

Using the Python mocking framework

The unittest.mock module provided by Python is an extremely powerful mocking framework, yet at the same time it is very easy to use. Let us redo our tests using this library. First, we need to import the mock module at the top of our file as follows:

from unittest import mock

Next, we rewrite our first test as follows:

class EventTest(unittest.TestCase):
    def test_a_listener_is_notified_when_an_event_is_raised(self):
        listener = mock.Mock()
        event = Event()
        event.connect(listener)
        event.fire()
        self.assertTrue(listener.called)

The only change that we've made is to replace our own custom Mock class with the mock.Mock class provided by Python. That is it. With that single line change, our test is now using the inbuilt mocking class. The unittest.mock.Mock class is the core of the Python mocking framework. All we need to do is to instantiate the class and pass it in where it is required. The mock will record whether it was called in the called instance variable. How do we check that the right parameters were passed?
Let us look at the rewrite of the second test, as follows:

    def test_a_listener_is_passed_right_parameters(self):
        listener = mock.Mock()
        event = Event()
        event.connect(listener)
        event.fire(5, shape="square")
        listener.assert_called_with(5, shape="square")

The mock object automatically records the parameters that were passed in. We can assert on the parameters by using the assert_called_with method on the mock object. The method will raise an assertion error if the parameters don't match what was expected. In case we are not interested in testing the parameters (maybe we just want to check that the method was called), we can pass the value mock.ANY. This value will match any parameter passed. There is a subtle difference in the way normal assertions are called compared to assertions on mocks. Normal assertions are defined as a part of the unittest.TestCase class. Since our tests inherit from that class, we call the assertions on self, for example, self.assertEquals. On the other hand, the mock assertion methods are a part of the mock object, so you call them on the mock object, for example, listener.assert_called_with. Mock objects have the following four assertions available out of the box:

- assert_called_with: This method asserts that the last call was made with the given parameters
- assert_called_once_with: This assertion checks that the method was called exactly once, and with the given parameters
- assert_any_call: This checks that the given call was made at some point during the execution
- assert_has_calls: This assertion checks that a list of calls occurred

The four assertions are subtly different, and that shows up when the mock has been called more than once. The assert_called_with method only checks the last call, so if there was more than one call, the previous calls will not be asserted. The assert_any_call method will check if a call with the given parameters occurred anytime during execution. The assert_called_once_with assertion asserts for a single call, so if the mock was called more than once during execution, this assert will fail. The assert_has_calls assertion can be used to assert that a set of calls with the given parameters occurred. Note that there might have been more calls than what we checked for in the assertion, but the assertion will still pass as long as the given calls are present.

Let us take a closer look at the assert_has_calls assertion. Here is how we can write the same test using this assertion:

    def test_a_listener_is_passed_right_parameters(self):
        listener = mock.Mock()
        event = Event()
        event.connect(listener)
        event.fire(5, shape="square")
        listener.assert_has_calls([mock.call(5, shape="square")])

The mocking framework internally uses _Call objects to record calls. The mock.call function is a helper to create these objects. We just call it with the expected parameters to create the required call objects. We can then use these objects in the assert_has_calls assertion to assert that the expected call occurred. This method is useful when the mock was called multiple times and we want to assert only some of the calls.
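To make the differences between these four assertions concrete, here is a small standalone sketch (plain unittest.mock, no project code) showing how they behave when a mock has been called more than once:

from unittest import mock

listener = mock.Mock()
listener(1)
listener(2, shape="square")

listener.assert_called_with(2, shape="square")  # passes: only the last call is checked
listener.assert_any_call(1)                     # passes: the call happened at some point
listener.assert_has_calls([mock.call(1), mock.call(2, shape="square")])  # passes

try:
    listener.assert_called_once_with(2, shape="square")
except AssertionError:
    print("fails: the mock was called twice, not once")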
Mocking objects

While testing the Event class, we only needed to mock out single functions. A more common use of mocking is to mock a class. Take a look at the implementation of the Alert class in the following:

class Alert:
    """Maps a Rule to an Action, and triggers the action if the rule
    matches on any stock update"""

    def __init__(self, description, rule, action):
        self.description = description
        self.rule = rule
        self.action = action

    def connect(self, exchange):
        self.exchange = exchange
        dependent_stocks = self.rule.depends_on()
        for stock in dependent_stocks:
            exchange[stock].updated.connect(self.check_rule)

    def check_rule(self, stock):
        if self.rule.matches(self.exchange):
            self.action.execute(self.description)

Let's break down how this class works, as follows:

- The Alert class takes a Rule and an Action in the initializer.
- When the connect method is called, it takes all the dependent stocks and connects to their updated event.
- The updated event is an instance of the Event class that we saw earlier. Each Stock class has an instance of this event, and it is fired whenever a new update is made to that stock.
- The listener for this event is the self.check_rule method of the Alert class.
- In this method, the alert checks if the new update caused the rule to be matched.
- If the rule matched, it calls the execute method on the Action. Otherwise, nothing happens.

This class has a few requirements, as shown in the following, that need to be met. Each of these needs to be made into a unit test:

- If a stock is updated, the class should check if the rule matches
- If the rule matches, then the corresponding action should be executed
- If the rule doesn't match, then nothing happens

There are a number of different ways in which we could test this; let us go through some of the options. The first option is not to use mocks at all. We could create a rule, hook it up to a test action, and then update the stock and verify that the action was executed. The following is what such a test would look like:

import unittest
from datetime import datetime
from unittest import mock

from ..alert import Alert
from ..rule import PriceRule
from ..stock import Stock


class TestAction:
    executed = False

    def execute(self, description):
        self.executed = True


class AlertTest(unittest.TestCase):
    def test_action_is_executed_when_rule_matches(self):
        exchange = {"GOOG": Stock("GOOG")}
        rule = PriceRule("GOOG", lambda stock: stock.price > 10)
        action = TestAction()
        alert = Alert("sample alert", rule, action)
        alert.connect(exchange)
        exchange["GOOG"].update(datetime(2014, 2, 10), 11)
        self.assertTrue(action.executed)

This is the most straightforward option, but it requires a bit of code to set up, and there is the TestAction class that we need to create just for the test case. Instead of creating a test action, we could replace it with a mock action. We can then simply assert on the mock that it got executed. The following code shows this variation of the test case:

    def test_action_is_executed_when_rule_matches(self):
        exchange = {"GOOG": Stock("GOOG")}
        rule = PriceRule("GOOG", lambda stock: stock.price > 10)
        action = mock.MagicMock()
        alert = Alert("sample alert", rule, action)
        alert.connect(exchange)
        exchange["GOOG"].update(datetime(2014, 2, 10), 11)
        action.execute.assert_called_with("sample alert")

A couple of observations about this test: if you notice, action is not the usual Mock object that we have been using so far, but a MagicMock object.
A MagicMock object is like a Mock object, but it has special support for Python's magic methods, which are present on all classes, such as __str__ and hasattr. If we don't use MagicMock, we may sometimes get errors or strange behavior if the code uses any of these methods. The following example illustrates the difference:

>>> from unittest import mock
>>> mock_1 = mock.Mock()
>>> mock_2 = mock.MagicMock()
>>> len(mock_1)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: object of type 'Mock' has no len()
>>> len(mock_2)
0
>>>

In general, we will be using MagicMock in most places where we need to mock a class. Using Mock is a good option when we need to mock standalone functions, or in rare situations where we specifically don't want a default implementation for the magic methods. The other observation about the test is the way methods are handled. In the test above, we created a mock action object, but we didn't specify anywhere that this mock class should contain an execute method, nor how it should behave. In fact, we don't need to. When a method or attribute is accessed on a mock object, Python conveniently creates a mock method and adds it to the mock class. Therefore, when the Alert class calls the execute method on our mock action object, that method is added to our mock action. We can then check that the method was called by asserting on action.execute.called. The downside of Python's behavior of automatically creating mock methods when they are accessed is that a typo or change in interface can go unnoticed. For example, suppose we renamed the execute method in all the Action classes to run. If we run our test cases, they still pass. Why do they pass? Because the Alert class calls the execute method, and the test only checks that the execute method was called, which it was. The test does not know that the name of the method has been changed in all the real Action implementations and that the Alert class will not work when integrated with the actual actions. To avoid this problem, Python supports using another class or object as a specification. When a specification is given, the mock object only creates the methods that are present in the specification. All other method or attribute accesses will raise an error. Specifications are passed to the mock at initialization time via the spec parameter. Both the Mock and MagicMock classes support setting a specification. The following code example shows the difference when a spec parameter is set, compared to a default Mock object:

>>> from unittest import mock
>>> class PrintAction:
...     def run(self, description):
...         print("{0} was executed".format(description))
...

>>> mock_1 = mock.Mock()
>>> mock_1.execute("sample alert") # Does not give an error
<Mock name='mock.execute()' id='54481752'>

>>> mock_2 = mock.Mock(spec=PrintAction)
>>> mock_2.execute("sample alert") # Gives an error
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Python34\lib\unittest\mock.py", line 557, in __getattr__
    raise AttributeError("Mock object has no attribute %r" % name)
AttributeError: Mock object has no attribute 'execute'

Notice in the above example that mock_1 goes ahead and executes the execute method without any error, even though the method has been renamed in the PrintAction. On the other hand, by giving a spec, the call to the nonexistent execute method raises an exception.
Mocking return values

The second variant above showed how we could use a mock Action class in the test instead of a real one. In the same way, we can also use a mock rule instead of creating a PriceRule in the test. The alert calls the rule to see whether the new stock update caused the rule to be matched. What the alert does depends on whether the rule returns True or False. All the mocks we've created so far have not had to return a value; we were just interested in whether the right call was made or not. If we mock the rule, then we will have to configure it to return the right value for the test. Fortunately, Python makes that very simple to do. All we have to do is to set the return value as a parameter in the constructor of the mock object, as follows:

>>> matches = mock.Mock(return_value=True)
>>> matches()
True
>>> matches(4)
True
>>> matches(4, "abcd")
True

As we can see above, the mock just blindly returns the set value, irrespective of the parameters. Even the type or number of parameters is not considered. We can use the same procedure to set the return value of a method in a mock object, as follows:

>>> rule = mock.MagicMock()
>>> rule.matches = mock.Mock(return_value=True)
>>> rule.matches()
True
>>>

There is another way to set the return value, which is very convenient when dealing with methods in mock objects. Each mock object has a return_value attribute. We simply set this attribute to the return value, and every call to the mock will return that value, as shown in the following:

>>> from unittest import mock
>>> rule = mock.MagicMock()
>>> rule.matches.return_value = True
>>> rule.matches()
True
>>>

In the example above, the moment we access rule.matches, Python automatically creates a mock matches object and puts it in the rule object. This allows us to directly set the return value in one statement, without having to create a mock for the matches method. Now that we've seen how to set the return value, we can go ahead and change our test to use a mocked rule object, as shown in the following:

    def test_action_is_executed_when_rule_matches(self):
        exchange = {"GOOG": Stock("GOOG")}
        rule = mock.MagicMock(spec=PriceRule)
        rule.matches.return_value = True
        rule.depends_on.return_value = {"GOOG"}
        action = mock.MagicMock()
        alert = Alert("sample alert", rule, action)
        alert.connect(exchange)
        exchange["GOOG"].update(datetime(2014, 2, 10), 11)
        action.execute.assert_called_with("sample alert")

There are two calls that the Alert makes to the rule: one to the depends_on method and the other to the matches method. We set the return value for both of them and the test passes. In case no return value is explicitly set for a call, the default return value is a new mock object. The mock object is different for each method that is called, but consistent for a particular method. This means that if the same method is called multiple times, the same mock object will be returned each time.

Mocking side effects

Finally, we come to the Stock class. This is the final dependency of the Alert class. We're currently creating Stock objects in our test, but we could replace them with mock objects just like we did for the Action and PriceRule classes. The Stock class is again slightly different in behavior from the other two mock objects. The update method doesn't just return a value; its primary behavior in this test is to trigger the updated event. Only if this event is triggered will the rule check occur.
In order to do this, we must tell our mock stock class to fire the event when the update method is called. Mock objects have a side_effect attribute to enable us to do just this. There are many reasons we might want to set a side effect. Some of them are as follows:

- We may want to call another method, as in the case of the Stock class, which needs to fire the event when the update method is called.
- To raise an exception: this is particularly useful when testing error situations. Some errors, such as a network timeout, might be very difficult to simulate, and it is better to test using a mock that simply raises the appropriate exception.
- To return multiple values: these may be different values each time the mock is called, or specific values depending on the parameters passed.

Setting the side effect is just like setting the return value. The only difference is that the side effect is a lambda function. When the mock is executed, the parameters are passed to the lambda function and the lambda is executed. The following is how we would use this with a mocked out Stock class:

    def test_action_is_executed_when_rule_matches(self):
        goog = mock.MagicMock(spec=Stock)
        goog.updated = Event()
        goog.update.side_effect = lambda date, value: goog.updated.fire(self)
        exchange = {"GOOG": goog}
        rule = mock.MagicMock(spec=PriceRule)
        rule.matches.return_value = True
        rule.depends_on.return_value = {"GOOG"}
        action = mock.MagicMock()
        alert = Alert("sample alert", rule, action)
        alert.connect(exchange)
        exchange["GOOG"].update(datetime(2014, 2, 10), 11)
        action.execute.assert_called_with("sample alert")

So what is going on in that test?

1. First, we create a mock of the Stock class instead of using the real one.
2. Next, we add in the updated event. We need to do this because the Stock class creates this attribute at runtime in the __init__ scope. Because the attribute is set dynamically, MagicMock does not pick up the attribute from the spec parameter. We are setting an actual Event object here. We could set it as a mock as well, but it is probably overkill to do that.
3. Finally, we set the side effect for the update method in the mock stock object. The lambda takes the two parameters that the method does. In this particular example, we just want to fire the event, so the parameters aren't used in the lambda. In other cases, we might want to perform different actions based on the values of the parameters. Setting the side_effect attribute allows us to do that.

Just like the return_value attribute, the side_effect attribute can also be set in the constructor. Run the test and it should pass. The side_effect attribute can also be set to an exception or a list. If it is set to an exception, then the given exception will be raised when the mock is called, as shown in the following:

>>> m = mock.Mock()
>>> m.side_effect = Exception()
>>> m()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Python34\lib\unittest\mock.py", line 885, in __call__
    return _mock_self._mock_call(*args, **kwargs)
  File "C:\Python34\lib\unittest\mock.py", line 941, in _mock_call
    raise effect
Exception

If it is set to a list, then the mock will return the next element of the list each time it is called.
This is a good way to mock a function that has to return different values each time it is called, as shown in the following:

>>> m = mock.Mock()
>>> m.side_effect = [1, 2, 3]
>>> m()
1
>>> m()
2
>>> m()
3
>>> m()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Python34\lib\unittest\mock.py", line 885, in __call__
    return _mock_self._mock_call(*args, **kwargs)
  File "C:\Python34\lib\unittest\mock.py", line 944, in _mock_call
    result = next(effect)
StopIteration

As we have seen, the mocking framework's method of handling side effects using the side_effect attribute is very simple, yet quite powerful.

How much mocking is too much?

In the previous few sections, we've seen the same test written with different levels of mocking. We started off with a test that didn't use any mocks at all, and subsequently mocked out each of the dependencies one by one. Which one of these solutions is the best? As with many things, this is a point of personal preference. A purist would probably choose to mock out all dependencies. My personal preference is to use real objects when they are small and self-contained. I would not have mocked out the Stock class. This is because mocks generally require some configuration with return values or side effects, and this configuration can clutter the test and make it less readable. For small, self-contained classes, it is simpler to just use the real object. At the other end of the spectrum, classes that might interact with external systems, that take a lot of memory, or that are slow are good candidates for mocking out. Additionally, objects that require a lot of dependencies on other objects to initialize are candidates for mocking. With mocks, you just create an object, pass it in, and assert on the parts that you are interested in checking. You don't have to create an entirely valid object. Even here there are alternatives to mocking. For example, when dealing with a database, it is common to mock out the database calls and hardcode a return value into the mock. This is because the database might be on another server, and accessing it makes the tests slow and unreliable. However, instead of mocks, another option could be to use a fast in-memory database for the tests. This allows us to use a live database instead of a mocked out database. Which approach is better depends on the situation.

Mocks versus stubs versus fakes versus spies

We've been talking about mocks so far, but we've been a little loose with the terminology. Technically, everything we've talked about falls under the category of a test double. A test double is some sort of fake object that we use to stand in for a real object in a test case. Mocks are a specific kind of test double that record information about calls that have been made to them, so that we can assert on them later. Stubs are just an empty, do-nothing kind of object or method. They are used when we don't care about some functionality in the test. For example, imagine we have a method that performs a calculation and then sends an e-mail. If we are testing the calculation logic, we might just replace the e-mail sending method with an empty do-nothing method in the test case, so that no e-mails are sent out while the test is running. Fakes are a replacement of one object or system with a simpler one that facilitates easier testing. Using an in-memory database instead of the real one, or the way we created a dummy TestAction earlier in this article, would be examples of fakes. Finally, spies are objects that are like middlemen.
Like mocks, they record the calls so that we can assert on them later, but after recording, they continue execution to the original code. Spies are different from the other three in the sense that they do not replace any functionality. After recording the call, the real code is still executed. Spies sit in the middle and do not cause any change in the execution pattern.

Summary

In this article, you looked at how to use mocks to test interactions between objects. You saw how to hand write our own mocks, followed by using the mocking framework provided in the Python standard library.

Resources for Article:

Further resources on this subject: Analyzing a Complex Dataset [article] Solving problems – closest good restaurant [article] Importing Dynamic Data [article]

Constructing Common UI Widgets
Packt
22 Apr 2015
21 min read
One of the biggest features that draws developers to Ext JS is the vast array of UI widgets available out of the box. The ease with which they can be integrated with each other, and the attractive and consistent visuals each of them offers, are also big attractions. No other framework can compete on this front, and this is a huge reason Ext JS leads the field of large-scale web applications. In this article by Stuart Ashworth and Andrew Duncan, authors of the book Ext JS Essentials, we will look at how UI widgets fit into the framework's structure, how they interact with each other, and how we can retrieve and reference them. We will then delve under the surface and investigate the lifecycle of a component and the stages it will go through during the lifetime of an application.

(For more resources related to this topic, see here.)

Anatomy of a UI widget

Every UI element in Ext JS extends from the base component class Ext.Component. This class is responsible for rendering UI elements to the HTML document. Components are generally sized and positioned by the layouts used by their parent components and participate in the automatic component lifecycle process. You can imagine an instance of Ext.Component as a single section of the user interface, in a similar way that you might think of a DOM element when building traditional web interfaces. Each subclass of Ext.Component builds upon this simple fact and is responsible for generating more complex HTML structures or combining multiple Ext.Components to create a more complex interface. Ext.Component classes, however, can't contain other Ext.Components. To combine components, one must use the Ext.container.Container class, which itself extends from Ext.Component. This class allows multiple components to be rendered inside it and have their size and positioning managed by the framework's layout classes.

Components and HTML

Creating and manipulating UIs using components requires a slightly different way of thinking than you may be used to when creating interactive websites with libraries such as jQuery. The Ext.Component class provides a layer of abstraction from the underlying HTML and allows us to encapsulate additional logic to build and manipulate this HTML. This concept is different from the way other libraries allow you to manipulate UI elements and provides a hurdle for new developers to get over. The Ext.Component class generates HTML for us, which we rarely need to interact with directly; instead, we manipulate the configuration and properties of the component. The following code and screenshot show the HTML generated by a simple Ext.Component instance:

var simpleComponent = Ext.create('Ext.Component', {
    html    : 'Ext JS Essentials!',
    renderTo: Ext.getBody()
});

As you can see, a simple <DIV> tag is created, which is given some CSS classes and an autogenerated ID, and has the HTML config displayed inside it. This generated HTML is created and managed by the Ext.dom.Element class, which wraps a DOM element and its children, offering us numerous helper methods to interrogate and manipulate it. After it is rendered, each Ext.Component instance has its element instance stored in its el property. You can then use this property to manipulate the underlying HTML that represents the component. As mentioned earlier, the el property won't be populated until the component has been rendered to the DOM. You should put logic that depends on altering the raw HTML of the component in an afterrender event listener, or override the afterRender method.
The following example shows how you can manipulate the underlying HTML once the component has been rendered. It will set the background color of the element to red:

Ext.create('Ext.Component', {
    html     : 'Ext JS Essentials!',
    renderTo : Ext.getBody(),
    listeners: {
        afterrender: function(comp) {
            comp.el.setStyle('background-color', 'red');
        }
    }
});

It is important to understand that digging into and updating the HTML and CSS that Ext JS creates for you is a dangerous game to play, and can produce unexpected behavior when the framework tries to update things itself. There is usually a framework way to achieve the manipulation you want, which we recommend you look for first. We always advise new developers to try not to fight the framework too much when starting out. Instead, we encourage them to follow its conventions and patterns, rather than wrestling the framework into doing things the way they may have done when developing traditional websites and web apps.

The component lifecycle

When a component is created, it follows a lifecycle process that is important to understand, so as to have an awareness of the order in which things happen. By understanding this sequence of events, you will have a much better idea of where your logic will fit, and you can ensure you have control over your components at the right points.

The creation lifecycle

The following process is followed when a new component is instantiated and rendered to the document by adding it to an existing container. When a component is shown explicitly (for example, a floating component that is shown without being added to a parent), some additional steps are included. These are denoted with a * in the following process.

1. constructor: First, the class' constructor function is executed, which triggers all of the other steps in turn. By overriding this function, we can add any setup code required for the component.

2. Config options processed: The next thing to be handled is the config options that are present in the class. This involves each option's apply and update methods being called, if they exist, meaning the values are available via the getter from now onwards.

3. initComponent: The initComponent method is now called, and is generally used to apply configurations to the class and perform any initialization logic.

4. render: Once added to a container, or when the show method is called, the component is rendered to the document.

5. boxready: At this stage, the component is rendered and has been laid out by its parent's layout class, and is ready at its initial size. This event will only happen once, on the component's first layout.

6. activate (*): If the component is a floating item, the activate event will fire, showing that the component is the active one on the screen. This will also fire when the component is brought back into focus, for example, in a Tab panel when a tab is selected.

7. show (*): Similar to the previous step, the show event will fire when the component is finally visible on screen.

The destruction process

When we are removing a component from the Viewport and want to destroy it, it will follow a destruction sequence that we can use to ensure things are cleaned up sufficiently, so as to avoid memory leaks. The framework takes care of the majority of this cleanup for us, but it is important that we tidy up any additional things we instantiate.

1. hide (*): When a component is manually hidden (using the hide method), this event will fire, and any additional hide logic can be included here.
2. deactivate (*): Similar to the activate step, this is fired when the component becomes inactive. As with the activate step, this will happen when floating and nested components are hidden and are no longer the items under focus.

3. destroy: This is the final step in the teardown process, and is where the component and its internal properties and objects are cleaned up. At this stage, it is best to remove event handlers, destroy subclasses, and ensure any other references are released.

Component Queries

Ext JS boasts a powerful system for retrieving references to components, called Component Queries. This is a CSS/XPath-style query syntax that lets us target broad sets of components, or specific ones, within our application. For example, within our controller, we may want to find a button with the text "Save" within a component of type MyForm. In this section, we will demonstrate the Component Query syntax and how it can be used to select components. We will also go into detail about how it can be used within Ext.container.Container classes to scope selections.

xtypes

Before we dive in, it is important to understand the concept of xtypes in Ext JS. An xtype is a shorthand name for an Ext.Component that allows us to identify its declarative component configuration objects. For example, we can create a new Ext.Component as a child of an Ext.container.Container using an xtype with the following code:

Ext.create('Ext.Container', {
    items: [
        {
            xtype: 'component',
            html : 'My Component!'
        }
    ]
});

Using xtypes allows you to lazily instantiate components when required, rather than having them all created upfront. Common component xtypes include:

Class                      xtype
Ext.tab.Panel              tabpanel
Ext.container.Container    container
Ext.grid.Panel             gridpanel
Ext.Button                 button

xtypes form the basis of our Component Query syntax in the same way that element types (for example, div, p, span, and so on) do for CSS selectors. We will use these heavily in the following examples.
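Custom classes register their own xtypes in the same way the framework does. As a brief illustration, the following sketch shows a custom component being given an xtype via the alias config; the class and xtype names here are invented for this example and are not part of the framework:

Ext.define('MyApp.view.InfoPanel', {
    extend: 'Ext.Component',
    // The 'widget.' prefix registers this class under the 'infopanel' xtype
    alias : 'widget.infopanel',

    html: 'Some informational content'
});

Ext.create('Ext.container.Container', {
    renderTo: Ext.getBody(),
    items   : [
        // The component is only instantiated when the container needs it
        { xtype: 'infopanel' }
    ]
});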
Sample component structure

We will use the following sample component structure (a panel with a child tab panel, form, and buttons) to perform our example queries on:

var panel = Ext.create('Ext.panel.Panel', {
    height  : 500,
    width   : 500,
    renderTo: Ext.getBody(),
    layout  : {
        type : 'vbox',
        align: 'stretch'
    },
    items   : [
        {
            xtype : 'tabpanel',
            itemId: 'mainTabPanel',
            flex  : 1,
            items : [
                {
                    xtype : 'panel',
                    title : 'Users',
                    itemId: 'usersPanel',
                    layout: {
                        type : 'vbox',
                        align: 'stretch'
                    },
                    tbar  : [
                        {
                            xtype : 'button',
                            text  : 'Edit',
                            itemId: 'editButton'
                        }
                    ],
                    items : [
                        {
                            xtype  : 'form',
                            border : 0,
                            items  : [
                                {
                                    xtype     : 'textfield',
                                    fieldLabel: 'Name',
                                    allowBlank: false
                                },
                                {
                                    xtype     : 'textfield',
                                    fieldLabel: 'Email',
                                    allowBlank: false
                                }
                            ],
                            buttons: [
                                {
                                    xtype : 'button',
                                    text  : 'Save',
                                    action: 'saveUser'
                                }
                            ]
                        },
                        {
                            xtype  : 'grid',
                            flex   : 1,
                            border : 0,
                            columns: [
                                {
                                    header   : 'Name',
                                    dataIndex: 'Name',
                                    flex     : 1
                                },
                                {
                                    header   : 'Email',
                                    dataIndex: 'Email'
                                }
                            ],
                            store  : Ext.create('Ext.data.Store', {
                                fields: [
                                    'Name',
                                    'Email'
                                ],
                                data  : [
                                    {
                                        Name : 'Joe Bloggs',
                                        Email: '[email protected]'
                                    },
                                    {
                                        Name : 'Jane Doe',
                                        Email: '[email protected]'
                                    }
                                ]
                            })
                        }
                    ]
                }
            ]
        },
        {
            xtype       : 'component',
            itemId      : 'footerComponent',
            html        : 'Footer Information',
            extraOptions: {
                option1: 'test',
                option2: 'test'
            },
            height      : 40
        }
    ]
});

Queries with Ext.ComponentQuery

The Ext.ComponentQuery class is used to perform Component Queries, primarily through its query method. This method accepts two parameters: a query string, and an optional Ext.container.Container instance to use as the root of the selection (that is, only components below this one in the hierarchy will be returned). The method will return an array of components, or an empty array if none are found. We will work through a number of scenarios and use Component Queries to find a specific set of components.

Finding components based on xtype

As we have seen, we use xtypes like element types in CSS selectors.
We can select all the Ext.panel.Panel instances using its xtype, panel:

var panels = Ext.ComponentQuery.query('panel');

We can also add the concept of hierarchy by including a second xtype separated by a space. The following code will select all Ext.Button instances that are descendants (at any level) of an Ext.panel.Panel class:

var buttons = Ext.ComponentQuery.query('panel button');

We could also use the > character to limit it to buttons that are direct descendants of a panel:

var directDescendantButtons = Ext.ComponentQuery.query('panel > button');

Finding components based on attributes

It is simple to select a component based on the value of a property. We use the XPath-style syntax to specify the attribute and the value. The following code will select buttons with an action attribute of saveUser:

var saveButtons = Ext.ComponentQuery.query('button[action="saveUser"]');

Finding components based on itemIds

ItemIds are commonly used to retrieve components, and they are specially optimized for performance within the ComponentQuery class. They only need to be unique within their parent container, not globally unique like the id config. To select a component based on itemId, we prefix the itemId with a # symbol:

var usersPanel = Ext.ComponentQuery.query('#usersPanel');

Finding components based on member functions

It is also possible to identify matching components based on the result of a function of that component. For example, we can select all text fields whose values are valid (that is, when a call to the isValid method returns true):

var validFields = Ext.ComponentQuery.query('form > textfield{isValid()}');

Scoped Component Queries

All of our previous examples will search the entire component tree to find matches, but often we may want to keep our searches local to a specific container and its descendants. This can help reduce the complexity of the query and improve performance, as fewer components have to be processed. Ext.container.Container classes have three handy methods to do this: up, down, and query. We will take each of these in turn and explain their features.

up

This method accepts a selector and will traverse up the hierarchy to find a single matching parent component. This can be useful to find the grid panel that a button belongs to, so an action can be taken on it:

var grid = button.up('gridpanel');

down

This returns the first descendant component that matches the given selector:

var firstButton = grid.down('button');

query

The query method performs much like Ext.ComponentQuery.query, but is automatically scoped to the current container. This means that it will search all descendant components of the current container and return all matching ones as an array:

var allButtons = grid.query('button');

Hierarchical data with trees

Now that we know and understand components, their lifecycle, and how to retrieve references to them, we will move on to more specific UI widgets. The tree panel component allows us to display hierarchical data in a way that reflects the data's structure and relationships. In our application, we are going to use a tree panel to represent our navigation structure, allowing users to see how the different areas of the app are linked and structured.

Binding to a data source

Like all other data-bound components, tree panels must be bound to a data store. In this particular case, it must be an Ext.data.TreeStore instance or subclass, as the tree takes advantage of the extra features added to this specialist store class.
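For reference, a minimal TreeStore definition along these lines might look like the following sketch. The node structure and field values here are assumptions for illustration, not the book's actual store code; only the class name, storeId, and the Label field are taken from the surrounding text:

Ext.define('BizDash.store.Navigation', {
    extend : 'Ext.data.TreeStore',
    storeId: 'Navigation',

    // Declare the field displayed by the tree's column
    fields: ['Label'],

    root: {
        expanded: true,
        children: [
            { Label: 'Dashboard', leaf: true },
            {
                Label   : 'Admin',
                expanded: true,
                children: [
                    { Label: 'Users', leaf: true }
                ]
            }
        ]
    }
});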
We will make use of the BizDash.store.Navigation TreeStore to bind to our tree panel.

Defining a tree panel

The tree panel is defined in the Ext.tree.Panel class (which has an xtype of treepanel), which we will extend to create a custom class called BizDash.view.navigation.NavigationTree:

Ext.define('BizDash.view.navigation.NavigationTree', {
    extend: 'Ext.tree.Panel',
    alias : 'widget.navigation-NavigationTree',

    store  : 'Navigation',
    columns: [
        {
            xtype    : 'treecolumn',
            text     : 'Navigation',
            dataIndex: 'Label',
            flex     : 1
        }
    ],
    rootVisible: false,
    useArrows  : true
});

We configure the tree to be bound to our TreeStore by using its storeId, in this case, Navigation. A tree panel is a subclass of the Ext.panel.Table class (as is the Ext.grid.Panel class), which means it must have a columns configuration present. This tells the component what values to display as part of the tree. In a simple, traditional tree, we might only have one column showing the item and its children; however, we can define multiple columns and display additional fields in each row. This would be useful if we were displaying, for example, files and folders, and wanted additional columns to display the file type and file size of each item. In our example, we are only going to have one column, displaying the Label field. We do this by using the treecolumn xtype, which is responsible for rendering the tree's navigation elements. Without a treecolumn, the component won't display correctly. The treecolumn xtype's configuration allows us to define which of the attached data model's fields to use (dataIndex), the column's header text (text), and the fact that the column should fill the horizontal space (flex). Additionally, we set rootVisible to false, so the data's root node is hidden, as it has no real meaning other than holding the rest of the data together. Finally, we set useArrows to true, so items with children use an arrow instead of the +/- icon.

Summary

In this article, we learned how Ext JS components fit together, explored the lifecycle they follow when created and destroyed, and saw how Component Queries let us retrieve references to them.

Resources for Article:

Further resources on this subject: So, what is Ext JS? [article] Function passing [article] Static Data Management [article]

Third Party Libraries

Packt
21 Apr 2015
21 min read
In this article by Nathan Rozentals, author of the book Mastering TypeScript, we see that our TypeScript development environment would not amount to much if we were not able to reuse the myriad of existing JavaScript libraries, frameworks, and general goodness. However, in order to use a particular third party library with TypeScript, we will first need a matching definition file. Soon after TypeScript was released, Boris Yankov set up a GitHub repository to house TypeScript definition files for third party JavaScript libraries. This repository, named DefinitelyTyped (https://github.com/borisyankov/DefinitelyTyped), quickly became very popular, and is currently the place to go for high-quality definition files. DefinitelyTyped currently has over 700 definition files, built up over time from hundreds of contributors from all over the world. If we were to measure the success of TypeScript within the JavaScript community, then the DefinitelyTyped repository would be a good indication of how well TypeScript has been adopted. Before you go ahead and try to write your own definition files, check the DefinitelyTyped repository to see if one is already available.

In this article, we will have a closer look at using these definition files, and cover the following topics:

- Choosing a JavaScript framework
- Using TypeScript with Backbone
- Using TypeScript with Angular

(For more resources related to this topic, see here.)

Using third party libraries

In this section of the article, we will begin to explore some of the more popular third party JavaScript libraries, their declaration files, and how to write compatible TypeScript for each of these frameworks. We will compare Backbone and Angular, both of which are frameworks for building rich client-side JavaScript applications. During our discussion, we will see that some frameworks are highly compliant with the TypeScript language and its features, some are partially compliant, and some have very low compliance.

Choosing a JavaScript framework

Choosing a JavaScript framework or library to develop Single Page Applications is a difficult and sometimes daunting task. It seems that there is a new framework appearing every other month, promising more and more functionality for less and less code. To help developers compare these frameworks and make an informed choice, Addy Osmani wrote an excellent article named Journey Through the JavaScript MVC Jungle (http://www.smashingmagazine.com/2012/07/27/journey-through-the-javascript-mvc-jungle/). In essence, his advice is simple: it's a personal choice, so try some frameworks out and see what best fits your needs, your programming mindset, and your existing skill set. The TodoMVC project (http://todomvc.com), which Addy started, does an excellent job of implementing the same application in a number of MV* JavaScript frameworks. This really is a reference site for digging into a fully working application, and comparing for yourself the coding techniques and styles of different frameworks. Again, depending on the JavaScript library that you are using within TypeScript, you may need to write your TypeScript code in a specific way. Bear this in mind when choosing a framework; if it is difficult to use with TypeScript, then you may be better off looking at another framework with better integration. If it is easy and natural to work with the framework in TypeScript, then your productivity and overall development experience will be much better.
We will look at some of the popular JavaScript libraries, along with their declaration files, and see how to write compatible TypeScript. The key thing to remember is that TypeScript generates JavaScript, so if you are battling to use a third party library, crack open the generated JavaScript and see what code TypeScript is emitting. If the generated JavaScript matches the JavaScript code samples in the library's documentation, then you are on the right track. If not, then you may need to modify your TypeScript until the compiled JavaScript starts matching up with the samples. When trying to write TypeScript code for a third party JavaScript framework, particularly if you are working off the JavaScript documentation, your initial foray may just be one of trial and error. Along the way, you may find that you need to write your TypeScript in a specific way in order to match the particular library. The rest of this article shows how two different libraries require different ways of writing TypeScript.

Backbone

Backbone is a popular JavaScript library that gives structure to web applications by providing models, collections, and views, amongst other things. Backbone has been around since 2010 and has gained a very large following, with a wealth of commercial websites using the framework. According to Infoworld.com, Backbone has over 1,600 Backbone-related projects on GitHub that rate over 3 stars, meaning that it has a vast ecosystem of extensions and related libraries. Let's take a quick look at Backbone written in TypeScript. To follow along with the code in your own project, you will need to install the following NuGet packages: backbone.js (currently at v1.1.2) and backbone.TypeScript.DefinitelyTyped (currently at version 1.2.3).

Using inheritance with Backbone

From the Backbone documentation, we find an example of creating a Backbone.Model in JavaScript as follows:

var Note = Backbone.Model.extend(
    {
        initialize: function() {
            alert("Note Model JavaScript initialize");
        },
        author: function () { },
        coordinates: function () { },
        allowedToEdit: function(account) {
            return true;
        }
    }
);

This code shows a typical usage of Backbone in JavaScript. We start by creating a variable named Note that extends (or derives from) Backbone.Model. This can be seen with the Backbone.Model.extend syntax. The Backbone extend function uses JavaScript object notation to define an object within the outer curly braces { … }. In the preceding code, this object has four functions: initialize, author, coordinates, and allowedToEdit. According to the Backbone documentation, the initialize function will be called once a new instance of this class is created. Here, the initialize function simply creates an alert to indicate that it was called. The author and coordinates functions are blank at this stage, with only the allowedToEdit function actually doing something: it returns true. If we were to simply copy and paste the above JavaScript into a TypeScript file, we would generate the following compile error:

Build: 'Backbone.Model.extend' is inaccessible.

When working with a third party library and a definition file from DefinitelyTyped, our first port of call should be to see whether the definition file may be in error. After all, the JavaScript documentation says that we should be able to use the extend method as shown, so why is this definition file causing an error?
If we open up the backbone.d.ts file, and then search to find the definition of the class Model, we will find the cause of the compilation error: class Model extends ModelBase {      /**    * Do not use, prefer TypeScript's extend functionality.    **/    private static extend(        properties: any, classProperties?: any): any; This declaration file snippet shows some of the definition of the Backbone Model class. Here, we can see that the extend function is defined as private static, and as such, it will not be available outside the Model class itself. This, however, seems contradictory to the JavaScript sample that we saw in the documentation. In the preceding comment on the extend function definition, we find the key to using Backbone in TypeScript: prefer TypeScript's extend functionality. This comment indicates that the declaration file for Backbone is built around TypeScript's extends keyword – thereby allowing us to use natural TypeScript inheritance syntax to create Backbone objects. The TypeScript equivalent to this code, therefore, must use the extends TypeScript keyword to derive a class from the base class Backbone.Model, as follows: class Note extends Backbone.Model {    initialize() {      alert("Note model Typescript initialize");    }    author() { }    coordinates() { }    allowedToEdit(account) {        return true;    } } We are now creating a class definition named Note that extends the Backbone.Model base class. This class then has the functions initialize, author, coordinates and allowedToEdit, similar to the previous JavaScript version. Our Backbone sample will now compile and run correctly. With either of these versions, we can create an instance of the Note object by including the following script within an HTML page: <script type="text/javascript">    $(document).ready( function () {        var note = new Note();    }); </script> This JavaScript sample simply waits for the jQuery document.ready event to be fired, and then creates an instance of the Note class. As documented earlier, the initialize function will be called when an instance of the class is constructed, so we would see an alert box appear when we run this in a browser. All of Backbone's core objects are designed with inheritance in mind. This means that creating new Backbone collections, views and routers will use the same extends syntax in TypeScript. Backbone, therefore, is a very good fit for TypeScript, because we can use natural TypeScript syntax for inheritance to create new Backbone objects. Using interfaces As Backbone allows us to use TypeScript inheritance to create objects, we can just as easily use TypeScript interfaces with any of our Backbone objects as well. Extracting an interface for the Note class above would be as follows: interface INoteInterface {    initialize();    author();    coordinates();    allowedToEdit(account: string); } We can now update our Note class definition to implement this interface as follows: class Note extends Backbone.Model implements INoteInterface {    // existing code } Our class definition now implements the INoteInterface TypeScript interface. This simple change protects our code from being modified inadvertently, and also opens up the ability to work with core Backbone objects in standard object-oriented design patterns. We could, if we needed to, apply the Factory Pattern to return a particular type of Backbone Model – or any other Backbone object for that matter. 
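As a brief sketch of that idea, a simple factory might look like the following; the NoteFactory name and its create method are invented for this illustration and are not part of Backbone or the book's code:

class NoteFactory {
    // Returns a Backbone model typed against our interface,
    // so calling code does not depend on the concrete class
    static create(type: string): INoteInterface {
        switch (type) {
            case 'note':
                return new Note();
            default:
                throw new Error('Unknown model type: ' + type);
        }
    }
}

var note = NoteFactory.create('note');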
Using generic syntax

The declaration file for Backbone has also added generic syntax to some class definitions. This brings further strong typing benefits when writing TypeScript code for Backbone. Backbone collections (surprise, surprise) house a collection of Backbone models, allowing us to define collections in TypeScript as follows:

class NoteCollection extends Backbone.Collection<Note> {
    model = Note;
    //model: Note; // generates compile error
    //model: { new (): Note }; // ok
}

Here, we have a NoteCollection that derives from, or extends, a Backbone.Collection, but also uses generic syntax to constrain the collection to handle only objects of type Note. This means that any of the standard collection functions, such as at() or pluck(), will be strongly typed to return Note models, further enhancing our type safety and IntelliSense. Note the syntax used to assign a type to the internal model property of the collection class on the second line. We cannot use the standard TypeScript syntax model: Note, as this causes a compile time error. We need to assign the model property to the class definition, as seen with the model = Note syntax, or we can use the { new (): Note } syntax, as seen on the last line.

Using ECMAScript 5

Backbone also allows us to use ECMAScript 5 capabilities to define getters and setters for Backbone.Model classes, as follows:

interface ISimpleModel {
    Name: string;
    Id: number;
}

class SimpleModel extends Backbone.Model implements ISimpleModel {
    get Name() {
        return this.get('Name');
    }
    set Name(value: string) {
        this.set('Name', value);
    }
    get Id() {
        return this.get('Id');
    }
    set Id(value: number) {
        this.set('Id', value);
    }
}

In this snippet, we have defined an interface named ISimpleModel with two properties. We then define a SimpleModel class that derives from Backbone.Model and also implements the ISimpleModel interface. We then have ES5 getters and setters for our Name and Id properties. Backbone uses class attributes to store model values, so our getters and setters simply call the underlying get and set methods of Backbone.Model.

Backbone TypeScript compatibility

Backbone allows us to use all of TypeScript's language features within our code. We can use classes, interfaces, inheritance, generics, and even ECMAScript 5 properties. All of our classes also derive from base Backbone objects. This makes Backbone a highly compatible library for building web applications with TypeScript.

Angular

AngularJS (or just Angular) is also a very popular JavaScript framework, and is maintained by Google. Angular takes a completely different approach to building JavaScript SPAs, introducing an HTML syntax that the running Angular application understands. This provides the application with two-way data binding capabilities, which automatically synchronize models, views, and the HTML page. Angular also provides a mechanism for Dependency Injection (DI), and uses services to provide data to your views and models.
The example provided in the tutorial shows the following JavaScript: var phonecatApp = angular.module('phonecatApp', []); phonecatApp.controller('PhoneListCtrl', function ($scope) { $scope.phones = [    {'name': 'Nexus S',      'snippet': 'Fast just got faster with Nexus S.'},    {'name': 'Motorola XOOM™ with Wi-Fi',      'snippet': 'The Next, Next Generation tablet.'},    {'name': 'MOTOROLA XOOM™',      'snippet': 'The Next, Next Generation tablet.'} ]; }); This code snippet is typical of Angular JavaScript syntax. We start by creating a variable named phonecatApp, and register this as an Angular module by calling the module function on the angular global instance. The first argument to the module function is a global name for the Angular module, and the empty array is a place-holder for other modules that will be injected via Angular's Dependency Injection routines. We then call the controller function on the newly created phonecatApp variable with two arguments. The first argument is the global name of the controller, and the second argument is a function that accepts a specially named Angular variable named $scope. Within this function, the code sets the phones object of the $scope variable to be an array of JSON objects, each with a name and snippet property. If we continue reading through the tutorial, we find a unit test that shows how the PhoneListCtrl controller is used: describe('PhoneListCtrl', function(){    it('should create "phones" model with 3 phones', function() {      var scope = {},          ctrl = new PhoneListCtrl(scope);        expect(scope.phones.length).toBe(3); });   }); The first two lines of this code snippet use a global function called describe, and within this function another function called it. These two functions are part of a unit testing framework named Jasmine. We declare a variable named scope to be an empty JavaScript object, and then a variable named ctrl that uses the new keyword to create an instance of our PhoneListCtrl class. The new PhoneListCtrl(scope) syntax shows that Angular is using the definition of the controller just like we would use a normal class in TypeScript. Building the same object in TypeScript would allow us to use TypeScript classes, as follows: var phonecatApp = angular.module('phonecatApp', []);   class PhoneListCtrl {    constructor($scope) {        $scope.phones = [            { 'name': 'Nexus S',              'snippet': 'Fast just got faster' },            { 'name': 'Motorola',              'snippet': 'Next generation tablet' },            { 'name': 'Motorola Xoom',              'snippet': 'Next, next generation tablet' }        ];    } }; Our first line is the same as in our previous JavaScript sample. We then, however, use the TypeScript class syntax to create a class named PhoneListCtrl. By creating a TypeScript class, we can now use this class as shown in our Jasmine test code: ctrl = new PhoneListCtrl(scope). 
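One detail worth noting, which the tutorial snippets above imply rather than show: the TypeScript class still needs to be registered with the Angular module for the running application to find it. Since a TypeScript class compiles down to a constructor function, a minimal sketch (using the phonecatApp module from earlier) might look like this:

// Explicitly listing the injected dependencies keeps DI working
// even after the code has been minified
PhoneListCtrl.$inject = ['$scope'];

phonecatApp.controller('PhoneListCtrl', PhoneListCtrl);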
The constructor function of our PhoneListCtrl class now acts as the anonymous function seen in the original JavaScript sample:

phonecatApp.controller('PhoneListCtrl', function ($scope) {
    // this function is replaced by the constructor
}

Angular classes and $scope

Let's expand our PhoneListCtrl class a little further, and have a look at what it would look like when completed:

class PhoneListCtrl {
    myScope: IScope;

    constructor($scope, $http: ng.IHttpService, Phone) {
        this.myScope = $scope;
        this.myScope.phones = Phone.query();
        $scope.orderProp = 'age';

        _.bindAll(this, 'GetPhonesSuccess');
    }

    GetPhonesSuccess(data: any) {
        this.myScope.phones = data;
    }
};

The first thing to note in this class is that we are defining a variable named myScope, and storing the $scope argument that is passed in via the constructor in this internal variable. This is because of the way JavaScript determines the value of this when a function is invoked: without a stored reference, a callback would not reliably see the original $scope. Note the call to _.bindAll at the end of the constructor. This Underscore utility function will ensure that whenever the GetPhonesSuccess function is called, it will use the variable this in the context of the class instance, and not in the context of the calling code. The GetPhonesSuccess function uses the this.myScope variable within its implementation, which is why we needed to store the initial $scope argument in an internal variable. Another thing we notice from this code is that the myScope variable is typed to an interface named IScope, which will need to be defined as follows:

interface IScope {
    phones: IPhone[];
}

interface IPhone {
    age: number;
    id: string;
    imageUrl: string;
    name: string;
    snippet: string;
};

This IScope interface just contains an array of objects of type IPhone (pardon the unfortunate name of this interface; it can hold Android phones as well). What this means is that we don't have a standard interface or TypeScript type to use when dealing with $scope objects. By its nature, the $scope argument will change its type depending on when and where the Angular runtime calls it, hence our need to define an IScope interface and strongly type the myScope variable to this interface. Another interesting thing to note on the constructor function of the PhoneListCtrl class is the type of the $http argument. It is set to be of type ng.IHttpService. This IHttpService interface is found in the declaration file for Angular. In order to use TypeScript with Angular variables such as $scope or $http, we need to find the matching interface within our declaration file before we can use any of the Angular functions available on these variables. The last point to note in this constructor code is the final argument, named Phone. It does not have a TypeScript type assigned to it, and so automatically becomes of type any. Let's take a quick look at the implementation of this Phone service, which is as follows:

var phonecatServices =
    angular.module('phonecatServices', ['ngResource']);

phonecatServices.factory('Phone',
    [
        '$resource', ($resource) => {
            return $resource('phones/:phoneId.json', {}, {
                query: {
                    method: 'GET',
                    params: {
                        phoneId: 'phones'
                    },
                    isArray: true
                }
            });
        }
    ]
);

The first line of this code snippet again creates a global variable named phonecatServices, using the angular.module global function.
We then call the factory function available on the phonecatServices variable in order to define our Phone resource. This factory function uses the string 'Phone' to name the resource, and then uses Angular's dependency injection syntax to inject a $resource object. Looking through this code, we can see that we cannot easily create standard TypeScript classes for Angular to use here. Nor can we use standard TypeScript interfaces or inheritance on this Angular service.

Angular TypeScript compatibility

When writing Angular code with TypeScript, we are able to use classes in certain instances, but must rely on the underlying Angular functions, such as module and factory, to define our objects in other cases. Also, when using standard Angular services, such as $http or $resource, we need to specify the matching declaration file interface in order to use these services. We can therefore describe the Angular library as having medium compatibility with TypeScript.

Inheritance – Angular versus Backbone

Inheritance is a very powerful feature of object-oriented programming, and is also a fundamental concept when using JavaScript frameworks. Using a Backbone controller or an Angular controller within each framework relies on certain characteristics or functions being available, and each framework implements inheritance in a different way. As JavaScript does not have the concept of classical inheritance, each framework needs to find a way to implement it, so that the framework can allow us to extend base classes and their functionality. In Backbone, this inheritance implementation is via the extend function of each Backbone object. The TypeScript extends keyword follows a similar implementation to Backbone, allowing the framework and the language to dovetail each other. Angular, on the other hand, uses its own implementation of inheritance, and defines functions on the angular global namespace to create classes (that is, angular.module). We can also sometimes use the instance of an application (that is, <appName>.controller) to create modules or controllers. We have found, though, that Angular uses controllers in a very similar way to TypeScript classes, and we can therefore simply create standard TypeScript classes that will work within an Angular application. So far, we have only skimmed the surface of both the Angular TypeScript syntax and the Backbone TypeScript syntax. The point of this exercise was to try and understand how TypeScript can be used within each of these two third party frameworks. Be sure to visit http://todomvc.com and have a look at the full source code for the Todo application written in TypeScript for both Angular and Backbone. They can be found on the Compile-to-JS tab in the example section. These running code samples, combined with the documentation on each of these sites, will prove to be an invaluable resource when trying to write TypeScript syntax with an external third party library such as Angular or Backbone.

Angular 2.0

The Microsoft TypeScript team and the Google Angular team have just completed a months-long partnership, and have announced that the upcoming release of Angular, named Angular 2.0, will be built using TypeScript. Originally, Angular 2.0 was going to use a new language named AtScript for Angular development. During the collaboration between the Microsoft and Google teams, however, the features of AtScript that were needed for Angular 2.0 development have been implemented within TypeScript.
This means that the Angular 2.0 library will be classed as highly compatible with TypeScript, once the Angular 2.0 library and the 1.5 edition of the TypeScript compiler are available.

Summary

In this article, we looked at two third party libraries and discussed how to integrate them with TypeScript. We explored Backbone, which can be categorized as a highly compliant third party library, and Angular, which is a partially compliant library.

Resources for Article:

Further resources on this subject: Optimizing JavaScript for iOS Hybrid Apps [article] Introduction to TypeScript [article] Getting Ready with CoffeeScript [article]


Structure of Applications

Packt
21 Apr 2015
21 min read
In this article by Colin Ramsay, author of the book Ext JS Application Development Blueprints, we will learn that one of the great things about imposing structure is that it automatically gives predictability (a kind of filing system in which we immediately know where a particular piece of code should live). The same applies to the files that make up your application. Certainly, we could put all of our files in the root of the website, mixing CSS, JavaScript, configuration, and HTML files in a long alphabetical list, but we'd be losing out on a number of opportunities to keep our application organized. In this article, we'll look at:

- Ideas to structure your code
- The layout of a typical Ext JS application
- Use of singletons, mixins, and inheritance
- Why global state is a bad thing

Structuring your application is like keeping your house in order. You'll know where to find your car keys, and you'll be prepared for unexpected guests. (For more resources related to this topic, see here.)

Ideas for structure

One of the ways in which code is structured in large applications involves namespacing (the practice of dividing code up by naming identifiers). One namespace could contain everything relating to Ajax, whereas another could contain classes related to mathematics. Programming languages (such as C# and Java) even incorporate namespaces as a first-class language construct to help with code organization. Separating code into directories based on namespace becomes a sensible extension of this:

From left: Java's Platform API, Ext JS 5, and .NET Framework

A namespace identifier is made up of one or more name tokens, such as "Java" or "Ext", "Ajax" or "Math", separated by a symbol, in most cases a full stop/period. The top-level name will be an overarching identifier for the whole package (such as "Ext") and will become less specific as names are added and you drill down into the code base. The Ext JS source code makes heavy use of this practice to partition UI components, utility classes, and all the other parts of the framework, so let's look at a real example. The GridPanel component is perhaps one of the most complicated in the framework; a large collection of classes contribute to features such as columns, cell editing, selection, and grouping. These work together to create a highly powerful UI widget. Take a look at the following files that make up GridPanel:

The Ext JS grid component's directory structure

The grid directory reflects the Ext.grid namespace. Likewise, the subdirectories are child namespaces, with the deepest namespace being Ext.grid.filters.filter. The main Panel and View classes (Ext.grid.Panel and Ext.grid.View respectively) are in the main directory. Then, additional pieces of functionality (for example, the Column class and the various column subclasses) are further grouped together in their own subdirectories. We can also see a plugins directory, which contains a number of grid-specific plugins. Ext JS actually already has an Ext.plugins namespace. It contains classes to support the plugin infrastructure, as well as plugins that are generic enough to apply across the entire framework. In the event of uncertainty regarding the best place in the code base for a plugin, we might mistakenly have put it in Ext.plugins. Instead, Ext JS follows best practice and creates a new, more specific namespace underneath Ext.grid. Going back to the root of the Ext JS framework, we can see that there are only a few files at the top level.
In general, these will be classes that are either responsible for orchestrating other parts of the framework (such as EventManager or StoreManager) or classes that are widely reused across the framework (such as Action or Component). Any more specific functionality should be namespaced in a suitably specific way. As a rule of thumb, you can take your inspiration from the organization of the Ext JS framework, though as a framework rather than a full-blown application, it lacks some of the structural aspects we'll talk about shortly.

Getting to know your application

When generating an Ext JS application using Sencha Cmd, we end up with a code base that adheres to the concept of namespacing in class names and in the directory structure, as shown here:

The structure created with Sencha Cmd's "generate app" feature

We should be familiar with all of this, as it was already covered when we discussed MVVM in Ext JS. Having said that, some parts are worth examining further to see whether they're being used to the full.

/overrides

This is a handy one to help us fall into a positive and predictable pattern. There are some cases where you need to override Ext JS functionality on a global level. Maybe you want to change the implementation of a low-level class (such as Ext.data.proxy.Proxy) to provide custom batching behavior for your application. Sometimes, you might even find a bug in Ext JS itself and use an override to hotfix it until the next point release. The overrides directory provides a logical place to put these changes (just mirror the directory structure and namespacing of the code you're overriding). This also provides us with a helpful rule: subclasses go in /app, and overrides go in /overrides.

/.sencha

This contains configuration information and build files used by Sencha Cmd. In general, try to avoid fiddling around in here too much until you know Sencha Cmd inside out, because there's a chance you'll end up with nasty conflicts if you try to upgrade to a newer version of Sencha Cmd.

bootstrap.js, bootstrap.json, and bootstrap.css

The Ext JS class system has powerful dependency management through the requires feature, which gives us the means to create a build that contains only the code that's in use. The bootstrap files contain information about the minimum CSS and JavaScript needed to run your application, as provided by the dependency system.

/packages

In a similar way to Ruby's RubyGems and Node.js's npm, Sencha Cmd has the concept of packages (bundles that can be pulled into your application from a local or remote source). This allows you to reuse and publish bundles of functionality (including CSS, images, and other resources) to reduce copy-and-paste of code and share your work with the Sencha community. This directory is empty until you configure packages to be used in your app.

/resources and SASS

SASS is a technology that aids in the creation of complex CSS by promoting reuse and bringing powerful features (such as mixins and functions) to your style sheets. Ext JS uses SASS for its theme files and encourages you to use it as well.

index.html

We know that index.html is the root HTML page of our application. It can be customized as you see fit (although it's rare you'll need to).
There's one caveat, and it's written in a comment in the file already:

<!-- The line below must be kept intact for Sencha Cmd to build your application -->
<script id="microloader" type="text/javascript" src="bootstrap.js"></script>

We know what bootstrap.js refers to (loading up our application and starting to fulfill its dependencies according to the current build). So, heed the comment and leave this script tag, well, alone!

/build and build.xml

The /build directory contains build artifacts (the files created when the build process is run). If you run a production build, then you'll get a directory inside /build called production, and you should use only these files when deploying. The build.xml file allows you to avoid tweaking some of the files in /.sencha when you want to add some extra functionality to a build process. If you want to do something before, during, or after the build, this is the place to do it.

app.js

This is the main JavaScript entry point to your application. The comments in this file advise avoiding editing it, in order to allow Sencha Cmd to upgrade it in the future. The Application.js file at /app/Application.js can be edited without fear of conflicts, and will enable you to do the majority of things you might need to do.

app.json

This contains configuration options related to Sencha Cmd and to booting your application. When we refer to the subject of this article as a JavaScript application, we need to remember that it's still just a website composed of HTML, CSS, and JavaScript. However, when dealing with a large application that needs to target different environments, it's incredibly useful to augment this simplicity with tools that assist in the development process. At first, it may seem that the default application template contains a lot of cruft, but these files are the key to supporting the tools that will help you craft a solid product.

Cultivating your code

As you build your application, there will come a point at which you create a new class that doesn't logically fit into the directory structure Sencha Cmd created for you. Let's look at a few examples.

I'm a lumberjack – let's go log in

Many applications have a centralized SessionManager to take care of the currently logged-in user, perform authentication operations, and set up persistent storage for session credentials. There's only one SessionManager in an application. A truncated version might look like this:

/**
 * @class CultivateCode.SessionManager
 * A singleton that manages the current user's session.
 */
Ext.define('CultivateCode.SessionManager', {
    singleton: true,
    loggedIn : false,

    login: function(username, password) {
        // login impl
    },

    logout: function() {
        // logout impl
    },

    isLoggedIn: function() {
        return this.loggedIn;
    }
});

We create a singleton class. This class doesn't have to be instantiated using the new keyword. As per its class name, CultivateCode.SessionManager, it's a top-level class, and so it goes in the top-level directory. In a more complicated application, there could be a dedicated Session class too, and some other ancillary code, so maybe we'd create the following structure:

The directory structure for our session namespace

What about user interface elements? There's an informal practice in the Ext JS community that helps here. We want to create an extension that shows the coordinates of the currently selected cell (similar to cell references in Excel).
In this case, we'd create a ux directory (user experience, or user extensions) and then follow the naming conventions of the Ext JS framework:

Ext.define('CultivateCode.ux.grid.plugins.CoordViewer', {
    extend: 'Ext.plugin.Abstract',
    alias : 'plugin.coordviewer',

    mixins: {
        observable: 'Ext.util.Observable'
    },

    init: function(grid) {
        this.mon(grid.view, 'cellclick', this.onCellClick, this);
    },

    onCellClick: function(view, cell, colIdx, record, row, rowIdx, e) {
        var coords = Ext.String.format('Cell is at {0}, {1}', colIdx, rowIdx);

        Ext.Msg.alert('Coordinates', coords);
    }
});

When you click on a grid cell, the plugin responds by showing its coordinates in an alert. Also, the corresponding directory structure follows directly from the namespace. You can probably see a pattern emerging already. We've mentioned before that organizing an application is often about setting things up to fall into a position of success. A positive pattern like this is a good sign that you're doing things right. We've got a predictable system that should enable us to create new classes without having to think too hard about where they're going to sit in our application. Let's take a look at one more example: a mathematics helper class (one that is a little less obvious). Again, we can look at the Ext JS framework itself for inspiration. There's an Ext.util namespace containing over 20 general classes that just don't fit anywhere else. So, in this case, let's create CultivateCode.util.Mathematics, which contains our specialized methods for numerical work:

Ext.define('CultivateCode.util.Mathematics', {
    singleton: true,

    square: function(num) {
        return Math.pow(num, 2);
    },

    circumference: function(radius) {
        return 2 * Math.PI * radius;
    }
});

There is one caveat here, and it's an important one. There's a real danger that, rather than thinking about the namespace for your code and its place in your application, a lot of stuff ends up under the util namespace, thereby defeating its whole purpose. Take time to carefully check whether there's a more suitable location for your code before putting it in the util bucket. This is particularly applicable if you're considering adding lots of code to a single class in the util namespace. Looking again at Ext JS, there are lots of specialized namespaces (such as Ext.state or Ext.draw). If you were working with an application with lots of mathematics, perhaps you'd be better off with the following namespace and directory structure:

Ext.define('CultivateCode.math.Combinatorics', {
    // implementation here!
});

Ext.define('CultivateCode.math.Geometry', {
    // implementation here!
});

The directory structure for the math namespace is shown in the following screenshot:

This is another situation where there is no definitive right answer. It will come to you with experience, and will depend entirely on the application you're building. Over time, putting together these high-level application building blocks will become second nature.

Money can't buy class

Now that we're learning where our classes belong, we need to make sure that we're actually using the right type of class. Here's the standard way of instantiating an Ext JS class:

var geometry = Ext.create('MyApp.math.Geometry');

However, think about your code. Think how rare it is in Ext JS to actually manually invoke Ext.create. So, how else are the class instances created?
Singletons

A singleton is simply a class that has only one instance across the lifetime of your application. There are quite a number of singleton classes in the Ext JS framework. While the use of singletons in general is a contentious point in software architecture, they tend to be used fairly well in Ext JS. It could be that you prefer to implement the mathematical functions (we discussed earlier) as a singleton. For example, the following command could work:

var area = CultivateCode.math.areaOfCircle(radius);

However, most developers would implement a circle class:

var circle = Ext.create('CultivateCode.math.Circle', { radius: radius });
var area = circle.getArea();

This keeps the circle-related functionality partitioned off into the circle class. It also enables us to pass the circle variable around to other functions and classes for additional processing. On the other hand, look at Ext.Msg. Each of its methods is fire-and-forget; there's never going to be anything to perform further actions on. The same is true of Ext.Ajax. So, once more, we find ourselves with a question that does not have a definitive answer. It depends entirely on the context. This is going to happen a lot, but it's a good thing! This article isn't going to teach you a list of facts and figures; it's going to teach you to think for yourself. Read other people's code and learn from experience. This isn't coding by numbers! The other place you might find yourself reaching for the power of the singleton is when you're creating an overarching manager class (such as the inbuilt StoreManager or our previous SessionManager example). One of the objections to singletons is that they tend to be abused to store lots of global state, breaking down the separation of concerns we've set up in our code, as follows:

Ext.define('CultivateCode.ux.grid.GridManager', {
    singleton  : true,
    currentGrid: null,
    grids      : [],

    add: function(grid) {
        this.grids.push(grid);
    },

    setCurrentGrid: function(grid) {
        this.currentGrid = grid;
    }
});

No one wants to see this sort of thing in a code base. It brings behavior and state to a high level in the application. In theory, any part of the code base could call this manager, with unexpected results. Instead, we'd do something like this:

Ext.define('CultivateCode.view.main.Main', {
    extend: 'CultivateCode.ux.GridContainer',

    currentGrid: null,
    grids      : [],

    add: function(grid) {
        this.grids.push(grid);
    },

    setCurrentGrid: function(grid) {
        this.currentGrid = grid;
    }
});

We still have the same behavior (a way of collecting grids together), but now it's limited to a more contextually appropriate part of the application. Also, we're working with the MVVM system. We avoid global state and organize our code in a more correct manner. A win all round. As a general rule, if you can avoid using a singleton, do so. Otherwise, think very carefully to make sure that it's the right choice for your application, and that a standard class wouldn't better fit your requirements. In the previous example, we could have taken the easy way out and used a manager singleton, but it would have been a poor choice that would compromise the structure of our code.

Mixins

We're used to the concept of inheritance via subclassing in Ext JS: a grid extends a panel to take on all of its functionality. Mixins provide a similar opportunity to reuse functionality, augmenting an existing class with a thin slice of behavior.
An Ext.Panel "is an" Ext.Component, but it also "has a" pinnable feature that provides a pin tool via the Ext.panel.Pinnable mixin. In your code, you should be looking at mixins to provide a feature, particularly in cases where this feature can be reused. In the next example, we'll create a UI mixin called Shakeable, which provides a UI component with a shake method that draws the user's attention by rocking it from side to side:

Ext.define('CultivateCode.util.Shakeable', {
    mixinId: 'shakeable',

    shake: function() {
        var el = this.el,
            box = el.getBox(),
            left = box.x - (box.width / 3),
            right = box.x + (box.width / 3),
            end = box.x;

        el.animate({
            duration: 400,
            keyframes: {
                33: {
                    x: left
                },
                66: {
                    x: right
                },
                100: {
                    x: end
                }
            }
        });
    }
});

We use the animate method (which is itself provided to Ext.Element via a mixin) to set up some animation keyframes that move the component's element first left, then right, then back to its original position. Here's a class that implements it:

Ext.define('CultivateCode.ux.button.ShakingButton', {
    extend: 'Ext.Button',
    mixins: ['CultivateCode.util.Shakeable'],
    xtype : 'shakingbutton'
});

Also, it's used like this:

var btn = Ext.create('CultivateCode.ux.button.ShakingButton', {
    text: 'Shake It!'
});
btn.on('click', function(btn) {
    btn.shake();
});

The button has taken on the new shake method provided by the mixin. Now, if we'd like a class to have the Shakeable feature, we can reuse this mixin where necessary. In addition, mixins can simply be used to pull the functionality of a class out into logical chunks, rather than having a single file of many thousands of lines. Ext.Component is an example of this. In fact, most of its core functionality is found in classes that are mixed into Ext.Component. This is also helpful when navigating a code base. Methods that work together to build a feature can be grouped and set aside in a tidy little package. Let's take a look at a practical example of how an existing class could be refactored using a mixin. Here's the skeleton of the original:

Ext.define('CultivateCode.ux.form.MetaPanel', {
    extend: 'Ext.form.Panel',

    initialize: function() {
        this.callParent(arguments);
        this.addPersistenceEvents();
    },

    loadRecord: function(model) {
        this.buildItemsFromRecord(model);
        this.callParent(arguments);
    },

    buildItemsFromRecord: function(model) {
        // Implementation
    },

    buildFieldsetsFromRecord: function(model) {
        // Implementation
    },

    buildItemForField: function(field) {
        // Implementation
    },

    isStateAvailable: function() {
        // Implementation
    },

    addPersistenceEvents: function() {
        // Implementation
    },

    persistFieldOnChange: function() {
        // Implementation
    },

    restorePersistedForm: function() {
        // Implementation
    },

    clearPersistence: function() {
        // Implementation
    }
});

This MetaPanel does two things that the normal FormPanel does not:

1. It reads the Ext.data.Fields from an Ext.data.Model and automatically generates a form layout based on these fields. It can also generate field sets if the fields have the same group configuration value.
- When the values of the form change, it persists them to localStorage so that the user can navigate away and resume completing the form later. This is useful for long forms.

In reality, implementing these features would probably require more methods than the ones shown in the previous code skeleton. As the two extra features are clearly defined, it's easy enough to refactor this code to better describe our intent:

Ext.define('CultivateCode.ux.form.MetaPanel', {
    extend: 'Ext.form.Panel',

    mixins: [
        // Contains methods:
        // - buildItemsFromRecord
        // - buildFieldsetsFromRecord
        // - buildItemForField
        'CultivateCode.ux.form.Builder',

        // - isStateAvailable
        // - addPersistenceEvents
        // - persistFieldOnChange
        // - restorePersistedForm
        // - clearPersistence
        'CultivateCode.ux.form.Persistence'
    ],

    initialize: function() {
        this.callParent(arguments);
        this.addPersistenceEvents();
    },

    loadRecord: function(model) {
        this.buildItemsFromRecord(model);
        this.callParent(arguments);
    }
});

We have a much shorter file, and the behavior we're including in this class is described a lot more concisely. Rather than seven or more method bodies that may span a couple of hundred lines of code, we have two mixin references and the relevant methods extracted to well-named mixin classes; a sketch of one such mixin follows.
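As a rough sketch of where that extraction leads (the method bodies are placeholders, not the book's implementation), the persistence mixin could start out like this:

Ext.define('CultivateCode.ux.form.Persistence', {
    mixinId: 'formpersistence',

    isStateAvailable: function() {
        // Check localStorage for previously persisted values.
    },

    addPersistenceEvents: function() {
        // Listen for field changes and call persistFieldOnChange.
    },

    persistFieldOnChange: function() {
        // Write the changed field value to localStorage.
    },

    restorePersistedForm: function() {
        // Push any persisted values back into the form fields.
    },

    clearPersistence: function() {
        // Remove the persisted state once the form is submitted.
    }
});

The mixinId follows the same convention as the Shakeable example, and the host class calls these methods exactly as if they were its own.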
Summary

This article showed how the various parts of an Ext JS application can be organized into a form that eases the development process.

Resources for Article:

Further resources on this subject:
- CreateJS – Performing Animation and Transforming Function [article]
- Good time management in CasperJS tests [article]
- The Login Page using Ext JS [article]

Our First API in Go

Packt
14 Apr 2015
15 min read
This article is penned by Nathan Kozyra, the author of the book, Mastering Go Web Services. It quickly introduces—or reintroduces—some core concepts related to Go setup and usage, as well as the http package.

(For more resources related to this topic, see here.)

If you spend any time developing applications on the Web (or off it, for that matter), it won't be long before you find yourself facing the prospect of interacting with a web service or an API. Whether it's a library that you need or another application's sandbox with which you have to interact, the world of development relies in no small part on the cooperation among disparate applications, languages, and formats. That, after all, is why we have APIs to begin with: to allow standardized communication between any two given platforms.

If you spend enough time working on the Web, you'll encounter bad APIs. By bad, we mean APIs that are not all-inclusive, do not adhere to best practices and standards, are confusing semantically, or lack consistency. You'll encounter APIs that haphazardly use OAuth or simple HTTP authentication in some places and the opposite in others, or, more commonly, APIs that ignore the stated purposes of HTTP verbs.

Google's Go language is particularly well suited to servers. With its built-in HTTP serving, a simple method for XML and JSON encoding of data, high availability, and concurrency, it is the ideal platform for your API. We will cover the following topics in this article:

- Understanding requirements and dependencies
- Introducing the HTTP package

Understanding requirements and dependencies

Before we get too deep into the weeds in this article, it would be a good idea for us to examine the things that you will need to have installed.

Installing Go

It should go without saying that we will need to have the Go language installed. However, there are a few associated items that you will also need to install in order to do everything we do in this book.

Go is available for Mac OS X, Windows, and most common Linux variants. You can download the binaries at http://golang.org/doc/install. On Linux, you can generally grab Go through your distribution's package manager. For example, you can grab it on Ubuntu with a simple apt-get install golang command. Something similar exists for most distributions.

In addition to the core language, we'll also work a bit with the Google App Engine, and the best way to test with the App Engine is to install the Software Development Kit (SDK). This will allow us to test our applications locally prior to deploying them and simulate a lot of the functionality that is provided only on the App Engine. The App Engine SDK can be downloaded from https://developers.google.com/appengine/downloads. While we're obviously most interested in the Go SDK, you should also grab the Python SDK, as there are some minor dependencies that may not be available solely in the Go SDK.

Installing and using MySQL

We'll be using quite a few different databases and datastores to manage our test and real data, and MySQL will be one of the primary ones. We will use MySQL as a storage system for our users; their messages and their relationships will be stored in our larger application (we will discuss more about this in a bit).

MySQL can be downloaded from http://dev.mysql.com/downloads/.
You can also grab it easily from a package manager on Linux/OS X as follows:

- Ubuntu: sudo apt-get install mysql-server mysql-client
- OS X with Homebrew: brew install mysql

Redis

Redis is the first of the two NoSQL datastores that we'll be using for a couple of different demonstrations, including caching data from our databases as well as the API output. If you're unfamiliar with NoSQL, we'll do some pretty simple introductions to results gathering using both Redis and Couchbase in our examples. If you know MySQL, Redis will at least feel similar, and you won't need the full knowledge base to be able to use the application in the fashion in which we'll use it for our purposes.

Redis can be downloaded from http://redis.io/download, or installed on Linux/OS X as follows:

- Ubuntu: sudo apt-get install redis-server
- OS X with Homebrew: brew install redis

Couchbase

As mentioned earlier, Couchbase will be our second NoSQL solution, used in various products primarily to set short-lived or ephemeral key-store lookups to avoid bottlenecks, and as an experiment with in-memory caching. Unlike Redis, Couchbase uses simple REST commands to set and receive data, and everything exists in the JSON format.

Couchbase can be downloaded from http://www.couchbase.com/download:

- For Ubuntu (deb): dpkg -i couchbase-server-version.deb (where version is the release you downloaded)
- For OS X with Homebrew: brew install https://github.com/couchbase/homebrew/raw/stable/Library/Formula/libcouchbase.rb

Nginx

Although Go comes with everything you need to run a highly concurrent, performant web server, we're going to experiment with wrapping a reverse proxy around our results. We'll do this primarily as a response to real-world issues regarding availability and speed. Nginx is not available natively for Windows.

- For Ubuntu: apt-get install nginx
- For OS X with Homebrew: brew install nginx

Apache JMeter

We'll utilize JMeter for benchmarking and tuning our API for performance. You have a bit of a choice here, as there are several stress-testing applications for simulating traffic. The two we'll touch on are JMeter and Apache's built-in Apache Benchmark (AB) platform. The latter is a stalwart in benchmarking but is a bit limited in what you can throw at your API, so JMeter is preferred. One of the things that we'll need to consider when building an API is its ability to stand up to heavy traffic (and to introduce some mitigating actions when it cannot), so we'll need to know what our limits are.

Apache JMeter can be downloaded from http://jmeter.apache.org/download_jmeter.cgi.

Using predefined datasets

While it's not entirely necessary to have our dummy dataset, you can save a lot of time as we build our social network by bringing it in, because it is full of users, posts, and images. By using this dataset, you can skip creating this data to test certain aspects of the API and API creation.

Our dummy dataset can be downloaded at https://github.com/nkozyra/masteringwebservices.

Choosing an IDE

A choice of Integrated Development Environment (IDE) is one of the most personal choices a developer can make, and it's rare to find a developer who is not steadfastly passionate about their favorite. Nothing in this article will require one IDE over another; indeed, most of Go's strength in terms of compiling, formatting, and testing lies at the command-line level.
That said, we'd like to at least explore some of the more popular choices for editors and IDEs that exist for Go.

Eclipse

As one of the most popular and expansive IDEs available for any language, Eclipse is an obvious first mention. Most languages get their support in the form of an Eclipse plugin, and Go is no exception. There are some downsides to this monolithic piece of software: it is occasionally buggy on some languages, notoriously slow for some autocompletion functions, and a bit heavier than most of the other available options. However, the pluses are myriad. Eclipse is very mature and has a gigantic community from which you can seek support when issues arise. Also, it's free to use.

- Eclipse can be downloaded from http://eclipse.org/
- Get the Goclipse plugin at http://goclipse.github.io/

Sublime Text

Sublime Text is our particular favorite, but it comes with a large caveat: it is the only one listed here that is not free. This one feels more like a complete code/text editor than a heavy IDE, but it includes code completion options and the ability to integrate the Go compilers (or other languages' compilers) directly into the interface.

Although Sublime Text's license costs $70, many developers find its elegance and speed to be well worth it. You can try out the software indefinitely to see if it's right for you; it operates as nagware unless and until you purchase a license.

Sublime Text can be downloaded from http://www.sublimetext.com/2.

LiteIDE

LiteIDE is a much younger IDE than the others mentioned here, but it is noteworthy because it has a focus on the Go language. It's cross-platform and does a lot of Go's command-line magic in the background, making it truly integrated. LiteIDE also handles code autocompletion, go fmt, build, run, and test directly in the IDE, and offers a robust package browser. It's free and totally worth a shot if you want something lean and targeted directly at the Go language.

LiteIDE can be downloaded from https://code.google.com/p/golangide/.

IntelliJ IDEA

Right up there with Eclipse is the JetBrains family of IDEs, which has spanned approximately the same number of languages as Eclipse. Ultimately, both are primarily built with Java in mind, which means that sometimes other language support can feel secondary. The Go integration here, however, seems fairly robust and complete, so it's worth a shot if you have a license. If you do not have a license, you can try the Community Edition, which is free.

- You can download IntelliJ IDEA at http://www.jetbrains.com/idea/download/
- The Go language support plugin is available at http://plugins.jetbrains.com/plugin/?idea&id=5047

Some client-side tools

Although the vast majority of what we'll be covering will focus on Go and API services, we will be doing some visualization of client-side interactions with our API. In doing so, we'll primarily focus on straight HTML and JavaScript, but for our more interactive points, we'll also rope in jQuery and AngularJS. Most of what we do for client-side demonstrations will be available at this book's GitHub repository at https://github.com/nkozyra/goweb under client.

Both jQuery and AngularJS can be loaded dynamically from Google's CDN, which will prevent you from having to download and store them locally. The examples hosted on GitHub call these dynamically.
To load AngularJS dynamically, use the following code:

<script src="//ajax.googleapis.com/ajax/libs/angularjs/1.2.18/angular.min.js"></script>

To load jQuery dynamically, use the following code:

<script src="//ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>

Looking at our application

Throughout the book, we'll be building myriad small applications to demonstrate points, functions, libraries, and other techniques. However, we'll also focus on a larger project that mimics a social network wherein we create and return users, statuses, and so on, via the API. For that, you'll need to have a copy of it.

Setting up our database

As mentioned earlier, we'll be designing a social network that operates almost entirely at the API level (at least at first) as our master project in the book. Time and space won't allow us to cover all of this here in the article.

When we think of the major social networks (from the past and in the present), there are a few omnipresent concepts endemic among them, which are as follows:

- The ability to create a user and maintain a user profile
- The ability to share messages or statuses and have conversations based on them
- The ability to express pleasure or displeasure at said statuses/messages to dictate the worthiness of any given message

There are a few other features that we'll be building here, but let's start with the basics. Let's create our database in MySQL as follows:

create database social_network;

This will be the basis of our social network product in the book. For now, we'll just need a users table to store our individual users and their most basic information. We'll amend this to include more features as we go along:

CREATE TABLE users (
    user_id INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
    user_nickname VARCHAR(32) NOT NULL,
    user_first VARCHAR(32) NOT NULL,
    user_last VARCHAR(32) NOT NULL,
    user_email VARCHAR(128) NOT NULL,
    PRIMARY KEY (user_id),
    UNIQUE INDEX user_nickname (user_nickname)
);

We won't need to do too much in this article, so this should suffice. We'll have a user's most basic information—name, nickname, and e-mail, and not much else.

Introducing the HTTP package

The vast majority of our API work will be handled through REST, so you should become pretty familiar with Go's http package. In addition to serving via HTTP, the http package comprises a number of other very useful utilities that we'll look at in detail. These include cookie jars, setting up clients, reverse proxies, and more.

The primary entity we're interested in right now, though, is the http.Server struct, which provides the very basis of all of our server's actions and parameters. Within the server, we can set our TCP address, HTTP multiplexing for routing specific requests, timeouts, and header information. If you want granular control over these properties, you can initialize the struct directly:

server := &http.Server{
    Addr:           ":8080",
    Handler:        urlHandler,
    ReadTimeout:    1000 * time.Microsecond,
    WriteTimeout:   1000 * time.Microsecond,
    MaxHeaderBytes: 0,
    TLSConfig:      nil,
}

Go also provides a shortcut for invoking a server without directly initializing the struct; you can simply execute the following:

http.ListenAndServe(":8080", nil)

This will invoke a server struct for you and set only the Addr and Handler properties within. There will be times, of course, when we'll want more granular control over our server, but for the time being, this will do just fine.
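For completeness, here is a small runnable sketch of ours (not code from the book) showing how the customized struct from the previous listing would actually be started. The original snippet's urlHandler is whatever http.Handler you've defined; here we substitute http.DefaultServeMux so the sketch stands on its own:

package main

import (
    "log"
    "net/http"
    "time"
)

func main() {
    server := &http.Server{
        Addr:         ":8080",
        Handler:      http.DefaultServeMux, // or your own handler
        ReadTimeout:  5 * time.Second,
        WriteTimeout: 5 * time.Second,
    }

    // The method form honors the timeouts set above; the package-level
    // http.ListenAndServe shortcut would use the defaults instead.
    log.Fatal(server.ListenAndServe())
}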
Let's take this concept and output some JSON data via HTTP for the first time.

Quick hitter – saying Hello, World via API

As mentioned earlier in this article, we'll occasionally go off course and do some work that we'll preface with quick hitter to denote that it's unrelated to our larger project. In this case, we just want to rev up our http package and deliver some JSON to the browser. Unsurprisingly, we'll be merely outputting the uninspiring Hello, world message to, well, the world.

Let's set this up with our required package and imports:

package main

import (
    "encoding/json"
    "fmt"
    "net/http"
)

This is the bare minimum that we need to output a simple string in JSON via HTTP. Marshaling JSON data can be a bit more complex than what we'll look at here, so if the struct for our message doesn't immediately make sense, don't worry. This is our response struct, which contains all of the data that we wish to send to the client after grabbing it from our API:

type API struct {
    Message string `json:"message"`
}

There is not a lot here yet, obviously. All we're setting is a single message string in the obviously named Message variable. Finally, we need to set up our main function (as follows) to respond to a route and deliver a marshaled JSON response:

func main() {
    http.HandleFunc("/api", func(w http.ResponseWriter, r *http.Request) {
        message := API{"Hello, world!"}
        output, err := json.Marshal(message)
        if err != nil {
            fmt.Println("Something went wrong!")
        }
        fmt.Fprintf(w, string(output))
    })

    http.ListenAndServe(":8080", nil)
}

Upon entering main(), we register a handling function that responds to requests at /api by initializing an API struct with Hello, world! We then marshal this to a JSON byte array, output, cast that to a string, and send the result to our io.Writer (in this case, an http.ResponseWriter value). The last step is a kind of quick-and-dirty approach for sending our byte array through a function that expects a string, but there's not much that could go wrong in doing so.

Go handles typecasting pretty simply by applying the type as a function that flanks the target variable. In other words, we can cast an int64 value to an integer by simply surrounding it with the int(OurInt64) function. There are some exceptions to this: some types cannot be directly cast to others, and some conversions require a package such as strconv, but that's the general idea.

If we head over to our browser and call localhost:8080/api, you should get exactly what we expect, assuming everything went correctly:

{"message":"Hello, world!"}
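As a quick usage check (assuming the server from the previous listing is running on your machine), the same endpoint can also be exercised from the command line:

$ curl http://localhost:8080/api
{"message":"Hello, world!"}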
Summary

We've touched on the very basics of developing a simple web service interface in Go. Admittedly, this particular version is extremely limited and vulnerable to attack, but it shows the basic mechanisms that we can employ to produce usable, formalized output that can be ingested by other services. At this point, you should have the basic tools at your disposal that are necessary to start refining this process and our application as a whole.

Resources for Article:

Further resources on this subject:
- Adding Authentication [article]
- C10K – A Non-blocking Web Server in Go [article]
- Clusters, Parallel Computing, and Raspberry Pi – A Brief Background [article]

Managing Images

Packt
14 Apr 2015
11 min read
Cats, dogs, and all sorts of memes: the Internet as we know it today is dominated by images. You can open almost any web page and you'll surely find images on it. The more interactive our web browsing experience becomes, the more images we tend to use. So, it is tremendously important to ensure that the images we use are optimized and loaded as fast as possible. We should also make sure that we choose the correct image type. In this article by Dewald Els, author of the book Responsive Design High Performance, we will talk about why image formats are important, conditional loading, visibility for DOM elements, specifying sizes, media queries, introducing sprite sheets, and caching. Let's talk basics.

(For more resources related to this topic, see here.)

Choosing the correct image format

Deciding what image format to use is usually the first step you take when you start your website. Take a look at this table for an overview and comparison of the available image formats:

Format      Features
GIF         256 colors; support for animation; transparency
PNG         256 colors; true colors; transparency
JPEG/JPG    256 colors; true colors

From the preceding table, you can conclude that, if you had a complex image that was 1000 x 1000 pixels, the image in the JPEG format would be the smallest in file size. This also means that it would load the fastest. The smallest image is not always the best choice, though. If you need images with transparent parts, you'll have to use the PNG or GIF format, and if you need an animation, you are stuck with using a GIF format or the lesser-known APNG format.

Optimizing images

Optimizing your images can have a huge impact on your overall website performance. There are some great applications to help you with image optimization and compression. TinyPNG is a great example of a site that helps you compress PNG images online for free. They also have a Photoshop plugin that is available for download at https://tinypng.com/.

Another great application to help you with JPG compression is JPEGMini. Head over to http://www.jpegmini.com/ to get a copy for either Windows or Mac OS X. Another application worth considering is the Radical Image Optimization Tool (RIOT). It is a free Windows program and can be found at http://luci.criosweb.ro/riot/.

Seeing as JPEG is not the only image format we use on the Web, you can also look at a Mac OS X application called ImageOptim (http://www.imageoptim.com). It is also a free application, and it compresses both JPEG and PNG images. If you are not on Mac OS X, you can head over to https://tinypng.com/. This handy little site allows you to upload your image, which is then compressed; the optimized images are then linked on the site as downloadable files.

As JPEG images make up the majority of most web pages, with some exceptions, let's take a look at how to make them load faster.

Progressive images

Most advanced image editors, such as Photoshop and GIMP, give you the option to encode your JPEG images as either baseline or progressive. If you use Save For Web in Photoshop, you will see the encoding options (Baseline or Progressive) at the top of the dialog box.

In most cases, for use on web pages, I would advise you to use the Progressive encoding type. When you save an image using baseline, the full image data of every pixel block is written to the file one after the other. Baseline images load gradually from the top-left corner.
If you save an image using the Progressive option, the file stores only a part of each of these blocks, then another part, and so on, until the entire image's information is captured. When you render a progressive image, you will see a very grainy image at first, which gradually becomes sharper as it loads. Progressive images are also smaller than baseline images for various technical reasons, which means that they load faster. In addition, they appear to load faster, because something is displayed on the screen sooner. In a typical side-by-side comparison of the two encodings loading in a browser, the progressive image is already displayed in rough form while the baseline image is still loading from the top.

Alright, that was some really basic stuff, but it was extremely important nonetheless. Let's move on to conditional loading.

Adaptive images

Adaptive images are an adaptation of Filament Group's context-aware image sizing experiment. What does it do? Well, this is what the guys say about themselves:

"Adaptive images detects your visitor's screen size and automatically creates, caches, and delivers device appropriate re-scaled versions of your web page's embedded HTML images. No mark-up changes needed. It is intended for use with Responsive Designs and to be combined with Fluid Images techniques."

It certainly trumps the experiment in simplicity of implementation. So, how does it work? It's quite simple: there is no need to change any of your current code. Head over to http://adaptive-images.com/download.htm and get the latest version of Adaptive Images. Place the adaptive-images.php file in the root of your site, and make sure to add the content of the .htaccess file to your own. Then head over to the index file of your site and add this in the <head> tags:

<script>document.cookie='resolution='+Math.max(screen.width,screen.height)+'; path=/';</script>

Note that it has to be in the <head> tag of your site. Open the adaptive-images.php file and add your media query values to the $resolutions variable. Here is a snippet of code that is pretty self-explanatory:

$resolutions   = array(1382, 992, 768, 480);
$cache_path    = "ai-cache";
$jpg_quality   = 80;
$sharpen       = TRUE;
$watch_cache   = TRUE;
$browser_cache = 60*60*24*7;

The $resolutions variable accepts the breakpoints that you use for your website. You simply add the values of the screen widths in pixels. So, in the preceding example, it would read 1382 pixels as the first breakpoint, 992 pixels as the second one, and so on.

The cache path tells Adaptive Images where to store the generated resized images. It's a relative path from your document root, so, in this case, your folder structure would read as document_root/ai-cache/{images stored here}.

The next variable, $jpg_quality, sets the quality of any generated JPGs on a scale of 0 to 100. Shrinking images can blur details, so set $sharpen to TRUE to perform a sharpening process on rescaled images. When you set $watch_cache to TRUE, you force Adaptive Images to check that the adapted image isn't stale; that is, it ensures that updated source images are re-cached. Lastly, $browser_cache sets how long the browser cache should last. The value is built up as seconds, minutes, hours, days (60*60*24*7, that is, 7 days by default). You can change the last digit to modify the number of days; if you want images to be cached for two days, simply change the last value to 2.
Then,… oh wait, that's all? It is indeed! Adaptive Images will work with your existing website and doesn't require any markup changes. It is also device-agnostic and follows a mobile-first philosophy.

Conditional loading

Responsive designs combine three main techniques, which are as follows:

- Fluid grids
- Flexible images
- Media queries

The technique that I want to focus on in this section is media queries. In most cases, developers use media queries to change the layout, width, height, padding, font size, and so on, depending on conditions related to the viewport. Let's see how we can achieve conditional image loading using CSS3's image-set function:

.my-background-img {
    background-image: image-set(
        url(icon1x.jpg) 1x,
        url(icon2x.jpg) 2x);
}

You can see in the preceding CSS3 code that the image is loaded conditionally, based on the display type. The second entry, url(icon2x.jpg) 2x, would load the high-resolution (retina) image on a high-density display. This reduces the number of CSS rules we have to create; maintaining a site with a lot of background images can become quite a chore if a separate rule exists for each one.
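One practical wrinkle the excerpt doesn't show: at the time of writing, image-set shipped behind a vendor prefix in WebKit-based browsers. A defensive version of the rule (our sketch, not the author's) stacks a plain fallback, the prefixed form, and the standard form:

.my-background-img {
    /* Fallback for browsers with no image-set support */
    background-image: url(icon1x.jpg);
    /* Prefixed form for WebKit-based browsers */
    background-image: -webkit-image-set(
        url(icon1x.jpg) 1x,
        url(icon2x.jpg) 2x);
    /* Standard form; the last supported declaration wins */
    background-image: image-set(
        url(icon1x.jpg) 1x,
        url(icon2x.jpg) 2x);
}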
Here is a simple media query example:

@media screen and (max-width: 480px) {
    .container {
        width: 320px;
    }
}

As I'm sure you already know, this snippet tells the browser that, for any device with a viewport of fewer than 480 pixels, any element with the class container has to be 320 pixels wide. When you use media queries, always make sure to include the viewport <meta> tag in the head of your HTML document, as follows:

<meta name="viewport" content="width=device-width, initial-scale=1">

I've included this template here as I like to start with it; it makes it really easy to get started with new responsive projects:

/* MOBILE */
@media screen and (max-width: 480px) {
    .container {
        width: 320px;
    }
}

/* TABLETS */
@media screen and (min-width: 481px) and (max-width: 720px) {
    .container {
        width: 480px;
    }
}

/* SMALL DESKTOP OR LARGE TABLETS */
@media screen and (min-width: 721px) and (max-width: 960px) {
    .container {
        width: 720px;
    }
}

/* STANDARD DESKTOP */
@media screen and (min-width: 961px) and (max-width: 1200px) {
    .container {
        width: 960px;
    }
}

/* LARGE DESKTOP */
@media screen and (min-width: 1201px) and (max-width: 1600px) {
    .container {
        width: 1200px;
    }
}

/* EXTRA LARGE DESKTOP */
@media screen and (min-width: 1601px) {
    .container {
        width: 1600px;
    }
}

When you view a website on a desktop, it's quite common to have a left and a right column. Generally, the left column contains information that requires more focus and the right column contains content of slightly less importance. In some cases, you might even have three columns. Take the social website Facebook as an example: at the time of writing this article, Facebook used a three-column layout.

When you view a web page on a mobile device, you won't be able to fit all three columns into the smaller viewport. So, you'd probably want to hide some of the columns and not request the data that is usually displayed in the hidden columns.

Alright, we've done some talking. Well, you've done some reading. Now, let's get into our code! Our goal in this section is to learn about conditional development, with the focus on images. I've constructed a little website with a two-column layout: the left column houses the content, and the right column is used to populate a little news feed. I also made a simple PHP script that returns a JSON object with the news items. The mobile and tablet views we will work on are the result of queries such as the following:

/* MOBILE */
@media screen and (max-width: 480px) {
}

/* TABLETS */
@media screen and (min-width: 481px) and (max-width: 720px) {
}
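To make the "don't even request it" idea concrete, here is a small sketch of ours (the element ID and the newsfeed.php endpoint are illustrative, not taken from the article's demo site) that only fetches the feed when the viewport is wide enough to show the right-hand column:

<script>
// Mirror the CSS breakpoint in JavaScript: only request the news
// feed when the viewport is wider than the mobile cutoff.
if (window.matchMedia('(min-width: 481px)').matches) {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', 'newsfeed.php', true); // illustrative endpoint
    xhr.onload = function () {
        // Assumes a right-hand column element with this ID exists.
        document.getElementById('news-feed').innerHTML = xhr.responseText;
    };
    xhr.send();
}
</script>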
Summary

Managing images is no small feat on a website. Almost all modern websites rely heavily on images to present content to their users. In this article, we looked at which image formats to use and when, and at how to optimize your images for websites. We discussed the difference between progressive and baseline images, and we briefly covered how conditional loading can be used to improve your site's performance.

Resources for Article:

Further resources on this subject:
- A look into responsive design frameworks [article]
- Building Responsive Image Sliders [article]
- Creating a Responsive Project [article]

Creating a Responsive Project

Packt
08 Apr 2015
14 min read
In today's ultra-connected world, a good portion of your students probably own multiple devices. Of course, they may want to take your eLearning course on all of their devices. They might want to start the course on their desktop computer at work, continue it on their phone while commuting back home, and finish it at night on their tablet. In other situations, students might only have a mobile phone available to take the course, and sometimes the topic being taught only makes sense on a mobile device. To address these needs, you want to deliver your course on multiple screens.

As of Captivate 6, you can publish your courses in HTML5, which makes them available on mobile devices that do not support the Flash technology. Now, Captivate 8 takes it one huge step further by introducing Responsive Projects. A Responsive Project is a project that you can optimize for the desktop, the tablet, and the mobile phone. It is like providing three different versions of the course in a single project.

In this article, by Damien Bruyndonckx, author of the book Mastering Adobe Captivate 8, you will be introduced to the key concepts and techniques used to create a responsive project in Captivate 8. While reading, keep the following two things in mind. First, everything you have learned so far can be applied to a responsive project. Second, creating a responsive project requires more experience than what a book can offer. I hope that this article will give you a solid understanding of the core concepts in order to jump-start your own discovery of Captivate 8 Responsive Projects.

(For more resources related to this topic, see here.)

About Responsive Projects

A Responsive Project is meant to be used on multiple devices, including tablets and smartphones that do not support the Flash technology. Therefore, it can be published only in HTML5. This means that all the restrictions of a traditional HTML5 project also apply to a Responsive Project. For example, you will not be able to add Text Animations or Rollover Objects in a Responsive Project, because these features are not supported in HTML5.

Responsive design is not limited to eLearning projects made in Captivate. It is actually used by web designers and developers around the world to create websites that have the ability to automatically adapt themselves to the screen they are viewed on. To do so, they need to detect the screen width that is available to their content and adapt accordingly.

Responsive Design by Ethan Marcotte

If you want to know more about responsive design, I strongly recommend a book by Ethan Marcotte in the A Book Apart collection. This is the founding book of responsive design. If you have some knowledge of HTML and CSS, it is a must-have resource for fully understanding what responsive design is all about. More information on this book can be found at http://www.abookapart.com/products/responsive-web-design.

Viewport size versus screen size

At the heart of the responsive design approach is the width of the screen used by the student to consume the content. To be more exact, it is the width of the viewport that is detected, not the width of the screen. The viewport is the area that is actually available to the content.
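In browser terms (a small illustration of ours, not something the article asks you to build), the two measurements come from different DOM properties:

<script>
  // The screen size is fixed for a given device...
  console.log('Screen:   ' + screen.width + ' x ' + screen.height);
  // ...but the viewport shrinks and grows with the browser window
  // and with the browser's own interface elements.
  console.log('Viewport: ' + window.innerWidth + ' x ' + window.innerHeight);
</script>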
On a desktop or laptop computer, the difference between the screen width and the viewport width is very easy to understand. Let's do a simple experiment to grasp that concept hands-on:

1. Open your default web browser and make sure it is in fullscreen mode.
2. Browse to http://www.viewportsizes.com/mine. The main information provided by this page is the size of your viewport. Because your web browser is currently in fullscreen mode, the viewport size should be close (but not quite the same) to the resolution of your screen.
3. Use your mouse to resize your browser window and see how the viewport size evolves. The size of the viewport changes as you resize the browser window, but the actual screen you use is always the same.

This viewport concept is also valid on a mobile device, even though it may be a bit subtler to grasp. On an iPad mini, for example, the http://www.viewportsizes.com/mine page reports one viewport size when the device is held in landscape and another when it is held in portrait; once again, the viewport size changes, but the actual screen used is always the same. Don't hesitate to perform these experiments on your own mobile devices and compare your results.

Another thing that might affect the viewport size on a mobile device is the browser used. On the same iPad mini held in portrait mode, the viewport size reported by Chrome mobile is slightly different from the one reported by Safari mobile. This is due to the interface elements of each browser (such as the address bar and the tabs), which consume a variable portion of the screen real estate.

Understanding breakpoints

Before setting up your own Responsive Project, there is one more concept to explore. To discover this second concept, you will perform another simple experiment, this time with your desktop or laptop computer:

1. Open the web browser of your desktop or laptop computer and maximize it to fullscreen size.
2. Browse to http://courses.dbr-training.eu/8/goingmobile. This is the online version of the Responsive Project that you will build in this article. When viewed on a desktop or laptop computer in fullscreen mode, you should see a version of the course optimized for larger screens.
3. Use your mouse to slowly scale your browser window down. Note how the size and the position of the elements are automatically recalculated as you resize the browser window.
4. At some point, you should see that the height of the slide changes and that another layout is applied. The point at which the layout changes is situated at a width of exactly 768 px. In other words, if the width of the viewport is above 768 px, one layout is applied, but if the width of the viewport falls under 768 px, another layout is applied. You just discovered a breakpoint.

The layout that is applied after the breakpoint (in other words, when the viewport width is lower than 768 px) is optimized for a tablet device held in portrait mode. Note that even though you are using a desktop or laptop computer, it is the tablet-optimized layout that is applied when the viewport width is at or under 768 px.

5. Keep scaling the browser window down and see how the position and the size of the elements of the slide are recalculated in real time as you resize the browser window.

This simple experiment should better explain what a breakpoint is and how these breakpoints work.
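If you come from the web side, the behavior you just observed maps directly onto a CSS media query. Purely as an illustration (this is hand-written CSS, not Captivate's actual generated code), a 768 px breakpoint looks like this:

/* Applied while the viewport is 768 px wide or narrower */
@media screen and (max-width: 768px) {
    .slide {
        width: 768px;
    }
}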
Before moving on to the next section, let's take some time to summarize the important concepts uncovered in this section:

- The aim of responsive design is to provide an optimized viewing experience across a wide range of devices and form factors.
- To achieve this goal, responsive design uses fluid sizing and positioning techniques, responsive images, and breakpoints.
- Responsive design is not limited to eLearning courses made in Captivate, but is widely used in web and app design by thousands of designers around the world.
- A Captivate 8 Responsive Project can only be published in HTML5. The capabilities and restrictions of a standard HTML5 project also apply to a Responsive Project.
- A breakpoint defines the exact viewport width at which the layout breaks and another layout is applied.
- The breakpoints, and therefore the optimized layouts, are based on the width of the viewport and not on the detection of an actual device. This explains why the tablet-optimized layout is applied to the downsized browser window on a desktop computer.
- The viewport width and the screen width are two different things.

In the next section, you will start the creation of your very first Responsive Project. To learn more about these concepts, there is a video course on Responsive eLearning with Captivate 8 available on Adobe KnowHow. The course itself is for a fee, but there is a free 15-minute sample that walks you through these concepts using another approach. I suggest you take some time to watch it at https://www.adobeknowhow.com/courselanding/create-responsive-elearning-adobe-captivate-8.

Setting up a Responsive Project

It is now time to open Captivate and set up your first Responsive Project using the following steps:

1. Open Captivate, or close every open file.
2. Switch to the New tab of the Welcome screen.
3. Double-click on the Responsive Project thumbnail. Alternatively, you can also use the File | New Project | Responsive Project menu item.

This action creates a new Responsive Project. Note that the choice between a Responsive Project and a regular Captivate project must be made up front, when creating the project. As of Captivate 8, it is not yet possible to take an existing non-responsive project and make it responsive after the fact.

The workspace of Captivate should be very similar to what you are used to, with the exception of an extra ruler that spans the top of the screen. This ruler contains three predefined breakpoints. The first breakpoint is called the Primary breakpoint and is situated at 1024 pixels. Also, note that the breakpoint ruler is green when the Primary breakpoint is selected. You will now discover the other two breakpoints using the following steps:

1. In the breakpoint ruler, click on the icon of a tablet to select the second breakpoint. The stage and all the elements it contains are resized. In the breakpoint ruler at the top of the stage, the second breakpoint is now selected. It is called the Tablet breakpoint and is situated at 768 pixels. Note the blue color associated with the Tablet breakpoint.
2. In the breakpoint ruler, click on the icon of a smartphone to select the third and last breakpoint. Once again, the stage and the elements it contains are resized. This third breakpoint is called the Mobile breakpoint and is situated at 360 pixels. The orange color is associated with it.

Adjusting the breakpoints

In some situations, the default location of these three breakpoints works just fine. But in other situations, some adjustments are needed.
In this project, you want to target the regular screen of a desktop or laptop computer in the Primary view, an iPad mini held in portrait in the Tablet view, and an iPhone 4 held in portrait in the Mobile view. You will now adjust the breakpoints to fit these particular specifications using the following steps:

1. Click on the Primary breakpoint in the breakpoints ruler to select it.
2. Use your mouse to move the breakpoint all the way to the left. Captivate should stop at a width of 1280 pixels; it is not possible to have a stage wider than 1280 pixels in a Responsive Project. For this project, the default width of 1024 pixels is perfect, so you will now move this breakpoint back to its original location.
3. Move the Primary breakpoint to the right until it is placed at 1024 pixels.
4. Return to your web browser and browse to http://www.viewportsizes.com. Once on the website, type iPad in the Filter field at the top of the page. The portrait width of an iPad mini is 768 pixels. In Captivate, the Tablet breakpoint is placed at 768 pixels by default, which is perfectly fine for the needs of this project.
5. Still on the http://www.viewportsizes.com website, type iPhone in the Filter field at the top of the page. The portrait width of an iPhone 4 is 320 pixels. In Captivate, the Mobile breakpoint is placed at 360 pixels by default, so you will now move it to 320 pixels so that it matches the portrait width of an iPhone 4.
6. Return to Captivate and select the Mobile breakpoint.
7. Move the Mobile breakpoint to the right until it is placed at exactly 320 pixels. Note that the minimum width of the stage in the Mobile breakpoint is 320 pixels; in other words, the stage cannot be narrower than 320 pixels in a Responsive Project.

The viewport size of your device

Before moving on, take some time to inspect the http://viewportsizes.com site a little further. For example, type the name of the devices you own and compare their characteristics to the breakpoints of the current project. Will the project fit on your devices? How would you need to change the breakpoints so that the project perfectly fits your devices?

The breakpoints are now in place, but they only take care of the width of the stage. In the next section, you will adjust the height of the stage in each breakpoint.

Adjusting the slide height

Captivate slides have a fixed height. This is the primary difference between a Captivate project and a regular responsive website, whose page height is infinite. In this section, you will adjust the height of the stage in all three breakpoints. The steps are as follows:

1. Still in Captivate, click on the desktop icon situated on the left side of the breakpoint switcher to return to the Primary view.
2. On the far right of the breakpoint ruler, select the View Device Height checkbox. A yellow border now surrounds the stage in the Primary view, and the slide height is displayed in the top-left corner of the stage.

For the Primary view, a slide height of 627 pixels is perfect. It matches the viewport size of an iPad held in landscape and provides a big enough area on a desktop or laptop computer.

3. Click on the Tablet breakpoint to select it.
4. Return to http://www.viewportsizes.com and type iPad in the filter field at the top of the page. According to the site, the height of an iPad is 1024 pixels.
5. Use your mouse to drag the yellow rectangle situated at the bottom of the stage down until the stage height is around 950 pixels.
You may need to reduce the zoom magnification to perform this action comfortably.

With a height of 950 pixels, the Captivate slide can fit on an iPad screen and still account for the screen real estate consumed by the interface elements of the browser, such as the address bar and the tabs.

6. Still in the Tablet view, make sure the slide is the selected object and open the Properties panel. Note that, at the end of the Properties panel, the Slide Height property is currently unavailable.
7. Click on the chain icon (Unlink from Device height) next to the Slide Height property. By default, the slide height is linked to the device height. By clicking on the chain icon, you break the link between the slide height and the device (or viewport) height. This allows you to modify the height of the Captivate slide without modifying the height of the device.
8. Use the Properties panel to change the Slide Height to 1024 pixels. On the stage, note that the slide is now a little bit higher than the yellow rectangle. This means that this particular slide will generate a vertical scrollbar on a tablet device held in portrait. Scrolling is something you want to avoid as much as possible, so you will now re-enable the link between the device height and the Slide Height.
9. In the Properties panel, click on the chain icon next to the Slide Height property to enable the link. The slide height is automatically readjusted to the device height of 950 pixels.
10. Use the breakpoint ruler to select the Mobile breakpoint. By default, the device height in the Mobile breakpoint is set to 415 pixels. According to the http://www.viewportsizes.com website, the screen of an iPhone 4 has a height of 480 pixels. A slide height of 415 pixels is perfect to accommodate the slide itself plus the interface elements of the mobile browser.

Summary

In this article, you learned the key concepts and techniques used to create a responsive project in Captivate 8.

Resources for Article:

Further resources on this subject:
- Publishing the project for mobile [article]
- Getting Started with Adobe Premiere Pro CS6 Hotshot [article]
- Creating Motion Through the Timeline [article]