Static Data Management

Packt
25 Feb 2015
29 min read
In this article by Loiane Groner, author of the book Mastering Ext JS, Second Edition, we will start implementing the application's core features, beginning with static data management. What exactly is this? Every application has two types of data: static data and dynamic data. Static data is information that is not directly related to the core business but is used by the core business logic somehow; for example, categories, languages, cities, and countries can exist independently of the core business and still be used by the core business information. We call it static data because it does not change very often. Dynamic data is the information that changes in the application, which we call core business data; clients, orders, and sales are examples of dynamic, or core business, data. We can treat this static information as independent MySQL tables (since we are using MySQL as the database server), and we can perform on them all the actions we can perform on any MySQL table.

Creating a Model

As usual, we are going to start by creating the Models. First, let's list the tables we will be working with and their columns:

Actor: actor_id, first_name, last_name, last_update
Category: category_id, name, last_update
Language: language_id, name, last_update
City: city_id, city, country_id, last_update
Country: country_id, country, last_update

We could create one Model for each of these entities with no problem at all; however, we want to reuse as much code as possible. Take another look at the list of tables and their columns, and notice that all of them have one column in common: the last_update column. That being said, we can create a super Model that contains this field.
When we implement the Actor and Category models, we can extend the super Model, in which case we do not need to declare the column again.

Abstract Model

In OOP there is a concept called inheritance, which is a way to reuse the code of existing objects. Ext JS uses an OOP approach, so we can apply the same concept in Ext JS applications. If you take a look back at the code we have already implemented, you will notice that we are already applying inheritance in most of our classes (with the exception of the util package), but so far we have only created classes that inherit from Ext JS classes. Now, we will start creating our own super classes.

As all the models we will be working with have the last_update column in common (if you take a look, all the Sakila tables have this column), we can create a super Model with this field. So, we will create a new file under app/model/staticData named Base.js:

Ext.define('Packt.model.staticData.Base', {
    extend: 'Packt.model.Base', //#1

    fields: [
        {
            name: 'last_update',
            type: 'date',
            dateFormat: 'Y-m-j H:i:s'
        }
    ]
});

This Model has only one column, last_update. On the tables, the last_update column has the type timestamp, so the type of the field needs to be date, and we also apply the date format 'Y-m-j H:i:s', which is years, months, days, hours, minutes, and seconds, following the same format we have in the database (2006-02-15 04:34:33). When we create each Model representing a table, we will not need to declare the last_update field again.

Look again at the code at line #1. We are not extending the default Ext.data.Model class, but another Base class (Packt.model.Base), which we will adapt next.
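Since the dateFormat string 'Y-m-j H:i:s' simply describes how the database timestamp is laid out, we can sketch in plain JavaScript (outside Ext JS, with a hypothetical helper name) how such a value maps to a Date object:

```javascript
// Hypothetical sketch (not Ext JS's actual date parser): turn a
// 'Y-m-j H:i:s' formatted string such as '2006-02-15 04:34:33'
// into a JavaScript Date.
function parseLastUpdate(value) {
  // Split into the date part and the time part.
  var parts = value.split(' ');
  var date = parts[0].split('-').map(Number); // [2006, 2, 15]
  var time = parts[1].split(':').map(Number); // [4, 34, 33]
  // JavaScript Date months are 0-based, hence date[1] - 1.
  return new Date(date[0], date[1] - 1, date[2], time[0], time[1], time[2]);
}

var d = parseLastUpdate('2006-02-15 04:34:33');
console.log(d.getFullYear(), d.getMonth() + 1, d.getDate()); // 2006 2 15
```

In the real application, Ext JS performs this conversion automatically whenever the Model reads the last_update field from the server response.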
Adapting the Base Model schema

Create a file named Base.js inside the app/model folder with the following content:

Ext.define('Packt.model.Base', {
    extend: 'Ext.data.Model',

    requires: [
        'Packt.util.Util'
    ],

    schema: {
        namespace: 'Packt.model', //#1
        urlPrefix: 'php',
        proxy: {
            type: 'ajax',
            api: {
                read: '{prefix}/{entityName:lowercase}/list.php',
                create: '{prefix}/{entityName:lowercase}/create.php',
                update: '{prefix}/{entityName:lowercase}/update.php',
                destroy: '{prefix}/{entityName:lowercase}/destroy.php'
            },
            reader: {
                type: 'json',
                rootProperty: 'data'
            },
            writer: {
                type: 'json',
                writeAllFields: true,
                encode: true,
                rootProperty: 'data',
                allowSingle: false
            },
            listeners: {
                exception: function(proxy, response, operation) {
                    Packt.util.Util.showErrorMsg(response.responseText);
                }
            }
        }
    }
});

Instead of using the Packt.model.security namespace, we are going to use only Packt.model. The Packt.model.security.Base class will now look simpler, as follows:

Ext.define('Packt.model.security.Base', {
    extend: 'Packt.model.Base',

    idProperty: 'id',

    fields: [
        { name: 'id', type: 'int' }
    ]
});

It is very similar to the staticData.Base Model we are creating for this article. The difference is in the field that is common to each package: last_update for the staticData package and id for the security package. Having a single schema for the application now means that the entityName of each Model will be derived from its class name after 'Packt.model'.
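To see how the schema's proxy template turns into concrete URLs, here is a minimal plain-JavaScript sketch of the placeholder expansion (the real work is done internally by Ext JS; compileUrl is a hypothetical helper used only for illustration):

```javascript
// Hypothetical sketch of the '{prefix}' and '{entityName:lowercase}'
// placeholder expansion performed by the schema's url templates.
function compileUrl(template, prefix, entityName) {
  return template
    .replace('{prefix}', prefix)
    .replace('{entityName:lowercase}', entityName.toLowerCase());
}

// With urlPrefix 'php' and entityName 'Actor', the read url becomes:
console.log(compileUrl('{prefix}/{entityName:lowercase}/list.php', 'php', 'Actor'));
// php/actor/list.php
```

This is why declaring the proxy once in the schema is enough: each Model's entityName fills in the template, so every entity automatically gets its own set of CRUD endpoints following the same server-side pattern.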
This means that the User and Group models we created earlier will have the entity names security.User and security.Group, respectively. However, we do not want to break the code we have already implemented, so we want the User and Group Model classes to keep the entity names User and Group. We can do this by adding entityName: 'User' to the User Model and entityName: 'Group' to the Group Model. We will do the same for the specific models we create next.

Having a super Base Model for all models within the application means our models will follow a pattern. The proxy template is also common to all models, which means our server-side code will follow a pattern as well. This is good for organizing the application and for future maintenance.

Specific models

Now we can create the models representing each table. Let's start with the Actor Model. We will create a new class named Packt.model.staticData.Actor; therefore, we need to create a new file named Actor.js under app/model/staticData, as follows:

Ext.define('Packt.model.staticData.Actor', {
    extend: 'Packt.model.staticData.Base', //#1

    entityName: 'Actor', //#2

    idProperty: 'actor_id', //#3

    fields: [
        { name: 'actor_id' },
        { name: 'first_name' },
        { name: 'last_name' }
    ]
});

There are three important things to note in the preceding code. First, this Model extends (#1) the Packt.model.staticData.Base class, which extends the Packt.model.Base class, which in turn extends the Ext.data.Model class. This means this Model inherits all the attributes and behavior of Packt.model.staticData.Base, Packt.model.Base, and Ext.data.Model. Second, as we created a super Model with the Packt.model schema, the default entityName created for this Model would be staticData.Actor. Since the proxy uses entityName to compile its url template, and to make our lives easier, we override entityName as well (#2).
The third point is idProperty (#3). By default, idProperty has the value "id". This means that when we declare a Model with a field named "id", Ext JS already knows that it is the unique field of the Model. When the unique field has a different name, we need to specify it using the idProperty configuration. As none of the Sakila tables have a unique field called "id" (it is always the name of the entity + "_id"), we will need to declare this configuration in all models.

Now we can do the same for the other models. We need to create four more classes:

Packt.model.staticData.Category
Packt.model.staticData.Language
Packt.model.staticData.City
Packt.model.staticData.Country

At the end, we will have six Model classes (one super Model and five specific models) inside the app/model/staticData package. If we create a UML class diagram for the Model classes, we will have the following structure: the Actor, Category, Language, City, and Country Models extend the Packt.model.staticData.Base Model, which extends Packt.model.Base, which in turn extends the Ext.data.Model class.

Creating a Store

The next step is to create the stores for each Model. As we did with the Models, we will try to create a generic Store as well (in this article we will create generic code for all the screens, so creating a super Model, Store, and View is part of that capability). Although the common configurations are not in the Store but in the Proxy (which we declared inside the schema in the Packt.model.Base class), having a super Store class can help us listen to events that are common to all the static data stores. We will create a super Store named Packt.store.staticData.Base. As we need a Store for each Model, we will create the following stores:

Packt.store.staticData.Actors
Packt.store.staticData.Categories
Packt.store.staticData.Languages
Packt.store.staticData.Cities
Packt.store.staticData.Countries

At the end of this topic, we will have created all the previous classes.
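The Sakila primary-key convention described above (entity name plus "_id") is regular enough to capture in a one-line helper; sakilaIdProperty below is a hypothetical illustration, not part of the application code:

```javascript
// Hypothetical helper: every Sakila table names its primary key
// '<entity>_id', which is why each Model must declare idProperty
// explicitly instead of relying on the default "id".
function sakilaIdProperty(entityName) {
  return entityName.toLowerCase() + '_id';
}

var ids = ['Actor', 'Category', 'Language', 'City', 'Country'].map(sakilaIdProperty);
console.log(ids);
// [ 'actor_id', 'category_id', 'language_id', 'city_id', 'country_id' ]
```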
If we create a UML diagram for them, we will have something like the following: all the Store classes extend the Base Store. Now that we know what we need to create, let's get our hands dirty!

Abstract Store

The first class we need to create is the Packt.store.staticData.Base class. Inside this class, we will only declare autoLoad as true so that all the subclasses of this Store are loaded when the application launches:

Ext.define('Packt.store.staticData.Base', {
    extend: 'Ext.data.Store',

    autoLoad: true
});

All the specific stores that we create will extend this Store. Creating a super Store like this can feel pointless; however, we never know whether, during future maintenance, we will need to add some common Store configuration. As we will use MVC for this module, another reason is that inside the Controller we can also listen to Store events (available since Ext JS 4.2). If we want to listen to the same event on a set of stores and execute exactly the same method, having a super Store will save us some lines of code.

Specific stores

Our next step is to implement the Actors, Categories, Languages, Cities, and Countries stores. Let's start with the Actors Store:

Ext.define('Packt.store.staticData.Actors', {
    extend: 'Packt.store.staticData.Base', //#1

    model: 'Packt.model.staticData.Actor' //#2
});

After the definition of the Store, we need to extend from the Ext JS Store class. As we are using a super Store, we can extend directly from it (#1), which means extending from the Packt.store.staticData.Base class. Next, we need to declare the fields or the Model that this Store represents. In our case, we always declare the Model (#2). Using a Model inside the Store is good for reuse. The fields configuration is recommended only when we need a very specific Store with specific data that we are not planning to reuse throughout the application, as in a chart or a report.
For the other stores, the only things that change are the name of the Store and the Model. However, if you need the code to compare with yours, or simply want the complete source code, you can download the code bundle for this book or get it at https://github.com/loiane/masteringextjs.

Creating an abstract GridPanel for reuse

Now it is time to implement the views. We have to implement five of them: one to perform the CRUD operations for Actor, one for Category, one for Language, one for City, and one for Country. The following screenshot represents the final result we want to achieve after implementing the Actors screen, and the next one represents the final result we want to achieve after implementing the Categories screen. Did you notice anything similar between these two screens? Let's take a look again: the top toolbar is the same (1); there is a Live Search capability (2); there is a filter plugin (4); and the Last Update and widget columns are also common (3). Going a little bit further, both GridPanels can be edited using a cell editor (similar to MS Excel, where you can edit a single cell by clicking on it). The only things that differ between these two screens are the columns specific to each screen (5).

Does this mean we can reuse a good part of the code if we use inheritance and create a super GridPanel with all these common capabilities? Yes! So this is what we are going to do. Let's create a new class named Packt.view.staticData.BaseGrid, as follows:

Ext.define('Packt.view.staticData.BaseGrid', {
    extend: 'Ext.ux.LiveSearchGridPanel', //#1
    xtype: 'staticdatagrid',

    requires: [
        'Packt.util.Glyphs' //#2
    ],

    columnLines: true,    //#3
    viewConfig: {
        stripeRows: true //#4
    },

    //more code here
});

We extend the Ext.ux.LiveSearchGridPanel class instead of Ext.grid.Panel (#1).
The Ext.ux.LiveSearchGridPanel class already extends the Ext.grid.Panel class and also adds the Live Search toolbar. The LiveSearchGridPanel class is a plugin that is distributed with the Ext JS SDK, so we do not need to worry about adding it manually to our project (you will learn how to add third-party plugins to the project later in this book). As we will also add a toolbar with Add, Save Changes, and Cancel Changes buttons, we need to require the util.Glyphs class we created earlier (#2). Configuration #3 shows the border of each cell of the grid, and #4 alternates the rows between a white background and a light gray background. As with any other component responsible for displaying information in Ext JS, the GridPanel itself is only the shell; the View is responsible for displaying the columns, and we can customize it using the viewConfig (#4).

The next step is to create an initComponent method.

To initComponent or not?

While browsing other developers' code, we might see some who use initComponent when declaring an Ext JS class and some who do not (as we have done until now). So what is the difference? When declaring an Ext JS class, we usually configure it according to the application's needs. The class might become a parent class for other classes or not. If it becomes a parent class, some of the configurations will be overridden, while some will not. Usually, we declare the ones we expect to be overridden as class configurations, and we declare the ones we do not want to be overridden inside the initComponent method.
As there are a few configurations we do not want to be overridden, we will declare them inside initComponent, as follows:

initComponent: function() {
    var me = this;

    me.selModel = {
        selType: 'cellmodel' //#5
    };

    me.plugins = [
        {
            ptype: 'cellediting',  //#6
            clicksToEdit: 1,
            pluginId: 'cellplugin'
        },
        {
            ptype: 'gridfilters'  //#7
        }
    ];

    //docked items

    //columns

    me.callParent(arguments); //#8
}

We can define how the user selects information from the GridPanel: the default is the row selection model. As we want the user to be able to edit cell by cell, we will use the cell selection model (#5) and also the CellEditing plugin (#6), which is part of the Ext JS SDK. For the CellEditing plugin, we configure the cell to become editable when the user clicks on it (if we need the user to double-click, we can change it to clicksToEdit: 2). To help us later in the Controller, we also assign an ID to this plugin. To be able to filter the information (the Live Search only highlights matching records), we will use the Filters plugin (#7), which is also part of the Ext JS SDK. The callParent method (#8) calls initComponent from the superclass Ext.ux.LiveSearchGridPanel, passing the arguments we defined. It is a common mistake to forget the callParent call when overriding the initComponent method; if the component does not work, make sure you are calling callParent!

Next, we are going to declare dockedItems.
As all GridPanels will have the same toolbar, we can declare dockedItems in the super class we are creating, as follows:

me.dockedItems = [
    {
        xtype: 'toolbar',
        dock: 'top',
        itemId: 'topToolbar', //#9
        items: [
            {
                xtype: 'button',
                itemId: 'add', //#10
                text: 'Add',
                glyph: Packt.util.Glyphs.getGlyph('add')
            },
            {
                xtype: 'tbseparator'
            },
            {
                xtype: 'button',
                itemId: 'save',
                text: 'Save Changes',
                glyph: Packt.util.Glyphs.getGlyph('saveAll')
            },
            {
                xtype: 'button',
                itemId: 'cancel',
                text: 'Cancel Changes',
                glyph: Packt.util.Glyphs.getGlyph('cancel')
            },
            {
                xtype: 'tbseparator'
            },
            {
                xtype: 'button',
                itemId: 'clearFilter',
                text: 'Clear Filters',
                glyph: Packt.util.Glyphs.getGlyph('clearFilter')
            }
        ]
    }
];

We will have Add, Save Changes, Cancel Changes, and Clear Filters buttons. Note that the toolbar (#9) and each of the buttons (#10) have an itemId declared. As we are going to use the MVC approach in this example, we will declare a Controller; the itemId configuration has a responsibility similar to the reference we declare when working with a ViewController. We will discuss the importance of itemId further when we declare the Controller. When declaring buttons inside a toolbar, we can omit the xtype: 'button' configuration, since button is the default component for toolbars.
Inside the Glyphs class, we need to add the following attributes inside its config:

saveAll: 'xf0c7',
clearFilter: 'xf0b0'

And finally, we will add the two columns that are common to all the screens (the Last Update column and the delete Widget Column (#13)) to the columns already declared in each specific GridPanel:

me.columns = Ext.Array.merge( //#11
    me.columns,               //#12
    [{
        xtype    : 'datecolumn',
        text     : 'Last Update',
        width    : 150,
        dataIndex: 'last_update',
        format: 'Y-m-j H:i:s',
        filter: true
    },
    {
        xtype: 'widgetcolumn', //#13
        width: 45,
        sortable: false,       //#14
        menuDisabled: true,    //#15
        itemId: 'delete',
        widget: {
            xtype: 'button',   //#16
            glyph: Packt.util.Glyphs.getGlyph('destroy'),
            tooltip: 'Delete',
            scope: me,                //#17
            handler: function(btn) {  //#18
                me.fireEvent('widgetclick', me, btn);
            }
        }
    }]
);

In the preceding code, we merge (#11) me.columns (#12) with two other columns and assign the result to me.columns again. We want all child grids to have these two columns plus the specific columns of each child grid. If the columns configuration of the BaseGrid class were declared outside initComponent, a child class declaring its own columns configuration would simply override it; declared inside initComponent, a child class could not add its own columns at all. So we merge the two configurations (the columns from the child class (#12) with the two columns we want every child class to have). For the delete button, we are going to use a Widget Column (#13), introduced in Ext JS 5. Until Ext JS 4, the only way to have a button inside a Grid Column was to use an Action Column. Here, we use a button (#16) as the Widget Column's widget.
Because it is a Widget Column, there is no reason to make this column sortable (#14), and we can also disable its menu (#15).

Specific GridPanels for each table

Our last stop before we implement the Controller is the specific GridPanels. We have already created the super GridPanel that contains most of the capabilities we need. Now we just need to declare the specific configurations for each GridPanel. We will create five GridPanels that extend the Packt.view.staticData.BaseGrid class, as follows:

Packt.view.staticData.Actors
Packt.view.staticData.Categories
Packt.view.staticData.Languages
Packt.view.staticData.Cities
Packt.view.staticData.Countries

Let's start with the Actors GridPanel, as follows:

Ext.define('Packt.view.staticData.Actors', {
    extend: 'Packt.view.staticData.BaseGrid',
    xtype: 'actorsgrid',        //#1

    store: 'staticData.Actors', //#2

    columns: [
        {
            text: 'Actor Id',
            width: 100,
            dataIndex: 'actor_id',
            filter: {
                type: 'numeric'   //#3
            }
        },
        {
            text: 'First Name',
            flex: 1,
            dataIndex: 'first_name',
            editor: {
                allowBlank: false, //#4
                maxLength: 45      //#5
            },
            filter: {
                type: 'string'     //#6
            }
        },
        {
            text: 'Last Name',
            width: 200,
            dataIndex: 'last_name',
            editor: {
                allowBlank: false, //#7
                maxLength: 45      //#8
            },
            filter: {
                type: 'string'     //#9
            }
        }
    ]
});

Each specific class has its own xtype (#1).
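With the merge performed in the BaseGrid's initComponent, the Actors grid above ends up with its three specific columns plus the two shared ones. A plain-JavaScript sketch of that merge (Ext.Array.merge performs a union; for distinct column objects, a simple concatenation gives the same result):

```javascript
// Simplified stand-in for the Ext.Array.merge call in the BaseGrid:
// child-grid columns first, then the columns shared by every grid.
function mergeColumns(childColumns, commonColumns) {
  return childColumns.concat(commonColumns);
}

var actorColumns = [
  { text: 'Actor Id' }, { text: 'First Name' }, { text: 'Last Name' }
];
var commonColumns = [
  { text: 'Last Update' }, { itemId: 'delete', text: 'Delete' }
];

var merged = mergeColumns(actorColumns, commonColumns);
console.log(merged.map(function (c) { return c.text; }));
// [ 'Actor Id', 'First Name', 'Last Name', 'Last Update', 'Delete' ]
```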
We also need to execute UPDATE queries in the database to update the menu table with the new xtypes we are creating:

UPDATE `sakila`.`menu` SET `className`='actorsgrid' WHERE `id`='5';
UPDATE `sakila`.`menu` SET `className`='categoriesgrid' WHERE `id`='6';
UPDATE `sakila`.`menu` SET `className`='languagesgrid' WHERE `id`='7';
UPDATE `sakila`.`menu` SET `className`='citiesgrid' WHERE `id`='8';
UPDATE `sakila`.`menu` SET `className`='countriesgrid' WHERE `id`='9';

The first declaration specific to the Actors GridPanel is the Store (#2). We are going to use the Actors Store. Because the Actors Store is inside the staticData folder (store/staticData), we also need to pass the name of the subfolder; otherwise, Ext JS will assume the Store file is inside the app/store folder, which is not the case. Then we need to declare the columns specific to the Actors GridPanel (we do not need to declare the Last Update column or the delete Widget Column because they are already in the super GridPanel).

What you need to pay attention to now are the editor and filter configurations of each column. The editor is for editing (the cellediting plugin); we apply it only to the columns we want the user to be able to edit. The filter (the filters plugin) is applied to the columns we want the user to be able to filter information by. For example, for the actor_id column, we do not want the user to be able to edit it, as it is a sequence provided by the MySQL database's auto increment, so we do not apply the editor configuration to it. However, the user can still filter the information based on the ID, so we apply the filter configuration (#3). We want the user to be able to edit the other two columns, first_name and last_name, so we add the editor configuration to them. We can also perform client-side validations, as we would on a form field.
For example, we want both fields to be mandatory (#4 and #7), and the maximum number of characters the user can enter is 45 (#5 and #8). Finally, as both columns render text values (strings), we also apply a string filter (#6 and #9). For other filter types, please refer to the Ext JS documentation, which provides examples and more configuration options. And that is it! The super GridPanel provides all the other capabilities.

Summary

In this article, we covered how to implement screens that look very similar to the MySQL Table Editor. The most important concept we covered is implementing abstract classes using the inheritance concept from OOP. We are used to applying these concepts in server-side languages such as PHP, Java, .NET, and so on. This article demonstrated that it is also important to use them on the Ext JS side; this way, we can reuse a lot of code and implement generic code that provides the same capability to more than one screen. We created a Base Model and Store, used a Live Search GridPanel and the Filters plugin, and learned how to perform CRUD operations using the Store capabilities.

Resources for Article:

Further resources on this subject:

So, What Is Ext JS? [article]
The Login Page Using Ext JS [article]
Improving Code Quality [article]
Introducing Web Application Development in Rails

Packt
25 Feb 2015
8 min read
In this article by Syed Fazle Rahman, author of the book Bootstrap for Rails, we will learn how to present your application in the best possible way, which has been the most important concern for every web developer for ages. In this mobile-first generation, we are forced to go with the wind and make our applications compatible with mobiles, tablets, PCs, and every possible display on Earth. Bootstrap is the one-stop solution for the woes that developers have been facing. It creates beautiful responsive designs without any extra effort and without any advanced CSS knowledge. It is a true boon for every developer.

We will be focusing on how to beautify our Rails applications with the help of Bootstrap. We will create a basic Todo application with Rails. We will explore the folder structure of a Rails application and analyze which folders are important for templating a Rails application. This will be helpful if you want to quickly revisit Rails concepts. We will also see how to create views, link them, and style them. The styling in this article will be done traditionally through the application's default CSS files. Finally, we will discuss how we can speed up the designing process using Bootstrap. In short, we will cover the following topics:

Why Bootstrap with Rails?
Setting up a Todo application in Rails
Analyzing the folder structure of a Rails application
Creating views
Styling views using CSS
Challenges in traditionally styling a Rails application

Why Bootstrap with Rails?

Rails is one of the most popular Ruby frameworks, currently at its peak both in terms of demand and technology trends. With more than 3,100 members contributing to its development, and tens of thousands of applications already built with it, Rails has set a standard for every other web framework today. Rails was initially developed by David Heinemeier Hansson in 2003 to ease his own development process in Ruby.
Later, he was generous enough to release Rails to the open source community. Today, it is popularly known as Ruby on Rails. Rails shortens the development life cycle by moving the focus from reinventing the wheel to innovating new features. It is based on the convention over configuration principle, which means that if you follow the Rails conventions, you end up writing much less code than you otherwise would.

Bootstrap, on the other hand, is one of the most popular frontend development frameworks. It was initially developed at Twitter for some of its internal projects. It makes the life of a novice web developer easier by providing most of the reusable components already built and ready to use. Bootstrap can be easily integrated into a Rails development environment through various methods: we can directly use the .css files provided by the framework, or we can extend it through its Sass version and let Rails compile it. Sass is a CSS preprocessor that brings logic and functionality into CSS; it includes features like variables, functions, mixins, and others. Using the Sass version of Bootstrap is the recommended method in Rails, as it gives various options to customize Bootstrap's default styles easily. Bootstrap also provides various JavaScript components that can be used by those who don't have any real JavaScript knowledge. These components are required in almost every modern website being built today. Bootstrap with Rails is a deadly combination: you can build applications faster and invest more time in thinking about functionality rather than rewriting code.

Setting up a Todo application in Rails

I assume that you already have basic knowledge of Rails development. You should also have Rails and Ruby installed on your machine to start with. Let's first understand what this Todo application will do. Our application will allow us to create, update, and delete items from a Todo list.
We will first analyze the folders that are created while scaffolding this application and see which of them are necessary for templating the application. So, let's dip our feet into the water:

First, we need to select our workspace, which can be any folder on your system. Let's create a folder named Bootstrap_Rails_Project. Now, open the terminal and navigate to this folder. It's time to create our Todo application. Write the following command to create a Rails application named TODO:

rails new TODO

This command will execute a series of other commands that are necessary to create a Rails application, so just wait for some time until it finishes. If you are using a newer version of Rails, this command will also execute the bundle install command at the end, which installs the other dependencies. Now, you should have a new folder inside Bootstrap_Rails_Project named TODO, which was created by the preceding command.

Analyzing the folder structure of a Rails application

Let's navigate to the TODO folder to see what our application's folder structure looks like. Let me explain some of the important folders here:

The first one is the app folder. The assets folder inside the app folder is the location to store all the static files like JavaScript, CSS, and images. You can take a sneak peek inside them to look at the various files. The controllers folder handles the various requests and responses of the browser. The helpers folder contains various helper methods, both for the views and the controllers. The next folder, mailers, contains all the files necessary to send e-mails. The models folder contains files that interact with the database. Finally, we have the views folder, which contains all the .erb files that will be compiled to HTML files.
So, let's start the Rails server and check out our application in the browser: Navigate to the TODO folder in the terminal and then type the following command to start a server: rails server You can also use the following command: rails s You will see that the server is running on port 3000. So, type the following URL to view the application: http://localhost:3000. You can also use the following URL: http://0.0.0.0:3000. If your application is properly set up, you should see the default page of Rails in the browser: Creating views We will be using Rails' scaffold method to create the models, views, and other necessary files that Rails needs to make our application live. Here's the set of tasks that our application should perform: It should list out the pending items Every task should be clickable, and the details related to that item should be seen in a new view We can edit that item's description and some other details We can delete that item The task list looks pretty lengthy, but any Rails developer would know how easy it is to do. We don't actually have to do much to achieve it. We just have to pass a single scaffold command, and the rest will be taken care of. Close the Rails server using the Ctrl + C keys and then proceed as follows: First, navigate to the project folder in the terminal. Then, pass the following command: rails g scaffold todo title:string description:text completed:boolean This will create a new model called todo that has various fields such as title, description, and completed. Each field has a type associated with it. Since we have created a new model, it has to be reflected in the database. So, let's migrate it: rake db:create db:migrate The preceding command will create a new database and a new table with the associated fields. Let's analyze what we have done. The scaffold command has created many HTML pages, or views, that are needed for managing the todo model. So, let's check out our application. 
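The scaffold command above defines the shape of a todo record: a string title, a text description, and a boolean completed flag. As a rough illustration only (this is plain Ruby, not the ActiveRecord model Rails actually generates, and the sample values are mine), the record shape can be sketched like this:

```ruby
# Plain-Ruby sketch of the record shape declared by
# `rails g scaffold todo title:string description:text completed:boolean`.
# In the real application, persistence is handled by ActiveRecord.
Todo = Struct.new(:title, :description, :completed, keyword_init: true)

todo = Todo.new(title: "First task",
                description: "Try out scaffolding",
                completed: false)
todo.completed  # => false
```

In the generated Rails model, these same attributes become database columns via the migration that `rake db:migrate` applies.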
We need to start our server again: rails s Go to http://localhost:3000. You should still see Rails' default page. Now, type the URL http://localhost:3000/todos. You should now see the application, as shown in the following screenshot: Click on New Todo, and you will be taken to a form that allows you to fill out the various fields that we created earlier. Let's create our first todo and submit the form. It will be shown on the listing page: It was easy, wasn't it? We have barely done anything yet. That's the power of Rails, which people are crazy about. Summary This article briefed you on how to develop and design a simple Rails application without the help of any CSS frontend frameworks. We manually styled the application by creating an external CSS file, styles.css, and importing it into the application using another CSS file, application.css. We also discussed the complexities that a novice web designer might face on directly styling the application. Resources for Article: Further resources on this subject: Deep Customization of Bootstrap [article] The Bootstrap grid system [article] Getting Started with Bootstrap [article]
Packt
24 Feb 2015
11 min read

URL Routing and Template Rendering

In this article by Ryan Baldwin, the author of Clojure Web Development Essentials, we will start building our application, creating actual endpoints that process HTTP requests and return something we can look at. We will: Learn what the Compojure routing library is and how it works Build our own Compojure routes to handle an incoming request What this chapter won't cover, however, is making any of our HTML pretty, client-side frameworks, or JavaScript. Our goal is to understand the server-side/Clojure components and get up and running as quickly as possible. As a result, our templates are going to look pretty basic, if not downright embarrassing. (For more resources related to this topic, see here.) What is Compojure? Compojure is a small, simple library that allows us to create specific request handlers for specific URLs and HTTP methods. In other words, "HTTP method A requesting URL B will execute Clojure function C." By allowing us to do this, we can create our application in a sane way (URL-driven), and thus architect our code in some meaningful way. For the studious among us, the Compojure docs can be found at https://github.com/weavejester/compojure/wiki. Creating a Compojure route Let's do an example that will allow the awful-sounding tech jargon to make sense. We will create an extremely basic route, which will simply print out the original request map to the screen. Let's perform the following steps: Open the home.clj file. Alter the home-routes defroutes such that it looks like this: (defroutes home-routes   (GET "/" [] (home-page))   (GET "/about" [] (about-page))   (ANY "/req" request (str request))) Start the Ring Server if it's not already started. Navigate to http://localhost:3000/req. It's possible that your Ring Server will be serving off a port other than 3000. Check the output of lein ring server for the serving port if you're unable to connect to the URL listed in step 4. 
You should see something like this: Using defroutes Before we dive too much into the anatomy of the routes, we should speak briefly about what defroutes is. The defroutes macro packages up all of the routes and creates one big Ring handler out of them. Of course, you don't need to define all the routes for an application under a single defroutes macro. You can, and should, spread them out across various namespaces and then incorporate them into the app in Luminus' handler namespace. Before we start making a bunch of example routes, let's move the route we've already created to its own namespace: Create a new namespace hipstr.routes.test-routes (/hipstr/routes/test_routes.clj) . Ensure that the namespace makes use of the Compojure library: (ns hipstr.routes.test-routes   (:require [compojure.core :refer :all])) Next, use the defroutes macro and create a new set of routes, and move the /req route we created in the hipstr.routes.home namespace under it: (defroutes test-routes   (ANY "/req" request (str request))) Incorporate the new test-routes route into our application handler. In hipstr.handler, perform the following steps: Add a requirement to the hipstr.routes.test-routes namespace: (:require [compojure.core :refer [defroutes]]   [hipstr.routes.home :refer [home-routes]]   [hipstr.routes.test-routes :refer [test-routes]]   …) Finally, add the test-routes route to the list of routes in the call to app-handler: (def app (app-handler   ;; add your application routes here   [home-routes test-routes base-routes] We've now created a new routing namespace. It's with this namespace where we will create the rest of the routing examples. Anatomy of a route So what exactly did we just create? We created a Compojure route, which responds to any HTTP method at /req and returns the result of a called function, in our case a string representation of the original request map. 
Defining the method The first argument of the route defines which HTTP method the route will respond to; our route uses the ANY macro, which means our route will respond to any HTTP method. Alternatively, we could have restricted which HTTP methods the route responds to by specifying a method-specific macro. The compojure.core namespace provides macros for GET, POST, PUT, DELETE, HEAD, OPTIONS, and PATCH. Let's change our route to respond only to requests made using the GET method: (GET "/req" request (str request)) When you refresh your browser, the entire request map is printed to the screen, as we'd expect. However, if the URL and the method used to make the request don't match those defined in our route, the not-found route in hipstr.handler/base-routes is used. We can see this in action by changing our route to listen only to the POST methods: (POST "/req" request (str request)) If you try and refresh the browser again, you'll notice we don't get anything back. In fact, an "HTTP 404: Page Not Found" response is returned to the client. If we POST to the URL from the terminal using curl, we'll get the following expected response: # curl -d {} http://localhost:3000/req {:ssl-client-cert nil, :go-bowling? "YES! NOW!", :cookies {}, :remote-addr "0:0:0:0:0:0:0:1", :params {}, :flash nil, :route-params {}, :headers {"user-agent" "curl/7.37.1", "content-type" "application/x-www-form-urlencoded", "content-length" "2", "accept" "*/*", "host" "localhost:3000"}, :server-port 3000, :content-length 2, :form-params {}, :session/key nil, :query-params {}, :content-type "application/x-www-form-urlencoded", :character-encoding nil, :uri "/req", :server-name "localhost", :query-string nil, :body #<HttpInput org.eclipse.jetty.server.HttpInput@38dea1>, :multipart-params {}, :scheme :http, :request-method :post, :session {}} Defining the URL The second component of the route is the URL on which the route is served. 
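The method-plus-URL matching described above can be illustrated outside Clojure. Here is a minimal, hypothetical Ruby sketch (the names `Route` and `dispatch` are mine, not Compojure's) of the idea: a route fires only when both the HTTP method and the path agree, and anything unmatched falls through to a 404 response:

```ruby
# Hypothetical sketch of Compojure-style method + path matching.
# A route declared for :post does not respond to a GET request,
# mirroring the POST "/req" example above.
Route = Struct.new(:method, :path, :handler)

def dispatch(routes, method, path)
  route = routes.find do |r|
    (r.method == :any || r.method == method) && r.path == path
  end
  route ? route.handler.call : [404, "Page Not Found"]
end

routes = [
  Route.new(:post, "/req", -> { [200, "request map"] }),
]

dispatch(routes, :post, "/req")  # => [200, "request map"]
dispatch(routes, :get,  "/req")  # => [404, "Page Not Found"]
```

The `:any` case mirrors Compojure's ANY macro, which ignores the method entirely.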
This can be anything we want and as long as the request to the URL matches exactly, the route will be invoked. There are, however, two caveats we need to be aware of: Routes are tested in order of their declaration, so order matters. The trailing slash isn't handled well. Compojure will always strip the trailing slash from the incoming request but won't redirect the user to the URL without the trailing slash. As a result an HTTP 404: Page Not Found response is returned. So never base anything off a trailing slash, lest ye peril in an ocean of confusion. Parameter destructuring In our previous example we directly refer to the implicit incoming request and pass that request to the function constructing the response. This works, but it's nasty. Nobody ever said, I love passing around requests and maintaining meaningless code and not leveraging URLs, and if anybody ever did, we don't want to work with them. Thankfully, Compojure has a rather elegant destructuring syntax that's easier to read than Clojure's native destructuring syntax. Let's create a second route that allows us to define a request map key in the URL, then simply prints that value in the response: (GET "/req/:val" [val] (str val)) Compojure's destructuring syntax binds HTTP request parameters to variables of the same name. In the previous syntax, the key :val will be in the request's :params map. Compojure will automatically map the value of {:params {:val...}} to the symbol val in [val]. In the end, you'll get the following output for the URL http://localhost:3000/req/holy-moly-molly: That's pretty slick but what if there is a query string? For example, http://localhost:3000/req/holy-moly-molly!?more=ThatsAHotTomalle. We can simply add the query parameter more to the vector, and Compojure will automatically bring it in: (GET "/req/:val" [val more] (str val "<br>" more)) Destructuring the request What happens if we still need access to the entire request? 
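The binding of `:val` in the URL can be sketched in any language; here is a small, hypothetical Ruby illustration (not Compojure's implementation) of how a pattern like `/req/:val` yields a parameter map from a concrete path:

```ruby
# Hypothetical sketch of URL parameter binding: a ":name" segment in
# the pattern captures the corresponding segment of the request path.
def bind_params(pattern, path)
  keys  = pattern.scan(/:(\w+)/).flatten
  regex = Regexp.new("\\A" + pattern.gsub(/:\w+/, "([^/]+)") + "\\z")
  match = regex.match(path)
  return nil unless match            # structural mismatch: no binding
  keys.zip(match.captures).to_h
end

bind_params("/req/:val", "/req/holy-moly-molly")
# => {"val" => "holy-moly-molly"}
```

Compojure then hands such bound values to the destructuring vector (`[val]`), which is what makes the route bodies so compact.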
It's natural to think we could do this: (GET "/req/:val" [val request] (str val "<br>" request)) However, request will always be nil because it doesn't map back to a parameter key of the same name. In Compojure, we can use the magical :as key: (GET "/req/:val" [val :as request] (str val "<br>" request)) This will now result in request being assigned the entire request map, as shown in the following screenshot: Destructuring unbound parameters Finally, we can bind any remaining unbound parameters into another map using &. Take a look at the following example code: (GET "/req/:val/:another-val/:and-another"   [val & remainders] (str val "<br>" remainders)) Saving the file and navigating to http://localhost:3000/req/holy-moly-molly!/what-about/susie-q will render both val and the map with the remaining unbound keys :another-val and :and-another, as seen in the following screenshot: Constructing the response The last argument in the route is the construction of the response. Whatever the third argument resolves to will be the body of our response. For example, in the following route: (GET "/req/:val" [val] (str val)) The third argument, (str val), will echo whatever the value passed in on the URL is. So far, we've simply been making calls to Clojure's str but we can just as easily call one of our own functions. Let's add another route to our hipstr.routes.test-routes, and write the following function to construct its response: (defn render-request-val [request-map & [request-key]]   "Simply returns the value of request-key in request-map,   if request-key is provided; Otherwise return the request-map.   If request-key is provided, but not found in the request-map,   a message indicating as such will be returned." 
(str (if request-key         (if-let [result ((keyword request-key) request-map)]           result           (str request-key " is not a valid key."))         request-map))) (defroutes test-routes   (POST "/req" request (render-request-val request))   ;no access to the full request map   (GET "/req/:val" [val] (str val))   ;use :as to get access to full request map   (GET "/req/:val" [val :as full-req] (str val "<br>" full-req))   ;use :as to get access to the remainder of unbound symbols   (GET "/req/:val/:another-val/:and-another" [val & remainders]     (str val "<br>" remainders))   ;use & to get access to unbound params, and call our route   ;handler function   (GET "/req/:key" [key :as request]     (render-request-val request key))) Now when we navigate to http://localhost:3000/req/server-port, we'll see the value of the :server-port key in the request map… or wait… we should… what's wrong? If this doesn't seem right, it's because it isn't. Why is our /req/:val route getting executed? As stated earlier, the order of routes is important. Because /req/:val with the GET method is declared earlier, it's the first route to match our request, regardless of whether or not :val is in the HTTP request map's parameters. Routes are matched on URL structure, not on parameters keys. As it stands right now, our /req/:key will never get matched. We'll have to change it as follows: ;use & to get access to unbound params, and call our route handler function (GET "/req/:val/:another-val/:and-another" [val & remainders]   (str val "<br>" remainders))   ;giving the route a different URL from /req/:val will ensure its   execution   (GET "/req/key/:key" [key :as request] (render-request-val   request key))) Now that our /req/key/:key route is logically unique, it will be matched appropriately and render the server-port value to screen. 
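The first-match-wins behavior that bit us here can be demonstrated with a tiny, hypothetical Ruby sketch (again, an illustration of the matching rule, not Compojure's code): routes are tried top to bottom, and matching is structural, so making the second route's URL shape distinct is what fixes the problem.

```ruby
# Hypothetical sketch of declaration-order matching: the first pattern
# whose *structure* fits the path wins, regardless of parameter names.
def first_match(patterns, path)
  patterns.find do |p|
    Regexp.new("\\A" + p.gsub(/:\w+/, "[^/]+") + "\\z").match?(path)
  end
end

# One segment after /req fits "/req/:val", so "/req/key/:key"
# would never be reached for such paths...
first_match(["/req/:val", "/req/key/:key"], "/req/server-port")
# => "/req/:val"

# ...but a structurally distinct URL reaches the later route.
first_match(["/req/:val", "/req/key/:key"], "/req/key/server-port")
# => "/req/key/:key"
```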
Let's try and navigate to http://localhost:3000/req/key/server-port again: Generating complex responses What if we want to create more complex responses? How might we go about doing that? The last thing we want to do is hardcode a whole bunch of HTML into a function, it's not 1995 anymore, after all. This is where the Selmer library comes to the rescue. Summary In this article we have learnt what Compojure is, what a Compojure routing library is and how it works. You have also learnt to build your own Compojure routes to handle an incoming request, within which you learnt how to use defroutes, the anatomy of a route, destructuring parameter and how to define the URL. Resources for Article: Further resources on this subject: Vmware Vcenter Operations Manager Essentials - Introduction To Vcenter Operations Manager [article] Websockets In Wildfly [article] Clojure For Domain-Specific Languages - Design Concepts With Clojure [article]

Packt
20 Feb 2015
26 min read

Aggregators, File exchange Over FTP/FTPS, Social Integration, and Enterprise Messaging

In this article by Chandan Pandey, the author of Spring Integration Essentials, we will explore the out-of-the-box capabilities that the Spring Integration framework provides for a seamless flow of messages across heterogeneous components and see what Spring Integration has in the box when it comes to real-world integration challenges. We will cover Spring Integration's support for external components and we will cover the following topics in detail: Aggregators File exchange over FTP/FTPS Social integration Enterprise messaging (For more resources related to this topic, see here.) Aggregators The aggregators are the opposite of splitters - they combine multiple messages and present them as a single message to the next endpoint. This is a very complex operation, so let's start by a real life scenario. A news channel might have many correspondents who can upload articles and related images. It might happen that the text of the articles arrives much sooner than the associated images - but the article must be sent for publishing only when all relevant images have also arrived. This scenario throws up a lot of challenges; partial articles should be stored somewhere, there should be a way to correlate incoming components with existing ones, and also there should be a way to identify the completion of a message. Aggregators are there to handle all of these aspects - some of the relevant concepts that are used are MessageStore, CorrelationStrategy, and ReleaseStrategy. 
Let's start with a code sample and then we will dive down to explore each of these concepts in detail: <int:aggregator   input-channel="fetchedFeedChannelForAggregatior"   output-channel="aggregatedFeedChannel"   ref="aggregatorSoFeedBean"   method="aggregateAndPublish"   release-strategy="sofeedCompletionStrategyBean"   release-strategy-method="checkCompleteness"   correlation-strategy="soFeedCorrelationStrategyBean"   correlation-strategy-method="groupFeedsBasedOnCategory"   message-store="feedsMySqlStore "   expire-groups-upon-completion="true">   <int:poller fixed-rate="1000"></int:poller> </int:aggregator> Hmm, a pretty big declaration! And why not—a lot of things combine together to act as an aggregator. Let's quickly glance at all the tags used: int:aggregator: This is used to specify the Spring framework's namespace for the aggregator. input-channel: This is the channel from which messages will be consumed. output-channel: This is the channel to which messages will be dropped after aggregation. ref: This is used to specify the bean having the method that is called on the release of messages. method: This is used to specify the method that is invoked when messages are released. release-strategy: This is used to specify the bean having the method that decides whether aggregation is complete or not. release-strategy-method: This is the method having the logic to check for completeness of the message. correlation-strategy: This is used to specify the bean having the method to correlate the messages. correlation-strategy-method: This is the method having the actual logic to correlate the messages. message-store: This is used to specify the message store, where messages are temporarily stored until they have been correlated and are ready to release. This can be in memory (which is default) or can be a persistence store. If a persistence store is configured, message delivery will be resumed across a server crash. 
Java class can be defined as an aggregator and, as described in the previous bullet points, the method and ref parameters decide which method of bean (referred by ref) should be invoked when messages have been aggregated as per CorrelationStrategy and released after fulfilment of ReleaseStrategy. In the following example, we are just printing the messages before passing them on to the next consumer in the chain: public class SoFeedAggregator {   public List<SyndEntry> aggregateAndPublish(List<SyndEntry>     messages) {     //Do some pre-processing before passing on to next channel     return messages;   } } Let's get to the details of the three most important components that complete the aggregator. Correlation strategy Aggregator needs to group the messages—but how will it decide the groups? In simple words, CorrelationStrategy decides how to correlate the messages. The default is based on a header named CORRELATION_ID. All messages having the same value for the CORRELATION_ID header will be put in one bracket. Alternatively, we can designate any Java class and its method to define a custom correlation strategy or can extend Spring Integration framework's CorrelationStrategy interface to define it. If the CorrelationStrategy interface is implemented, then the getCorrelationKey() method should be implemented. Let's see our correlation strategy in the feeds example: public class CorrelationStrategy {   public Object groupFeedsBasedOnCategory(Message<?> message) {     if(message!=null){       SyndEntry entry = (SyndEntry)message.getPayload();       List<SyndCategoryImpl> categories=entry.getCategories();       if(categories!=null&&categories.size()>0){         for (SyndCategoryImpl category: categories) {           //for simplicity, lets consider the first category           return category.getName();         }       }     }     return null;   } } So how are we correlating our messages? We are correlating the feeds based on the category name. 
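The grouping that a correlation strategy performs is language-agnostic, so it can be sketched compactly outside Java. The following hypothetical Ruby illustration (hash-based messages and the `correlation_key` helper are mine, standing in for `groupFeedsBasedOnCategory`) shows messages being bucketed by their first category name:

```ruby
# Hypothetical sketch of a correlation strategy: each message yields a
# correlation key (here, its first category), and messages sharing a
# key end up in the same group, just as the aggregator brackets them.
def correlation_key(message)
  categories = message[:categories] || []
  categories.first  # nil means the message cannot be correlated
end

messages = [
  { title: "A", categories: ["java"] },
  { title: "B", categories: ["java"] },
  { title: "C", categories: ["clojure"] },
]

groups = messages.group_by { |m| correlation_key(m) }
# groups now holds two buckets, keyed "java" and "clojure"
```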
The method must return an object that can be used for correlating the messages. If a user-defined object is returned, it must satisfy the requirements for a key in a map, such as defining hashCode() and equals(). The return value must not be null. Alternatively, if we wanted to implement it by extending framework support, it would look like this: public class CorrelationStrategy implements CorrelationStrategy {   public Object getCorrelationKey(Message<?> message) {     if(message!=null){       …             return category.getName();           }         }       }       return null;     }   } } Release strategy We have been grouping messages based on the correlation strategy, but when will we release them to the next component? This is decided by the release strategy. Similar to the correlation strategy, any Java POJO can define the release strategy, or we can extend framework support. Here is an example using a Java POJO class: public class CompletionStrategy {   public boolean checkCompleteness(List<SyndEntry> messages) {     if(messages!=null){       if(messages.size()>2){         return true;       }     }     return false;   } } The message argument must be of a collection type, and the method must return a Boolean indicating whether to release the accumulated messages or not. For simplicity, we have just checked the number of messages from the same category; if it's greater than two, we release the messages. Message store Until an aggregated group of messages fulfils the release criteria, the aggregator needs to store the messages temporarily. This is where message stores come into the picture. Message stores can be of two types: in-memory and persistent. The default is in-memory, and if this is to be used, then there is no need to declare this attribute at all. If a persistent message store needs to be used, then it must be declared and its reference should be given to the message-store attribute. 
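Putting the store and release strategy together, the aggregator's lifecycle is: accumulate messages per correlation key in a store, test completeness on each arrival, and release (and expire) a group once its release criterion holds. Here is a minimal, hypothetical Ruby sketch of that loop (the in-memory hash store and the size-greater-than-two rule mirror the example above; none of this is Spring API):

```ruby
# Hypothetical sketch of accumulate-then-release: groups build up in an
# in-memory "message store" and are released downstream only once they
# hold more than two messages, mirroring checkCompleteness above.
def complete?(group)
  group.size > 2
end

store = Hash.new { |h, k| h[k] = [] }   # in-memory message store

released = []
%w[java java clojure java].each do |category|
  store[category] << { category: category }
  if complete?(store[category])
    released << store.delete(category)  # expire the group upon completion
  end
end

released.length  # => 1  (only the "java" group reached three messages)
```

A persistent store changes only where the groups live, not this control flow; that is why a crash loses in-memory groups but not database-backed ones.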
A MySQL message store can be declared and referenced as follows: <bean id="feedsMySqlStore"   class="org.springframework.integration.jdbc.JdbcMessageStore">   <property name="dataSource" ref="feedsSqlDataSource"/> </bean> The data source is the Spring framework's standard JDBC data source. The greatest advantage of using a persistent store is recoverability: if the system recovers from a crash, the aggregated messages will not be lost, as they would be from an in-memory store. Another advantage is capacity: memory is limited and can accommodate only a limited number of messages for aggregation, but the database has much more space. FTP/FTPS FTP, or File Transfer Protocol, is used to transfer files across networks. FTP communications consist of two parts: server and client. The client establishes a session with the server, after which it can download or upload files. Spring Integration provides components that act as a client and connect to the FTP server to communicate with it. What about the server, that is, which server will it connect to? If you have access to any public or hosted FTP server, use it. Otherwise, the easiest way to try out the examples in this section is to set up a local instance of an FTP server. FTP setup is outside the scope of this article. Prerequisites To use Spring Integration components for FTP/FTPS, we need to add a namespace to our configuration file and then add the Maven dependency entry in the pom.xml file. Namespace support can be added by declaring the int-ftp namespace, xmlns:int-ftp="http://www.springframework.org/schema/integration/ftp", in the configuration file. The next step is to define a session factory, which encapsulates the connection details: <bean id="ftpClientSessionFactory"   class="org.springframework.integration.ftp.session.DefaultFtpSessionFactory">   <property name="host" value="localhost"/>   <property name="port" value="21"/>   <property name="username" value="testuser"/>   <property name="password" value="testuser"/> </bean> The DefaultFtpSessionFactory class is at work here, and it takes the following parameters: Host that is running the FTP server Port at which it's running the server Username Password for the server A session pool for the factory is maintained and an instance is returned when required. Spring takes care of validating that a stale session is never returned. Downloading files from the FTP server Inbound adapters can be used to read the files from the server. The most important aspect is the session factory that we just discussed in the preceding section. The following code snippet configures an FTP inbound adapter that downloads a file from a remote directory and makes it available for processing: <int-ftp:inbound-channel-adapter   channel="ftpOutputChannel"   session-factory="ftpClientSessionFactory"   remote-directory="/"   local-directory=   "C:\Chandan\Projects\siexample\ftp\ftplocalfolder"   auto-create-local-directory="true"   delete-remote-files="true"   filename-pattern="*.txt"   local-filename-generator-expression=   "#this.toLowerCase() + '.trns'">   <int:poller fixed-rate="1000"/> </int-ftp:inbound-channel-adapter> Let's quickly go through the tags used in this code: int-ftp:inbound-channel-adapter: This is the namespace support for the FTP inbound adapter. channel: This is the channel on which the downloaded files will be put as a message. session-factory: This is a factory instance that encapsulates details for connecting to a server. remote-directory: This is the directory on the server where the adapter should listen for the new arrival of files. local-directory: This is the local directory where the downloaded files should be dumped. 
auto-create-local-directory: If enabled, this will create the local directory structure if it's missing. delete-remote-files: If enabled, this will delete the files in the remote directory after they have been downloaded successfully. This helps in avoiding duplicate processing. filename-pattern: This can be used as a filter; only files matching the specified pattern will be downloaded. local-filename-generator-expression: This can be used to generate a local filename. An inbound adapter is a special listener that listens for events on the remote directory, for example, an event fired on the creation of a new file. At this point, it will initiate the file transfer. It creates a payload of type Message<File> and puts it on the output channel. By default, the filename is retained and a file with the same name as the remote file is created in the local directory. This can be overridden by using local-filename-generator-expression. Incomplete files On the remote server, there could be files that are still in the process of being written. Typically, the extension is different, for example, filename.actualext.writing. The best way to avoid reading incomplete files is to use a filename pattern that will copy only those files that have been written completely. Uploading files to the FTP server Outbound adapters can be used to write files to the server. The following code snippet reads a message from a specified channel and writes it inside the FTP server's remote directory. The remote server session is determined as usual by the session factory. Make sure the username configured in the session object has the necessary permission to write to the remote directory. 
The following configuration sets up an FTP adapter that can upload files to the specified directory:   <int-ftp:outbound-channel-adapter channel="ftpOutputChannel"     remote-directory="/uploadfolder"     session-factory="ftpClientSessionFactory"     auto-create-directory="true">   </int-ftp:outbound-channel-adapter> Here is a brief description of the tags used: int-ftp:outbound-channel-adapter: This is the namespace support for the FTP outbound adapter. channel: This is the name of the channel whose payload will be written to the remote server. remote-directory: This is the remote directory where files will be put. The user configured in the session factory must have appropriate permission. session-factory: This encapsulates details for connecting to the FTP server. auto-create-directory: If enabled, this will automatically create the remote directory if it's missing, and the given user should have sufficient permission. The payload on the channel need not necessarily be a file type; it can be one of the following: java.io.File: A Java file object byte[]: This is a byte array that represents the file contents java.lang.String: This is the text that represents the file contents Avoiding partially written files Files on the remote server must be made available only when they have been written completely, not while they are still partial. Spring uses a mechanism of writing the files to a temporary location and publishing their availability only once they have been completely written. By default, the suffix .writing is used, but it can be changed using the temporary-file-suffix property. This can be completely disabled by setting use-temporary-file-name to false. FTP outbound gateway A gateway, by definition, is a two-way component: it accepts input and provides a result for further processing. So what is the input and output in the case of FTP? It issues commands to the FTP server and returns the result of the command. 
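The write-then-rename mechanism behind the temporary suffix can be illustrated outside Spring. Here is a small, hypothetical Ruby sketch (the `safe_write` helper is mine, not Spring Integration API) of the same idea: content is written under a temporary name and only renamed to its final name once the write completes, so pollers never observe a half-written file:

```ruby
# Hypothetical sketch of write-then-rename: a partial file is only ever
# visible under the temporary *.writing suffix; the final name appears
# atomically once the content is complete.
require "fileutils"
require "tmpdir"

def safe_write(dir, name, content, suffix: ".writing")
  tmp = File.join(dir, name + suffix)
  File.write(tmp, content)                 # in-progress file: *.writing
  FileUtils.mv(tmp, File.join(dir, name))  # atomic rename on the same filesystem
end

Dir.mktmpdir do |dir|
  safe_write(dir, "report.txt", "all rows")
  Dir.children(dir)  # => ["report.txt"], with no *.writing leftovers
end
```

A consumer filtering by `filename-pattern="*.txt"` would therefore never pick up the in-progress `report.txt.writing` file.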
The following configuration will issue an ls command with the -1 option to the server. The result is a list of string objects containing the filenames, which will be put on the reply-channel. The code is as follows: <int-ftp:outbound-gateway id="ftpGateway"     session-factory="ftpClientSessionFactory"     request-channel="commandInChannel"     command="ls"     command-options="-1"     reply-channel="commandOutChannel"/> The tags are pretty simple: int-ftp:outbound-gateway: This is the namespace support for the FTP outbound gateway session-factory: This is the wrapper for details needed to connect to the FTP server command: This is the command to be issued command-options: This is the option for the command reply-channel: The response of the command is put on this channel FTPS support For FTPS support, all that is needed is to change the factory class: an instance of org.springframework.integration.ftp.session.DefaultFtpsSessionFactory should be used. Note the s in DefaultFtpsSessionFactory. Once the session is created with this factory, it's ready to communicate over a secure channel. Here is an example of a secure session factory configuration: <bean id="ftpSClientFactory"   class="org.springframework.integration.ftp.session.   DefaultFtpsSessionFactory">   <property name="host" value="localhost"/>   <property name="port" value="22"/>   <property name="username" value="testuser"/>   <property name="password" value="testuser"/> </bean> Although it is obvious, I would remind you that the FTP server must be configured to support a secure connection and open the appropriate port. Social integration Any application in today's context is incomplete if it does not provide support for social messaging. Spring Integration provides in-built support for many social interfaces such as e-mails, Twitter feeds, and so on. Let's discuss the implementation of Twitter in this section. 
Prior to Version 2.1, Spring Integration was dependent on the Twitter4J API for Twitter support, but it now leverages the Spring Social module for Twitter integration. Spring Integration provides an interface for receiving and sending tweets, as well as for searching and publishing the search results in messages. Twitter uses OAuth for authentication purposes. An app must be registered before we start Twitter development on it.

Prerequisites

Let's look at the steps that need to be completed before we can use a Twitter component in our Spring Integration example:

Twitter account setup: A Twitter account is needed. Perform the following steps to get the keys that will allow the user to use Twitter via the API:

Visit https://apps.twitter.com/.
Sign in to your account.
Click on Create New App.
Enter the details such as Application name, Description, Website, and so on. All fields are self-explanatory and appropriate help has also been provided. The value for the field Website need not be a valid one; put an arbitrary website name in the correct format.
Click on the Create your application button. If the application is created successfully, a confirmation message will be shown and the Application Management page will appear, as shown here:
Go to the Keys and Access Tokens tab and note the details for Consumer Key (API Key) and Consumer Secret (API Secret) under Application Settings, as shown in the following screenshot:
You need additional access tokens so that applications can use Twitter via the APIs. Click on Create my access token; it takes a while to generate these tokens. Once they are generated, note down the values of Access Token and Access Token Secret.
Go to the Permissions tab and grant the Read, Write and Access direct messages permission.

After performing all these steps, and with the required keys and access tokens, we are ready to use Twitter.
Let's store these in the twitterauth.properties property file:

twitter.oauth.apiKey= lnrDlMXSDnJumKLFRym02kHsy
twitter.oauth.apiSecret= 6wlriIX9ay6w2f6at6XGQ7oNugk6dqNQEAArTsFsAU6RU8F2Td
twitter.oauth.accessToken= 158239940-FGZHcbIDtdEqkIA77HPcv3uosfFRnUM30hRix9TI
twitter.oauth.accessTokenSecret= H1oIeiQOlvCtJUiAZaachDEbLRq5m91IbP4bhg1QPRDeh

The next step towards Twitter integration is the creation of a Twitter template. This is similar to the datasource or connection factory for databases, JMS, and so on. It encapsulates the details needed to connect to a social platform. Here is the code snippet:

<context:property-placeholder location="classpath:twitterauth.properties"/>
<bean id="twitterTemplate"
  class="org.springframework.social.twitter.api.impl.TwitterTemplate">
  <constructor-arg value="${twitter.oauth.apiKey}"/>
  <constructor-arg value="${twitter.oauth.apiSecret}"/>
  <constructor-arg value="${twitter.oauth.accessToken}"/>
  <constructor-arg value="${twitter.oauth.accessTokenSecret}"/>
</bean>

As I mentioned, the template encapsulates all the values. Here is the order of the arguments:

apiKey
apiSecret
accessToken
accessTokenSecret

With all the setup in place, let's now do some real work. After declaring the int-twitter namespace on the beans element, an inbound channel adapter can be configured as follows:

<int-twitter:inbound-channel-adapter
  twitter-template="twitterTemplate"
  channel="twitterChannel">
</int-twitter:inbound-channel-adapter>

The components in this code are covered in the following bullet points:

int-twitter:inbound-channel-adapter: This is the namespace support for Twitter's inbound channel adapter.
twitter-template: This is the most important aspect. The Twitter template encapsulates which account to use to poll the Twitter site. The details given in the preceding code snippet are fake; they should be replaced with real connection parameters.
channel: Messages are dumped on this channel.
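Messages arriving on the channel can then be handed to any downstream endpoint. As a minimal sketch (the tweetLogger bean, its class, and the method name are hypothetical, not part of the original example), a service activator could log each incoming tweet; each message payload is a Spring Social Tweet object:

```xml
<!-- Hypothetical consumer: handles every tweet arriving on twitterChannel -->
<int:service-activator input-channel="twitterChannel"
    ref="tweetLogger" method="logTweet"/>

<!-- logTweet(Tweet tweet) would simply print tweet.getText() -->
<bean id="tweetLogger" class="com.example.TweetLogger"/>
```

This keeps the adapter configuration free of any processing logic; swapping the logger for a real handler only requires changing the bean reference.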
These adapters can be further used for other applications, such as searching messages, retrieving direct messages, and retrieving tweets that mention your account. Let's have a quick look at the code snippets for these adapters. I will not go into detail for each one; they are very similar to what has been discussed previously.

Search: This adapter helps to search the tweets for the parameter configured in the query tag. The code is as follows:

<int-twitter:search-inbound-channel-adapter id="testSearch"
  twitter-template="twitterTemplate"
  query="#springintegration"
  channel="twitterSearchChannel">
</int-twitter:search-inbound-channel-adapter>

Retrieving Direct Messages: This adapter allows us to receive the direct messages for the account in use (the account configured in the Twitter template). The code is as follows:

<int-twitter:dm-inbound-channel-adapter
  id="testdirectMessage"
  twitter-template="twitterTemplate"
  channel="twitterDirectMessageChannel">
</int-twitter:dm-inbound-channel-adapter>

Retrieving Mention Messages: This adapter allows us to receive messages that mention the configured account via the @user tag (the account configured in the Twitter template). The code is as follows:

<int-twitter:mentions-inbound-channel-adapter
  id="testmentionMessage"
  twitter-template="twitterTemplate"
  channel="twitterMentionMessageChannel">
</int-twitter:mentions-inbound-channel-adapter>

Sending tweets

Spring Integration exposes an outbound adapter to send tweets. Here is a sample code:

<int-twitter:outbound-channel-adapter
    twitter-template="twitterTemplate"
    channel="twitterSendMessageChannel"/>

Whatever message is put on the twitterSendMessageChannel channel is tweeted by this adapter. Similar to the inbound adapters, an outbound adapter provides support for sending direct messages.
Here is a simple example of a direct message outbound adapter:

<int-twitter:dm-outbound-channel-adapter
  twitter-template="twitterTemplate"
  channel="twitterSendDirectMessage"/>

Any message that is put on the twitterSendDirectMessage channel is sent to the user directly. But where is the name of the user to whom the message will be sent? It is decided by a header in the message, TwitterHeaders.DM_TARGET_USER_ID. This must be populated either programmatically, or by using enrichers or SpEL. For example, it can be added programmatically as follows:

Message message = MessageBuilder.withPayload("Chandan")
  .setHeader(TwitterHeaders.DM_TARGET_USER_ID, "test_id").build();

Alternatively, it can be populated by using a header enricher, as follows:

<int:header-enricher input-channel="twitterIn"
  output-channel="twitterOut">
  <int:header name="twitter_dmTargetUserId" value="test_id"/>
</int:header-enricher>

Twitter search outbound gateway

As gateways provide a two-way window, the search outbound gateway can be used to issue dynamic search commands and receive the results as a collection. If no result is found, the collection is empty. Let's configure a search outbound gateway, as follows:

<int-twitter:search-outbound-gateway id="twitterSearch"
    request-channel="searchQueryChannel"
    twitter-template="twitterTemplate"
    search-args-expression="#springintegration"
    reply-channel="searchQueryResultChannel"/>

And here is what the tags covered in this code mean:

int-twitter:search-outbound-gateway: This is the namespace for the Twitter search outbound gateway
request-channel: This is the channel that is used to send search requests to this gateway
twitter-template: This is the Twitter template reference
search-args-expression: This is used as the arguments for the search
reply-channel: This is the channel on which the search results are populated

This gives us enough to get started with the social integration aspects of the Spring framework.
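To see how these pieces compose, here is a minimal sketch of a flow (the channel names, the polling interval, and the transformer expression are assumptions for illustration, not from the original text): tweets matching a hashtag are polled periodically, transformed into a plain status string, and re-published through the outbound adapter:

```xml
<int:channel id="searchResults"/>

<!-- Poll Twitter every 60 seconds for tweets matching the query -->
<int-twitter:search-inbound-channel-adapter id="hashtagSearch"
    twitter-template="twitterTemplate"
    query="#springintegration"
    channel="searchResults">
  <int:poller fixed-rate="60000"/>
</int-twitter:search-inbound-channel-adapter>

<!-- Each payload is a Tweet; build a status string from its text -->
<int:transformer input-channel="searchResults"
    output-channel="twitterSendMessageChannel"
    expression="'Seen: ' + payload.text"/>

<!-- Whatever lands on this channel is tweeted -->
<int-twitter:outbound-channel-adapter
    twitter-template="twitterTemplate"
    channel="twitterSendMessageChannel"/>
```

The point of the sketch is that adapters on both ends are wired together purely through channels; the transformer in the middle is the only place that knows about the payload type.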
Enterprise messaging

An enterprise landscape is incomplete without JMS; it is one of the most commonly used mediums of enterprise integration. Spring provides very good support for this. Spring Integration builds over that support and provides adapters and gateways to receive and consume messages from many middleware brokers such as ActiveMQ, RabbitMQ, Redis, and so on. Spring Integration provides inbound and outbound adapters to send and receive messages, along with gateways that can be used in a request/reply scenario. Let's walk through these implementations in a little more detail. A basic understanding of the JMS mechanism and its concepts is expected; it is not possible to cover even an introduction to JMS here. Let's start with the prerequisites.

Prerequisites

To use Spring Integration messaging components, the namespace support and the relevant Maven dependency should be added. Namespace support can be added by declaring the following namespace on the beans element:

xmlns:int-jms="http://www.springframework.org/schema/integration/jms"

The Maven entry can be provided using the following code snippet:

<dependency>
  <groupId>org.springframework.integration</groupId>
  <artifactId>spring-integration-jms</artifactId>
  <version>${spring.integration.version}</version>
</dependency>

After adding these two dependencies, we are ready to use the components. But before we can use an adapter, we must configure an underlying message broker. Let's configure ActiveMQ.
Add the following in pom.xml:

<dependency>
  <groupId>org.apache.activemq</groupId>
  <artifactId>activemq-core</artifactId>
  <version>${activemq.version}</version>
  <exclusions>
    <exclusion>
      <artifactId>spring-context</artifactId>
      <groupId>org.springframework</groupId>
    </exclusion>
  </exclusions>
</dependency>
<dependency>
  <groupId>org.springframework</groupId>
  <artifactId>spring-jms</artifactId>
  <version>${spring.version}</version>
  <scope>compile</scope>
</dependency>

After this, we are ready to create a connection factory and a JMS queue that will be used by the adapters to communicate. First, create a connection factory. As you will notice, it is wrapped in Spring's CachingConnectionFactory, but the underlying provider is ActiveMQ:

<bean id="connectionFactory"
  class="org.springframework.jms.connection.CachingConnectionFactory">
  <property name="targetConnectionFactory">
    <bean class="org.apache.activemq.ActiveMQConnectionFactory">
      <property name="brokerURL" value="vm://localhost"/>
    </bean>
  </property>
</bean>

Let's create a queue that can be used to retrieve and put messages:

<bean
  id="feedInputQueue"
  class="org.apache.activemq.command.ActiveMQQueue">
  <constructor-arg value="queue.input"/>
</bean>

Now, we are ready to send and retrieve messages from the queue. Let's look at each operation one by one.

Receiving messages – the inbound adapter

Spring Integration provides two ways of receiving messages: polling and event-driven listening. Both of them are based on the underlying Spring framework's comprehensive support for JMS. JmsTemplate is used by the polling adapter, while MessageListener is used by the event-driven adapter. As the name suggests, a polling adapter keeps polling the queue for the arrival of new messages and puts the message on the configured channel if it finds one.
On the other hand, in the case of the event-driven adapter, it's the responsibility of the server to notify the configured adapter.

The polling adapter

Let's start with a code sample:

<int-jms:inbound-channel-adapter
  connection-factory="connectionFactory"
  destination="feedInputQueue"
  channel="jmsProcessedChannel">
  <int:poller fixed-rate="1000" />
</int-jms:inbound-channel-adapter>

This code snippet contains the following components:

int-jms:inbound-channel-adapter: This is the namespace support for the JMS inbound adapter
connection-factory: This is the encapsulation for the underlying JMS provider setup, such as ActiveMQ
destination: This is the JMS queue where the adapter is listening for incoming messages
channel: This is the channel on which incoming messages should be put

There is a poller element, so it's obvious that it is a polling-based adapter. It can be configured in one of two ways: by providing a JMS template, or by using a connection factory along with a destination. I have used the latter approach. The preceding adapter polls the queue given in its destination attribute and, once it gets a message, puts it on the channel configured in the channel attribute.

The event-driven adapter

Similar to polling adapters, event-driven adapters also need either a reference to an implementation of AbstractMessageListenerContainer, or a connection factory and a destination. Again, I will use the latter approach. Here is a sample configuration:

<int-jms:message-driven-channel-adapter
  connection-factory="connectionFactory"
  destination="feedInputQueue"
  channel="jmsProcessedChannel"/>

There is no poller sub-element here. As soon as a message arrives at its destination, the adapter is invoked, which puts it on the configured channel.

Sending messages – the outbound adapter

Outbound adapters convert messages on the channel to JMS messages and put them on the configured queue.
To convert Spring Integration messages to JMS messages, the outbound adapter uses JmsSendingMessageHandler. This is an implementation of MessageHandler. Outbound adapters should be configured with either JmsTemplate, or with a connection factory and a destination queue. Keeping in sync with the preceding examples, we will take the latter approach, as follows:

<int-jms:outbound-channel-adapter
  connection-factory="connectionFactory"
  channel="jmsChannel"
  destination="feedInputQueue"/>

This adapter receives the Spring Integration message from jmsChannel, converts it to a JMS message, and puts it on the destination.

Gateway

A gateway provides request/reply behavior instead of a one-way send or receive. For example, after sending a message, we might expect a reply, or we may want to send an acknowledgement after receiving a message.

The inbound gateway

Inbound gateways provide an alternative to inbound adapters when request/reply capabilities are expected. An inbound gateway is an event-based implementation that listens for a message on the queue, converts it to a Spring Message, and puts it on the channel. Here is a sample code:

<int-jms:inbound-gateway
  request-destination="feedInputQueue"
  request-channel="jmsProcessedChannel"/>

However, this is what an inbound adapter does; even the configuration is similar, except for the namespace. So, what is the difference? The difference lies in replying back to the reply destination. Once the message is put on the channel, it will be propagated down the line, and at some stage a reply will be generated and sent back as an acknowledgement. The inbound gateway, on receiving this reply, will create a JMS message and put it back on the reply destination queue. Then, where is the reply destination? The reply destination is decided in one of the following ways:

If the original message has a JMSReplyTo property, it takes the highest precedence.
The inbound gateway looks for a configured default-reply-destination, which can be given either as a destination name or as a direct reference; to specify a direct reference, the default-reply-destination attribute should be used.
An exception will be thrown by the gateway if it does not find either of the preceding two.

The outbound gateway

Outbound gateways should be used in scenarios where a reply is expected for the sent messages. Let's start with an example:

<int-jms:outbound-gateway
  request-channel="jmsChannel"
  request-destination="feedInputQueue"
  reply-channel="jmsProcessedChannel" />

The preceding configuration will send messages to request-destination. When an acknowledgement is received, it can be fetched from the configured reply-destination. If reply-destination has not been configured, a JMS TemporaryQueue will be created.

Summary

In this article, we covered out-of-the-box components provided by the Spring Integration framework. The article also showcased the simplicity and abstraction that Spring Integration provides when it comes to handling complicated integrations, be it file-based, HTTP, JMS, or any other integration mechanism.

Resources for Article: Further resources on this subject: Modernizing Our Spring Boot App [article] Home Security by Beaglebone [article] Integrating With Other Frameworks [article]
Testing a UI Using WebDriverJS

Packt
17 Feb 2015
30 min read
In this article by Enrique Amodeo, author of the book Learning Behavior-driven Development with JavaScript, we will look into an advanced concept: how to test a user interface. For this purpose, you will learn the following topics:

Using WebDriverJS to manipulate a browser and inspect the resulting HTML generated by our UI
Organizing our UI codebase to make it easily testable
The right abstraction level for our UI tests

Our strategy for UI testing

There are two traditional strategies towards approaching the problem of UI testing: record-and-replay tools and end-to-end testing. The first approach, record-and-replay, leverages tools capable of recording user activity in the UI and saving it into a script file. This script file can be later executed to perform exactly the same UI manipulation as the user performed and to check whether the results are exactly the same. This approach is not very compatible with BDD because of the following reasons:

We cannot test-first our UI. To be able to use the UI and record the user activity, we first need to have most of the code of our application in place. This is not a problem in the waterfall approach, where QA and testing are performed after the coding phase is finished. However, in BDD, we aim to document the product features as automated tests, so we should write the tests before or during the coding.
The resulting test scripts are low-level and totally disconnected from the problem domain. There is no way to use them as live documentation for the requirements of the system.
The resulting test suite is brittle and it will stop working whenever we make slight changes, even cosmetic ones, to the UI. The problem is that the tools record the low-level interaction with the system, which depends on technical details of the HTML.
The other classic approach is end-to-end testing, where we do not only test the UI layer, but also most of the system or even the whole of it. To perform the setup of the tests, the most common approach is to substitute the third-party systems with test doubles. Normally, the database is under the control of the development team, so some practitioners use a regular database for the setup. However, we could use an in-memory database or even mock the DAOs. In any case, this approach prompts us to create an integrated test suite where we are not only testing the correctness of the UI, but the business logic as well. In the context of this discussion, an integrated test is a test that checks several layers of abstraction, or subsystems, in combination. Do not confuse it with the act of testing several classes or functions together. This approach is not inherently against BDD; for example, we could use Cucumber.js to capture the features of the system and implement Gherkin steps using WebDriver to drive the UI and make assertions. In fact, for most people, when you say BDD they always interpret this term to refer to this kind of test. We will end up writing a lot of test cases, because we need to combine the scenarios from the business logic domain with the ones from the UI domain. Furthermore, in which language should we formulate the tests? If we use the UI language, maybe it will be too low-level to easily describe business concepts. If we use the business domain language, maybe we will not be able to test the important details of the UI because they are too low-level. Alternatively, we can even end up with tests that mix UI language with business terminology, so they will neither be focused nor very clear to anyone. Choosing the right tests for the UI If we want to test whether the UI works, why should we test the business rules? After all, this is already tested in the BDD test suite of the business logic layer. 
To decide which tests to write, we should first determine the responsibilities of the UI layer, which are as follows:

Presenting the information provided by the business layer to the user in a nice way.
Transforming user interaction into requests for the business layer.
Controlling the changes in the appearance of the UI components, which includes things such as enabling/disabling controls, highlighting entry fields, showing/hiding UI elements, and so on.
Orchestration between the UI components. Transferring and adapting information between the UI components and navigation between pages fall under this category.

We do not need to write tests about business rules, and we should not assume much about the business layer itself, apart from a loose contract. How should we word our tests? We should use a UI-related language when we talk about what the user sees and does. Words such as fields, buttons, forms, links, click, hover, highlight, enable/disable, or show and hide are relevant in this context. However, we should not go too far; otherwise, our tests will be too brittle. Saying, for example, that the name field should have a pink border is too low-level. The moment the designer decides to use red instead of pink, or changes his mind and decides to change the background color instead of the border, our test will break. We should aim for tests that express the real intention of the user interface; for example, the name field should be highlighted as incorrect.

The testing architecture

At this point, we could write tests relevant for our UI using the following testing architecture:

A simple testing architecture for our UI

We can use WebDriver to issue user gestures to interact with the browser. These user gestures are transformed by the browser into DOM events that are the inputs of our UI logic and will trigger operations on it. We can use WebDriver again to read the resulting HTML in the assertions.
We can simply use a test double to impersonate our server, so we can set up our tests easily. This architecture is very simple and sounds like a good plan, but it is not! There are three main problems here:

UI testing is very slow. Take into account that the boot time and shutdown phase can take 3 seconds on a normal laptop. Each UI interaction using WebDriver can take between 50 and 100 milliseconds, and the latency with the fake server can be an extra 10 milliseconds. This gives us only around 10 tests per second, plus an extra 3 seconds.
UI tests are complex and difficult to diagnose when they fail. What is failing? The selectors we use to tell WebDriver how to find the relevant elements? Some race condition we were not aware of? A cross-browser issue? Also note that our test is now distributed between two different processes, a fact that always makes debugging more difficult.
UI tests are inherently brittle. We can try to make them less brittle with best practices, but even then a change in the structure of the HTML code will sometimes break our tests. This is a bad thing because the UI often changes more frequently than the business layer.

As UI testing is very risky and expensive, we should try to write as few tests that interact with the UI as possible. We can achieve this, without losing testing power, with the following testing architecture:

A smarter testing architecture

We have now split our UI layer into two components: the view and the UI logic. This design aligns with the family of MV* design patterns. In the context of this article, the view corresponds with a passive view, and the UI logic corresponds with the controller or the presenter, in combination with the model. A passive view is usually very hard to test; so in this article we will focus mostly on how to do it. You will often be able to easily separate the passive view from the UI logic, especially if you are using an MV* pattern, such as MVC, MVP, or MVVM.
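To make this split concrete, here is a minimal sketch in plain JavaScript (all names are illustrative, not taken from the book's codebase): the UI logic holds the rules and state, talks only to a view interface, and can therefore be exercised in-memory with a fake view, with no browser involved.

```javascript
// UI logic: holds the rules, no DOM access — testable entirely in-memory
function createLoginLogic(view) {
  return {
    userTyped: function (name) {
      // Rule: the submit button is enabled only for non-blank names
      view.setButtonEnabled(name.trim().length > 0);
    }
  };
}

// A fake passive view used in fast, in-memory tests
function createFakeView() {
  return {
    buttonEnabled: false,
    setButtonEnabled: function (enabled) { this.buttonEnabled = enabled; }
  };
}

var view = createFakeView();
var logic = createLoginLogic(view);

logic.userTyped('   ');
console.log(view.buttonEnabled); // false

logic.userTyped('Alice');
console.log(view.buttonEnabled); // true
```

The real passive view would implement the same setButtonEnabled contract against the DOM, and only that thin implementation needs WebDriver-based tests.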
Most of our tests will be for the UI logic. This is the component that implements the client-side validation, orchestration of UI components, navigation, and so on. It is the UI logic component that has all the rules about how the user can interact with the UI, and hence it needs to maintain some kind of internal state. The UI logic component can be tested completely in memory using standard techniques. We can simply mock the XMLHttpRequest object, or the corresponding object in the framework we are using, and test everything in memory using a single Node.js process. No interaction with the browser and the HTML is needed, so these tests will be blazingly fast and robust. Then we need to test the view. This is a very thin component with only two responsibilities: Manipulating and updating the HTML to present the user with the information whenever it is instructed to do so by the UI logic component Listening for HTML events and transforming them into suitable requests for the UI logic component The view should not have more responsibilities, and it is a stateless component. It simply does not need to store the internal state, because it only transforms and transmits information between the HTML and the UI logic. Since it is the only component that interacts with the HTML, it is the only one that needs to be tested using WebDriver. The point of all of this is that the view can be tested with only a bunch of tests that are conceptually simple. Hence, we minimize the number and complexity of the tests that need to interact with the UI. WebDriverJS Testing the passive view layer is a technical challenge. We not only need to find a way for our test to inject native events into the browser to simulate user interaction, but we also need to be able to inspect the DOM elements and inject and execute scripts. This was very challenging to do approximately 5 years ago. 
In fact, it was considered complex and expensive, and some practitioners recommended not testing the passive view. After all, this layer is very thin and mostly contains the bindings of the UI to the HTML DOM, so the risk of error is not supposed to be high, especially if we use modern cross-browser frameworks to implement this layer. Nonetheless, nowadays the technology has evolved, and we can do this kind of testing without much fuss if we use the right tools. One of these tools is Selenium 2.0 (also known as WebDriver) and its library for JavaScript, which is WebDriverJS (https://code.google.com/p/selenium/wiki/WebDriverJs).

In this book, we will use WebDriverJS, but there are other bindings in JavaScript for Selenium 2.0, such as WebDriverIO (http://webdriver.io/). You can use the one you like most or even try both. The point is that the techniques I will show you here can be applied with any client of WebDriver or even with other tools that are not WebDriver.

Selenium 2.0 is a tool that allows us to make direct calls to a browser automation API. This way, we can simulate native events, we can access the DOM, and we can control the browser. Each browser provides a different API and has its own quirks, but Selenium 2.0 offers us a unified API called the WebDriver API. This allows us to interact with different browsers without changing the code of our tests. As we are accessing the browser directly, we do not need a special server, unless we want to control browsers that are on a different machine. Actually, this is only true, due to some technical limitations, if we want to test against a Google Chrome or a Firefox browser using WebDriverJS.
So, basically, the testing architecture for our passive view looks like this: Testing with WebDriverJS We can see that we use WebDriverJS for the following: Sending native events to manipulate the UI, as if we were the user, during the action phase of our tests Inspecting the HTML during the assert phase of our test Sending small scripts to set up the test doubles, check them, and invoke the update method of our passive view Apart from this, we need some extra infrastructure, such as a web server that serves our test HTML page and the components we want to test. As is evident from the diagram, the commands of WebDriverJS require some network traffic to able to send the appropriate request to the browser automation API, wait for the browser to execute, and get the result back through the network. This forces the API of WebDriverJS to be asynchronous in order to not block unnecessarily. That is why WebDriverJS has an API designed around promises. Most of the methods will return a promise or an object whose methods return promises. This plays perfectly well with Mocha and Chai.  There is a W3C specification for the WebDriver API. If you want to have a look, just visit https://dvcs.w3.org/hg/webdriver/raw-file/default/webdriver-spec.html. The API of WebDriverJS is a bit complex, and you can find its official documentation at http://selenium.googlecode.com/git/docs/api/javascript/module_selenium-webdriver.html. However, to follow this article, you do not need to read it, since I will now show you the most important API that WebDriverJS offers us. Finding and interacting with elements It is very easy to find an HTML element using WebDriverJS; we just need to use either the findElement or the findElements methods. Both methods receive a locator object specifying which element or elements to find. The first method will return the first element it finds, or simply fail with an exception, if there are no elements matching the locator. 
The findElements method will return a promise for an array with all the matching elements. If there are no matching elements, the promised array will be empty and no error will be thrown. How do we specify which elements we want to find? To do so, we need to use a locator object as a parameter. For example, if we would like to find the element whose identifier is order_item1, then we could use the following code:

var By = require('selenium-webdriver').By;

driver.findElement(By.id('order_item1'));

We need to import the selenium-webdriver module and capture its locator factory object. By convention, we store this locator factory in a variable called By. Later, we will see how we can get a WebDriverJS instance. This code is very expressive, but a bit verbose. There is another version of this:

driver.findElement({ id: 'order_item1' });

Here, the locator criteria are passed in the form of a plain JSON object. There is no need to use the By object or any factory. Which version is better? Neither. You just use the one you like most. In this article, the plain JSON locator will be used. The following are the criteria for finding elements:

Using the tag name, for example, to locate all the <li> elements in the document:

driver.findElements(By.tagName('li'));
driver.findElements({ tagName: 'li' });

We can also locate using the name attribute. It can be handy to locate the input fields. The following code will locate the first element named password:

driver.findElement(By.name('password'));
driver.findElement({ name: 'password' });

Using the class name; for example, the following code will locate the first element that contains a class called item:

driver.findElement(By.className('item'));
driver.findElement({ className: 'item' });

We can use any CSS selector that our target browser understands.
If the target browser does not understand the selector, it will throw an exception; for example, to find the second item of an order (assuming there is only one order on the page): driver.findElement(By.css('.order .item:nth-of-type(2)')); driver.findElement({ css: '.order .item:nth-of-type(2)' }); Using only the CSS selector you can locate any element, and it is the one I recommend. The other ones can be very handy in specific situations. There are more ways of locating elements, such as linkText, partialLinkText, or xpath, but I seldom use them. Locating elements by their text, such as in linkText or partialLinkText, is brittle because small changes in the wording of the text can break the tests. Also, locating by xpath is not as useful in HTML as using a CSS selector. Obviously, it can be used if the UI is defined as an XML document, but this is very rare nowadays. In both methods, findElement and findElements, the resulting HTML elements are wrapped as a WebElement object. This object allows us to send an event to that element or inspect its contents. Some of its methods that allow us to manipulate the DOM are as follows: clear(): This will do nothing unless WebElement represents an input control. In this case, it will clear its value and then trigger a change event. It returns a promise that will be fulfilled whenever the operation is done. sendKeys(text or key, …): This will do nothing unless WebElement is an input control. In this case, it will send the equivalents of keyboard events to the parameters we have passed. It can receive one or more parameters with a text or key object. If it receives a text, it will transform the text into a sequence of keyboard events. This way, it will simulate a user typing on a keyboard. This is more realistic than simply changing the value property of an input control, since the proper keyDown, keyPress, and keyUp events will be fired. A promise is returned that will be fulfilled when all the key events are issued. 
For example, to simulate that a user enters some search text in an input field and then presses Enter, we can use the following code:
var Key = require('selenium-webdriver').Key;

var searchField = driver.findElement({name: 'searchTxt'});
searchField.sendKeys('BDD with JS', Key.ENTER);
The webdriver.Key object allows us to specify any key that does not represent a character, such as Enter, the up arrow, Command, Ctrl, Shift, and so on. We can also use its chord method to represent a combination of several keys pressed at the same time. For example, to simulate Alt + Command + J, use driver.sendKeys(Key.chord(Key.ALT, Key.COMMAND, 'J'));. click(): This will issue a click event just in the center of the element. The returned promise will be fulfilled when the event is fired. Sometimes, the center of an element is nonclickable, and an exception is thrown! This can happen, for example, with table rows, since the center of a table row may just be the padding between cells! submit(): This will look for the form that contains this element and will issue a submit event. Apart from sending events to an element, we can inspect its contents with the following methods: getId(): This will return a promise with the internal identifier of this element used by WebDriver. Note that this is not the value of the DOM ID property! getText(): This will return a promise that will be fulfilled with the visible text inside this element. It will include the text in any child element and will trim the leading and trailing whitespaces. Note that, if this element is not displayed or is hidden, the resulting text will be an empty string! getInnerHtml() and getOuterHtml(): These will return a promise that will be fulfilled with a string that contains innerHTML or outerHTML of this element. isSelected(): This will return a promise with a Boolean that determines whether the element has either been selected or checked. This method is designed to be used with the <option> elements.
isEnabled(): This will return a promise with a Boolean that determines whether the element is enabled or not. isDisplayed(): This will return a promise with a Boolean that determines whether the element is displayed or not. Here, "displayed" is taken in a broad sense; in general, it means that the user can see the element without resizing the browser. For example, if the element is hidden, has display: none, has no size, or is in an inaccessible part of the document, the returned promise will be fulfilled as false. getTagName(): This will return a promise with the tag name of the element. getSize(): This will return a promise with the size of the element. The size comes as a JSON object with width and height properties that indicate the width and height in pixels of the bounding box of the element. The bounding box includes padding, margin, and border. getLocation(): This will return a promise with the position of the element. The position comes as a JSON object with x and y properties that indicate the coordinates in pixels of the element relative to the page. getAttribute(name): This will return a promise with the value of the specified attribute. Note that WebDriver does not distinguish between attributes and properties! If there is neither an attribute nor a property with that name, the promise will be fulfilled as null. If the attribute is a "boolean" HTML attribute (such as checked or disabled), the promise will be evaluated as true only if the attribute is present. If there is both an attribute and a property with the same name, the attribute value will be used. If you really need to be precise about getting an attribute or a property, it is much better to use an injected script to get it. getCssValue(cssPropertyName): This will return a promise with a string that represents the computed value of the specified CSS property.
The computed value is the resulting value after the browser has applied all the CSS rules and the style and class attributes. Note that the specific representation of the value depends on the browser; for example, the color property can be returned as red, #ff0000, or rgb(255, 0, 0) depending on the browser. This is not cross-browser, so we should avoid this method in our tests. findElement(locator) and findElements(locator): These will return the first descendant element, or all the descendant elements, of this element that match the locator. isElementPresent(locator): This will return a promise with a Boolean that indicates whether there is at least one descendant element that matches this locator. As you can see, the WebElement API is pretty simple and allows us to do most of our tests easily. However, what if we need to perform some complex interaction with the UI, such as drag-and-drop? Complex UI interaction WebDriverJS allows us to define a complex action gesture in an easy way using the DSL defined in the webdriver.ActionSequence object. This DSL allows us to define any sequence of browser events using the builder pattern. For example, to simulate a drag-and-drop gesture, proceed with the following code:
var beverageElement = driver.findElement({ id: 'expresso' });
var orderElement = driver.findElement({ id: 'order' });
driver.actions()
    .mouseMove(beverageElement)
    .mouseDown()
    .mouseMove(orderElement)
    .mouseUp()
    .perform();
We want to drag an espresso to our order, so we move the mouse to the center of the espresso and press the mouse button. Then, we move the mouse, by dragging the element, over the order. Finally, we release the mouse button to drop the espresso. We can add as many actions as we want, but the sequence of events will not be executed until we call the perform method. The perform method will return a promise that will be fulfilled when the full sequence is finished.
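The builder pattern behind driver.actions() can be sketched in plain JavaScript. The following is a simplified illustration of the idea (each chained call merely records an action; nothing runs until perform() is called), not the actual WebDriverJS implementation:

```javascript
// Minimal sketch of the ActionSequence builder pattern.
// Each method records an action name and returns `this` to allow chaining;
// nothing happens until perform() is called.
function ActionSequence() {
  this.actions = [];
}
ActionSequence.prototype.mouseMove = function (target) {
  this.actions.push("mouseMove");
  return this;
};
ActionSequence.prototype.mouseDown = function () {
  this.actions.push("mouseDown");
  return this;
};
ActionSequence.prototype.mouseUp = function () {
  this.actions.push("mouseUp");
  return this;
};
ActionSequence.prototype.perform = function () {
  // The real WebDriverJS sends the recorded sequence to the browser
  // and returns a promise; here we simply return the recorded names.
  return this.actions.slice();
};

var performed = new ActionSequence()
  .mouseMove("espresso")
  .mouseDown()
  .mouseMove("order")
  .mouseUp()
  .perform();
// performed → ["mouseMove", "mouseDown", "mouseMove", "mouseUp"]
```

Returning this from each method is what makes the fluent chaining possible; WebDriverJS follows the same pattern, with perform() returning a promise instead of an array.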
The webdriver.ActionSequence object has the following methods: sendKeys(keys...): This sends a sequence of key events, exactly like the method with the same name that we saw earlier on WebElement. The difference is that the keys will be sent to the document instead of a specific element. keyUp(key) and keyDown(key): These send the keyUp and keyDown events. Note that these methods only admit the modifier keys: Alt, Ctrl, Shift, command, and meta. mouseMove(targetLocation, optionalOffset): This will move the mouse from the current location to the target location. The location can be defined either as a WebElement or as page-relative coordinates in pixels, using a JSON object with x and y properties. If we provide the target location as a WebElement, the mouse will be moved to the center of the element. In this case, we can override this behavior by supplying an extra optional parameter indicating an offset relative to the top-left corner of the element. This could be needed in the case that the center of the element cannot receive events. mouseDown(), click(), doubleClick(), and mouseUp(): These will issue the corresponding mouse events. All of these methods can receive zero, one, or two parameters. Let's see what they mean with the following examples:
var Button = require('selenium-webdriver').Button;

// to emit the event in the center of the expresso element
driver.actions().mouseDown(expresso).perform();
// to make a right click in the current position
driver.actions().click(Button.RIGHT).perform();
// Middle click in the expresso element
driver.actions().click(expresso, Button.MIDDLE).perform();
The webdriver.Button object defines the three possible buttons of a mouse: LEFT, RIGHT, and MIDDLE. However, note that mouseDown() and mouseUp() only support the LEFT button! dragAndDrop(element, location): This is a shortcut to performing a drag-and-drop of the specified element to the specified location.
Again, the location can be a WebElement or a page-relative coordinate. Injecting scripts We can use WebDriver to execute scripts in the browser and then wait for their results. There are two methods for this: executeScript and executeAsyncScript. Both methods receive a script and an optional list of parameters and send the script and the parameters to the browser to be executed. They return a promise that will be fulfilled with the result of the script; it will be rejected if the script failed. An important detail is how the script and its parameters are sent to the browser. For this, they need to be serialized and sent through the network. Once there, they will be deserialized, and the script will be executed inside a self-executing function that will receive the parameters as arguments. As a result of this, our scripts cannot access any variable in our tests, unless they are explicitly sent as parameters. The script is executed in the browser with the window object as its execution context (the value of this). When passing parameters, we need to take into consideration the kind of data that WebDriver can serialize. This data includes the following: Booleans, strings, and numbers. The null and undefined values. However, note that undefined will be translated as null. Any function will be transformed to a string that contains only its body. A WebElement object will be received as a DOM element. So, it will not have the methods of WebElement but the standard DOM methods instead. Conversely, if the script results in a DOM element, it will be received as a WebElement in the test. Arrays and objects will be converted to arrays and objects whose elements and properties have been converted using the preceding rules.
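The function rule above can be illustrated with a small sketch: this is roughly the kind of transformation the serializer performs when a function is passed as a script. It is a simplification for illustration, not WebDriverJS source code:

```javascript
// Reduce a function to a string containing only its body,
// roughly what happens to a function passed to executeScript.
function functionBody(fn) {
  var source = fn.toString();
  return source
    .slice(source.indexOf("{") + 1, source.lastIndexOf("}"))
    .trim();
}

var script = function () {
  return document.title;
};

var body = functionBody(script);
// body → "return document.title;"
```

The resulting body string is what travels over the network, which is why the function cannot close over any variables from the test.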
With this in mind, we could, for example, retrieve the identifier of an element, such as the following one:
var elementSelector = ".order ul > li";
driver.executeScript(
    "return document.querySelector(arguments[0]).id;",
    elementSelector
).then(function(id) {
  expect(id).to.be.equal('order_item0');
});
Notice that the script is specified as a string with the code. This can be a bit awkward, so there is an alternative available:
var elementSelector = ".order ul > li";
driver.executeScript(function() {
    var selector = arguments[0];
    return document.querySelector(selector).id;
}, elementSelector).then(function(id) {
  expect(id).to.be.equal('order_item0');
});
WebDriver will just convert the body of the function to a string and send it to the browser. Since the script is executed in the browser, we cannot access the elementSelector variable, and we need to access it through parameters. Unfortunately, we are forced to retrieve the parameters using the arguments pseudoarray, because WebDriver has no way of knowing the name of each argument. As its name suggests, executeAsyncScript allows us to execute an asynchronous script. In this case, the last argument provided to the script is always a callback that we need to call to signal that the script has finalized. The result of the script will be the first argument provided to that callback. If no argument or undefined is explicitly provided, then the result will be null. Note that this is not directly compatible with the Node.js callback convention and that any extra parameters passed to the callback will be ignored. There is no way to explicitly signal an error in an asynchronous way.
For example, if we want to return the value of an asynchronous DAO, then proceed with the following code: driver.executeAsyncScript(function() {   var cb = arguments[1],       userId = arguments[0];   window.userDAO.findById(userId).then(cb, cb); }, 'user1').then(function(userOrError) {   expect(userOrError).to.be.equal(expectedUser); }); Command control flows All the commands in WebDriverJS are asynchronous and return a promise or WebElement. How do we execute an ordered sequence of commands? Well, using promises could be something like this: return driver.findElement({name:'quantity'}).sendKeys('23')     .then(function() {       return driver.findElement({name:'add'}).click();     })     .then(function() {       return driver.findElement({css:firstItemSel}).getText();     })     .then(function(quantity) {       expect(quantity).to.be.equal('23');     }); This works because we wait for each command to finish before issuing the next command. However, it is a bit verbose. Fortunately, with WebDriverJS we can do the following: driver.findElement({name:'quantity'}).sendKeys('23'); driver.findElement({name:'add'}).click(); return expect(driver.findElement({css:firstItemSel}).getText())     .to.eventually.be.equal('23'); How can the preceding code work? Because whenever we tell WebDriverJS to do something, it simply schedules the requested command in a queue-like structure called the control flow. The point is that each command will not be executed until it reaches the top of the queue. This way, we do not need to explicitly wait for the sendKeys command to be completed before executing the click command. The sendKeys command is scheduled in the control flow before click, so the latter one will not be executed until sendKeys is done. All the commands are scheduled against the same control flow queue that is associated with the WebDriver object. 
However, we can optionally create several control flows if we want to execute commands in parallel:
var flow1 = webdriver.promise.createFlow(function() {
  var driver = new webdriver.Builder().build();
  // do something with driver here
});
var flow2 = webdriver.promise.createFlow(function() {
  var driver = new webdriver.Builder().build();
  // do something with driver here
});
webdriver.promise.fullyResolved([flow1, flow2]).then(function(){
  // Wait for flow1 and flow2 to finish and do something
});
We need to create each control flow instance manually and, inside each flow, create a separate WebDriver instance. The commands in both flows will be executed in parallel, and we can wait for both of them to be finalized to do something else using fullyResolved. In fact, we can even nest flows if needed to create a custom parallel command-execution graph. Taking screenshots Sometimes, it is useful to take some screenshots of the current screen for debugging purposes. This can be done with the takeScreenshot() method. This method will return a promise that will be fulfilled with a string that contains a base-64 encoded PNG. It is our responsibility to save this string as a PNG file. The following snippet of code will do the trick:
driver.takeScreenshot()
    .then(function(shot) {
      fs.writeFileSync(fileFullPath, shot, 'base64');
    });
Note that not all browsers support this capability. Read the documentation for the specific browser adapter to see if it is available. Working with several tabs and frames WebDriver allows us to control several tabs, or windows, for the same browser. This can be useful if we want to test several pages in parallel or if our test needs to assert or manipulate things in several frames at the same time. This can be done with the switchTo() method that will return a webdriver.WebDriver.TargetLocator object. This object allows us to change the target of our commands to a specific frame or window.
It has the following three main methods: frame(nameOrIndex): This will switch to a frame with the specified name or index. It will return a promise that is fulfilled when the focus has been changed to the specified frame. If we specify the frame with a number, this will be interpreted as a zero-based index in the window.frames array. window(windowName): This will switch focus to the window named as specified. The returned promise will be fulfilled when it is done. alert(): This will switch the focus to the active alert window. We can dismiss an alert with driver.switchTo().alert().dismiss();. The promise returned by these methods will be rejected if the specified window, frame, or alert window is not found. To make tests on several tabs at the same time, we must ensure that they do not share any kind of state, or interfere with each other through cookies, local storage, or any other kind of mechanism. Summary This article showed us that a good way to test the UI of an application is actually to split it into two parts and test them separately. One part is the core logic of the UI that takes responsibility for control logic, models, calls to the server, validations, and so on. This part can be tested in a classic way, using BDD, and mocking the server access. No new techniques are needed for this, and the tests will be fast. Here, we can involve non-engineer stakeholders, such as UX designers, users, and so on, to write some nice BDD features using Gherkin and Cucumber.js. The other part is a thin view layer that follows a passive view design. It only updates the HTML when it is asked for, and listens to DOM events to transform them into requests to the core logic UI layer. This layer has no internal state or control rules; it simply transforms data and manipulates the DOM. We can use WebDriverJS to test the view.
This is a good approach because the most complex part of the UI can be fully test-driven easily, and the hard and slow parts to test the view do not need many tests since they are very simple. In this sense, the passive view should not have a state; it should only act as a proxy of the DOM. Resources for Article: Further resources on this subject: Dart With Javascript [article] Behavior-Driven Development With Selenium WebDriver [article] Event-Driven Programming [article]

Anna Gerber
12 Feb 2015
6 min read

Programming littleBits circuits with JavaScript Part 1

littleBits are electronic building blocks that snap together with magnetic connectors. They are great for getting started with electronics and robotics and for prototyping circuits. The littleBits Arduino Coding Kit includes an Arduino-compatible microcontroller, which means that you can use the Johnny-Five JavaScript Robotics programming framework to program your littleBits creations using JavaScript, the programming language of the web. Setup Plug the Arduino bit into your computer from the port at the top of the Arduino module. You'll need to supply power to the Arduino by connecting a blue power module to any of the input connectors. The Arduino will appear as a device with a name like /dev/cu.usbmodemfa131 on Mac, or COM3 on Windows. Johnny-Five uses a communication protocol called Firmata to communicate with the Arduino microcontroller. We'll load the Standard Firmata sketch onto the Arduino the first time we go to use it, to make this communication possible. Installing Firmata via the Chrome App One of the easiest ways to get started programming with Johnny-Five is by using this app for Google Chrome. After you have installed it, open the 'Johnny-Five Chrome' app from the Chrome apps page. To send the Firmata sketch to your board using the extension, select the port corresponding to your Arduino bit from the drop-down menu and then hit the Install Firmata button. If the device does not appear in the list at first, try the app's refresh button. Installing Firmata via the command line If you would prefer not to use the Chrome app, you can skip straight to using Node.js via the command line. You'll need a recent version of Node.js installed. Create a folder for your project's code. On a Mac run the Terminal app, and on Windows run Command Prompt. 
From the command line, change directory so you are inside your project folder, and then use npm to install the Johnny-Five library and nodebots-interchange:
npm install johnny-five
npm install -g nodebots-interchange
Use the interchange program from nodebots-interchange to send the StandardFirmata sketch to your Arduino:
interchange install StandardFirmata -a leonardo -p /dev/cu.usbmodemfa131
Note: If you are familiar with Arduino IDE, you could alternatively use it to write Firmata to your Arduino. Open File > Examples > Firmata > StandardFirmata and select your port and Arduino Leonardo from Tools > Board, then hit Upload. Inputs and Outputs Programming with hardware is all about I/O: inputs and outputs. These can be either analog (continuous values) or digital (discrete 0 or 1 values). littleBits input modules are color coded pink, while outputs are green. The Arduino Coding Kit includes analog inputs (dimmers) as well as a digital input module (button). The output modules included in the kit are a servo motor and an LED bargraph, which can be used as a digital output (i.e. on or off) or as an analog output to control the number of LEDs displayed, or with Pulse-Width-Modulation (PWM) - using a pattern of pulses on a digital output - to control LED brightness. Building a circuit Let's start with our output modules: the LED bargraph and servo. Connect a blue power module to any connector on the left-hand side of the Arduino. Connect the LED bargraph to the connector labelled d5 and the servo module to the connector labelled d9. Flick the switch next to both outputs to PWM. The mounting boards that come with the Arduino Coding Kit come in handy for holding your circuit together. Blinking an LED bargraph You can write the JavaScript program using the editor inside the Chrome app, or any text editor. We require the johnny-five library to create a board object with a "ready" handler.
Our code for working with inputs and outputs will go inside the ready handler so that it will run after the Arduino has started up and communication has been established:
var five = require("johnny-five");
var board = new five.Board();
board.on("ready", function() {
  // code for button, dimmers, servo etc goes here
});
We'll treat the bargraph like a single output. It's connected to digital "pin" 5 (d5), so we'll need to provide this with a parameter when we create the Led object. The strobe function causes the LED to blink on and off. The parameter to the function indicates the number of milliseconds between toggling the LED on or off (one second in this case):
var led = new five.Led(5);
led.strobe( 1000 );
Running the code Note: Make sure the power switch on your power module is switched on. If you are using the Chrome app, hit the Run button to start the program. You should see the LED bargraph start blinking. Any errors will be printed to the console below the code. If you have unplugged your Arduino since the last time you ran code via the app, you'll probably need to hit refresh and select the port for your device again from the drop-down above the code editor. The Chrome app is great for getting started, but eventually you'll want to switch to running programs using Node.js, because the Chrome app only supports a limited number of built-in libraries. Use a text editor to save your code to a file (e.g. blink.js) within your project directory, and run it from the command line using Node.js:
node blink.js
You can hit control-D on Windows or command-D on Mac to end the program. Controlling a Servo Johnny-Five includes a Servo class, but this is for controlling servo motors directly using PWM. The littleBits servo module already takes care of that for us, so we can treat it like a simple motor. Create a Motor object on pin 9 to correspond to the servo. We can start moving it using the start function.
The parameter is a number between 0 and 255, which controls the speed. The stop function stops the servo. We'll use the board's wait function to stop the servo after 5 seconds (i.e. 5000 milliseconds). var servo = new five.Motor(9); servo.start(255); this.wait(5000, function(){ servo.stop(); }); In Part 2, we'll read data from our littleBits input modules and use these values to trigger changes to the servo and bargraph. About the author Anna Gerber is a full-stack developer with 15 years of experience in the university sector. Specializing in Digital Humanities, she was a Technical Project Manager at the University of Queensland’s eResearch centre, and she has worked at Brisbane’s Distributed System Technology Centre as a Research Scientist. Anna is a JavaScript robotics enthusiast who enjoys tinkering with soft circuits and 3D printers.
Packt
09 Feb 2015
19 min read

Learning NServiceBus - Preparing for Failure

In this article by David Boike, author of the book Learning NServiceBus Second Edition, we will explore the tools that NServiceBus gives us to stare failure in the face and laugh. We'll discuss error queues, automatic retries, and controlling how those retries occur. We'll also discuss how to deal with messages that may be transient and should not be retried in certain conditions. Lastly, we'll examine the difficulty of web service integrations that do not handle retries cleanly on their own. (For more resources related to this topic, see here.) Fault tolerance and transactional processing In order to understand the fault tolerance we gain from using NServiceBus, let's first consider what happens without it. Let's order something from a fictional website and watch what might happen to process that order. On our fictional website, we add Batman Begins to our shopping cart and then click on the Checkout button. While our cursor is spinning, the following process is happening: Our web request is transmitted to the web server. The web application knows it needs to make several database calls, so it creates a new transaction scope. Database Call 1 of 3: The shopping cart information is retrieved from the database. Database Call 2 of 3: An Order record is inserted. Database Call 3 of 3: We attempt to insert OrderLine records, but instead get Error Message: Transaction (Process ID 54) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction. This exception causes the transaction to roll back. This process is shown in the following diagram: Ugh! If you're using SQL Server and you've never seen this, you haven't been coding long enough. It never happens during development; there just isn't enough load. It's even possible that this won't occur during load testing. It will likely occur during heavy load at the worst possible time, for example, right after your big launch. 
So obviously, we should log the error, right? But then what happens to the order? Well that's gone, and your boss may not be happy about losing that revenue. And what about our user? They will likely get a nasty error message. We won't want to divulge the actual exception message, so they will get something like, "An unknown error has occurred. The system administrator has been notified. Please try again later." However, the likelihood that they want to trust their credit card information to a website that has already blown up in their face once is quite low. So how can we do better? Here's how this scenario could have happened with NServiceBus:   The web request is transmitted to the web server. We add the shopping cart identifier to an NServiceBus command and send it through the Bus. We redirect the user to a new page that displays the receipt, even though the order has not yet been processed. Elsewhere, an Order service is ready to start processing a new message: The service creates a new transaction scope, and receives the message within the transaction. Database Call 1 of 3: The shopping cart information is retrieved from the database. Database Call 2 of 3: An Order record is inserted. Database Call 3 of 3: Deadlock! The exception causes the database transaction to roll back. The transaction controlling the message also rolls back. The order is back in the queue. This is great news! The message is back in the queue, and by default, NServiceBus will automatically retry this message a few times. Generally, deadlocks are a temporary condition, and simply trying again is all that is needed. After all, the SQL Server exception says Rerun the transaction. Meanwhile, the user has no idea that there was ever a problem. It will just take a little longer (in the order of milliseconds or seconds) to process the order. Error queues and replay Whenever you talk about automatic retries in a messaging environment, you must invariably consider poison messages. 
A poison message is a message that cannot be immediately resolved by a retry because it will consistently result in an error. A deadlock is a transient error. We can reasonably expect deadlocks and other transient errors to resolve by themselves without any intervention. Poison messages, on the other hand, cannot resolve themselves. Sometimes, this is because of an extended outage. At other times, it is purely our fault—an exception we didn't catch or an input condition we didn't foresee. Automatic retries If we retry poison messages in perpetuity, they will create a blockage in our incoming queue of messages. They will retry over and over, and valid messages will get stuck behind them, unable to make it through. For this reason, we must set a reasonable limit on retries, and after failing too many times, poison messages must be removed from the processing queue and stored someplace else. NServiceBus handles all of this for us. By default, NServiceBus will try to process a message five times, after which it will move the message to an error queue, configured by the MessageForwardingInCaseOfFaultConfig configuration section: <MessageForwardingInCaseOfFaultConfig ErrorQueue="error" /> It is in this error queue that messages will wait for administrative intervention. In fact, you can even specify a different server to collect these messages, which allows you to configure one central point in a system where you watch for and deal with all failures: <MessageForwardingInCaseOfFaultConfig ErrorQueue="error@SERVER" /> As mentioned previously, five failed attempts form the default metric for a failed message, but this is configurable via the TransportConfig configuration section: <section name="TransportConfig" type="NServiceBus.Config.TransportConfig, NServiceBus.Core" /> ... <TransportConfig MaxRetries="3" /> You could also generate the TransportConfig section using the Add-NServiceBusTransportConfig PowerShell cmdlet. 
Keep two things in mind: Depending upon how you read it, MaxRetries can be a somewhat confusing name. What it really means is the total number of tries, so a value of 5 will result in the initial attempt plus 4 retries. This has the odd side effect that MaxRetries="0" is the same as MaxRetries="1". In both instances, the message would be attempted once. During development, you may want to limit retries to MaxRetries="1" so that a single error doesn't cause a nausea-inducing wall of red that flushes your console window's buffer, leaving you unable to scroll up to see what came before. You can then enable retries in production by deploying the endpoint with a different configuration. Replaying errors What happens to those messages unlucky enough to fail so many times that they are unceremoniously dumped in an error queue? "I thought you said that Alfred would never give up on us!" you cry. As it turns out, this is just a temporary holding pattern that enables the rest of the system to continue functioning, while the errant messages await some sort of intervention, which can be human or automated based on your own business rules. Let's say our message handler divides two numbers from the incoming message, and we forget to account for the possibility that one of those numbers might be zero and that dividing by zero is frowned upon. At this point, we need to fix the error somehow. Exactly what you do will depend upon your business requirements: If the messages were sent in error, we can fix the code that was sending them. In this case, the messages in the error queue are junk and can be discarded. We can check the inputs on the message handler, detect the divide-by-zero condition, and make compensating actions. This may mean returning from the message handler, effectively discarding any divide-by-zero messages that are processed, or it may mean doing new work or sending new messages. 
In this case, we may want to replay the error messages after we have deployed the new code.

We may want to fix both the sending and receiving side.

Second-level retries

Automatically retrying error messages and sending repeated errors to an error queue is a pretty good strategy to manage both transient errors, such as deadlocks, and poison messages, such as an unrecoverable exception. However, as it turns out, there is a gray area in between, which is best referred to as semi-transient errors. These include incidents such as a web service being down for a few seconds, or a database being temporarily offline. Even with a SQL Server failover cluster, the failover procedure can take upwards of a minute depending on its size and traffic levels.

During a time like this, the automatic retries will be executed immediately, and great hordes of messages might go to the error queue, requiring an administrator to take notice and return them to their source queues. But is this really necessary? As it turns out, it is not. NServiceBus contains a feature called Second-Level Retries (SLR) that will add additional sets of retries after a wait. By default, the SLR will add three additional retry sessions, with an additional wait of 10 seconds each time. By contrast, the original set of retries is commonly referred to as First-Level Retries (FLR).

Let's track a message's full path to complete failure, assuming default settings:

1. Attempt to process the message five times, then wait for 10 seconds.
2. Attempt to process the message five times, then wait for 20 seconds.
3. Attempt to process the message five times, then wait for 30 seconds.
4. Attempt to process the message five times, and then send the message to the error queue.

Remember that by using five retries, NServiceBus attempts to process the message five times on every pass. Using second-level retries, almost every message should be able to be processed unless it is definitely a poison message that can never be successfully processed.
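Under these default settings, the totals are easy to compute. The following sketch is illustrative only; the retry_schedule function and its parameter names are invented, mirroring the five first-level attempts, three second-level rounds, and 10-second increment described above:

```python
# Illustrative arithmetic for the default NServiceBus retry schedule.

def retry_schedule(flr_attempts=5, slr_rounds=3, time_increase=10):
    """Return (total_attempts, total_wait_seconds) before the error queue."""
    # One initial round of first-level retries, plus one round per SLR pass.
    total_attempts = flr_attempts * (slr_rounds + 1)
    # Waits grow linearly: 10s, 20s, 30s with the defaults.
    total_wait = sum(time_increase * n for n in range(1, slr_rounds + 1))
    return total_attempts, total_wait
```

With the defaults, this gives 20 attempts and 60 seconds of waiting, which matches the chapter's observation that a true poison message spends an entire minute in retries before you ever see it in an error queue.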
Be warned, however, that using SLR has its downsides too. The first is ignorance of transient errors. If an error never makes it to an error queue and we never manually check the error logs, there's a chance we might miss it completely. For this reason, it is smart to always keep an eye on error logs. A random deadlock now and then is not a big deal, but if they happen all the time, it is probably still worth some work to improve the code so that the deadlock is not as frequent.

An additional risk lies in the time to process a true poison message through all the retry levels. Not accounting for any time taken to process the message itself 20 times or to wait for other messages in the queue, the use of second-level retries with the default settings results in an entire minute of waiting before you see the message in an error queue. If your business stakeholders require the message to either succeed or fail in 30 seconds, then you cannot possibly meet those requirements.

Due to the asynchronous nature of messaging, we should be careful never to assume that messages in a distributed system will arrive in any particular order. However, it is still good to note that the concept of retries exacerbates this problem. If Message A and then Message B are sent in order, and Message B succeeds immediately but Message A has to wait in an error queue for a while, then they will most certainly be processed out of order.

Luckily, second-level retries are completely configurable. The configuration element is shown here with the default settings:

<section name="SecondLevelRetriesConfig" type="NServiceBus.Config.SecondLevelRetriesConfig, NServiceBus.Core"/>
...
<SecondLevelRetriesConfig Enabled="true"
                          TimeIncrease="00:00:10"
                          NumberOfRetries="3" />

You could also generate the SecondLevelRetriesConfig section using the Add-NServiceBusSecondLevelRetriesConfig PowerShell cmdlet.
Keep in mind that you may want to disable second-level retries, like first-level retries, during development for convenience, and then enable them in production.

Messages that expire

Messages that lose their business value after a specific amount of time are an important consideration with respect to potential failures. Consider a weather reporting system that reports the current temperature every few minutes. How long is that data meaningful? Nobody seems to care what the temperature was 2 hours ago; they want to know what the temperature is now!

NServiceBus provides a method to cause messages to automatically expire after a given amount of time. Unlike storing this information in a database, you don't have to run any batch jobs or take any other administrative action to ensure that old data is discarded. You simply mark the message with an expiration date, and when that time arrives, the message simply evaporates into thin air:

[TimeToBeReceived("01:00:00")]
public class RecordCurrentTemperatureCmd : ICommand
{
    public double Temperature { get; set; }
}

This example shows that the message must be received within one hour of being sent, or it is simply deleted by the queuing system. NServiceBus isn't actually involved in the deletion at all; it simply tells the queuing system how long to allow the message to live. If a message fails, however, and arrives at an error queue, NServiceBus will not include the expiration date, in order to give you a chance to debug the problem. It would be very confusing to try to find an error message that had disappeared into thin air!

Another valuable use for this attribute is for high-volume message types, where a communication failure between servers or extended downtime could cause a huge backlog of messages to pile up either at the sending or the receiving side. Running out of disk space to store messages is a show-stopper for most message-queuing systems, and the TimeToBeReceived attribute is the way to guard against it.
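To make the expiration idea concrete, here is an illustrative sketch. In reality the queuing system does this pruning itself; the receive_fresh function and the (sent_at, payload) tuple layout are invented for the example:

```python
# Illustrative sketch: discard messages whose time-to-be-received has passed.
# The real work is done by the queuing system, not by application code.

def receive_fresh(queue, now, ttl_seconds=3600):
    """Return only payloads still within their TTL; drop the rest silently."""
    fresh = []
    for sent_at, payload in queue:
        if now - sent_at <= ttl_seconds:
            fresh.append(payload)  # still meaningful, e.g. a recent temperature
        # else: expired, so it simply evaporates, like a 2-hour-old reading
    return fresh
```

A temperature reading sent two hours ago never reaches the handler; only readings inside the one-hour window survive.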
However, this means we are throwing away data, so we need to be very careful when applying this strategy. It should not simply be used as a reaction to low disk space!

Auditing messages

At times, it can be difficult to debug a distributed system. Commands and events are sent all around, but after they are processed, they go away. We may be able to tell what will happen to a system in the future by examining queued messages, but how can we analyze what happened in the past?

For this reason, NServiceBus contains an auditing function that will enable an endpoint to send a copy of every message it successfully processes to a secondary location, a queue that is generally hosted on a separate server. This is accomplished by adding an attribute or two to the UnicastBusConfig section of an endpoint's configuration:

<UnicastBusConfig ForwardReceivedMessagesTo="audit@SecondaryServer"
                  TimeToBeReceivedOnForwardedMessages="1.00:00:00">
  <MessageEndpointMappings>
    <!-- Mappings go here -->
  </MessageEndpointMappings>
</UnicastBusConfig>

In this example, the endpoint will forward a copy of all successfully processed messages to a queue named audit on a server named SecondaryServer, and those messages will expire after one day. While it is not required to use the TimeToBeReceivedOnForwardedMessages parameter, it is highly recommended. Otherwise, it is possible (even likely) that messages will build up in your audit queue until you run out of available storage, which you would really like to avoid. The exact time limit you use is dependent upon the volume of messages in your system and how much storage your queuing system has available.

You don't even have to design your own tool to monitor these audit messages; the Particular Service Platform has that job covered for you. NServiceBus includes the auditing configuration in new endpoints by default so that ServiceControl, ServiceInsight, and ServicePulse can keep tabs on your system.
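Conceptually, auditing is just a second dispatch performed after a successful handler run. The following is a rough sketch with invented names (process_with_audit, audit_queue), not the actual NServiceBus pipeline:

```python
# Illustrative sketch of audit forwarding: after a message is processed
# successfully, a stamped copy is forwarded to an audit queue with its own TTL.

def process_with_audit(message, handler, audit_queue, now, audit_ttl=86400):
    """Handle a message; on success, forward a copy for auditing."""
    handler(message)  # may raise; failed messages are NOT audited
    # The copy carries its own expiry so the audit queue cannot grow without
    # bound (the role of TimeToBeReceivedOnForwardedMessages above).
    audit_queue.append({"payload": message, "expires_at": now + audit_ttl})
```

Note the ordering: the copy is forwarded only after the handler succeeds, so the audit queue records what actually happened rather than what was merely attempted.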
Web service integration and idempotence

When talking about managing failure, it's important to spend a few minutes discussing web services, because they are such a special case; they are just too good at failing. Compare this to sending an email: when the message is processed, the email either gets sent or it doesn't; there really aren't any in-between cases. In reality, when sending an email, it is technically possible that we could call the SMTP server, successfully send an email, and then the server could fail before we are able to finish marking the message as processed. However, in practice, this chance is so infinitesimal that we generally assume it to be zero. Even if it is not zero, we can assume in most cases that sending a user a duplicate email one time in a few million won't be the end of the world.

Web services are another story. There are just so many ways a web service can fail:

- A DNS or network failure may not let us contact the remote web server at all.
- The server may receive our request, but then throw an error before any state is modified on the server.
- The server may receive our request and successfully process it, but a communication problem prevents us from receiving the 200 OK response.
- The connection times out, thus ignoring any response the server may have been about to send us.

For this reason, it makes our lives a lot easier if all the web services we ever have to deal with are idempotent, which means a process that can be invoked multiple times with no adverse effects. Any service that queries data without modifying it is inherently idempotent. We don't have to worry about how many times we call a service if doing so doesn't change any data. Where we start to get into trouble is when we begin mutating state. Sometimes, we can modify state safely. Consider an example used previously regarding registering for alert notifications.
Let's assume that on the first try, the third-party service technically succeeds in registering our user for alerts, but it takes too long to do so and we receive a timeout error. When we retry, we ask to subscribe the email address to alerts again, and the web service call succeeds. What's the net effect? Either way, the user is subscribed for alerts. This web service satisfies idempotence.

The classic example of a non-idempotent web service is a credit card transaction processor. If the first attempt to authorize a credit card succeeds on the server and we retry, we may double charge our customer! This is not an acceptable business case, and you will quickly find many people angry with you. In these cases, we need to do a little work ourselves because, unfortunately, it's impossible for NServiceBus to know whether your web service is idempotent or not.

Generally, this work takes the form of recording each step we perform in durable storage in real time, and then querying that storage to see which steps have been attempted. In our example of credit card processing, the happy path approach would look like this:

1. Record our intent to make a web service call to durable storage.
2. Make the actual web service call.
3. Record the results of the web service call to durable storage.
4. Send commands or publish events with the results of the web service call.

Now, if the message is retried, we can inspect the durable storage and decide what step to jump to and whether any compensating actions need to be taken first. If we have recorded our intent to call the web service but do not see any evidence of a response, we can query the credit card processor based on an order or transaction identifier. Then we will know whether we need to retry the authorization or just get the results of the already completed authorization.
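The four happy-path steps above amount to an intent ledger. Here is a minimal sketch, assuming an in-memory dict stands in for durable storage; all names (charge_once, authorize, the ledger fields) are invented for illustration:

```python
# Illustrative sketch of guarding a non-idempotent call with an intent ledger.
# In a real system the ledger would be durable storage written OUTSIDE the
# message handler's ambient transaction.

def charge_once(order_id, ledger, authorize):
    """Authorize a charge at most once, surviving retries of the handler."""
    entry = ledger.setdefault(order_id, {"intent": False, "result": None})
    if entry["result"] is not None:
        return entry["result"]        # step 3 already completed: reuse result
    entry["intent"] = True            # 1. record intent BEFORE calling out
    result = authorize(order_id)      # 2. the non-idempotent call
    entry["result"] = result          # 3. record the outcome
    return result                     # 4. safe to emit messages from here
```

If the handler is retried after step 3, the recorded result is returned without charging the card again; if it died between steps 1 and 3, the recorded intent is the cue to query the processor before retrying.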
If we see that we have already made the web service call and received the results, then we know that the web service call was successful but some exception happened before the resulting messages could be sent. In response, we can just take the results and send the messages without requiring any further web service invocations.

It's important to be able to handle the case where our durable storage throws an exception, rendering us unable to make our state persist. This is why it's so important to record the intent to do something before attempting it—so that we know the difference between never having done something and attempting it but not necessarily knowing the results.

The process we have just discussed is admittedly a bit abstract, and is much easier to visualize as a flowchart (the diagram is not reproduced here).

The choice of durable storage strategy for this process is up to you. If you choose to use a database, however, you must remember to exempt it from the message handler's ambient transaction, or those changes will also get rolled back if and when the handler fails. In order to escape the transaction to write to durable storage, use a new TransactionScope object to suppress the transaction, like this:

public void Handle(CallNonIdempotentWebServiceCmd cmd)
{
  // Under control of ambient transaction

  using (var ts = new TransactionScope(TransactionScopeOption.Suppress))
  {
    // Not under transaction control
    // Write updates to durable storage here
    ts.Complete();
  }

  // Back under control of ambient transaction
}

Summary

In this article, we considered the inevitable failure of our software and how NServiceBus can help us to be prepared for it. You learned how NServiceBus promises fault tolerance within every message handler so that messages are never dropped or forgotten, but instead retried and then held in an error queue if they cannot be successfully processed.
Once we fix the error, or take some other administrative action, we can replay those messages. In order to avoid flooding our system with useless messages during a failure, you learned how to cause messages that lose their business value after a specific amount of time to expire. Finally, you learned how to build auditing in a system by forwarding a copy of all messages for later inspection, and how to properly deal with the challenges involved in calling external web services. In this article, we dealt exclusively with NServiceBus endpoints hosted by the NServiceBus Host process.
Mike Ball
09 Feb 2015
6 min read

Fronting an external API with Ruby on Rails: Part 1

Historically, a conventional Ruby on Rails application leverages server-side business logic, a relational database, and a RESTful architecture to serve dynamically-generated HTML. JavaScript-intensive applications and the widespread use of external web APIs, however, somewhat challenge this architecture. In many cases, Rails is tasked with performing as an orchestration layer, collecting data from various backend services and serving re-formatted JSON or XML to clients. In such instances, how is Rails' model-view-controller architecture still relevant?

In this two-part post series, we'll create a simple Rails backend that makes requests to an external XML-based web service and serves JSON. We'll use RSpec for tests and Jbuilder for view rendering.

What are we building?

We'll create Noterizer, a simple Rails application that requests XML from externally hosted endpoints and re-renders the XML data as JSON at a single URL. To assist in this post, I've created NotesXmlService, a basic web application that serves two XML-based endpoints:

http://NotesXmlService.herokuapp.com/note-one
http://NotesXmlService.herokuapp.com/note-two

Why is this necessary in a real-world scenario? Fronting external endpoints with an application like Noterizer opens up a few opportunities:

- Noterizer's endpoint could serve JavaScript clients who can't perform HTTP requests across domain names to the original, external API.
- Noterizer's endpoint could reformat the externally hosted data to better serve its own clients' data formatting preferences.
- Noterizer's endpoint is a single interface to the data; multiple requests are abstracted away by its backend.
- Noterizer provides caching opportunities. While it's beyond the scope of this series, Rails can cache external request data, thus offloading traffic to the external API and avoiding any terms of service or rate limit violations imposed by the external service.

Setup

For this series, I'm using Mac OS 10.9.4, Ruby 2.1.2, and Rails 4.1.4.
I'm assuming some basic familiarity with Git and the command line.

Clone and set up the repo

I've created a basic Rails 4 Noterizer app. Clone its repo, enter the project directory, and check out its tutorial branch:

$ git clone http://github.com/mdb/noterizer && cd noterizer && git checkout tutorial

Install its dependencies:

$ bundle install

Set up the test framework

Let's install RSpec for testing. Add the following to the project's Gemfile:

gem 'rspec-rails', '3.0.1'

Install rspec-rails:

$ bundle install

There's now an rspec generator available for the rails command. Let's generate a basic RSpec installation:

$ rails generate rspec:install

This creates a few new files in a spec directory:

├── spec
│   ├── rails_helper.rb
│   └── spec_helper.rb

We're going to make a few adjustments to our RSpec installation. First, because Noterizer does not use a relational database, delete the following ActiveRecord reference in spec/rails_helper.rb:

# Checks for pending migrations before tests are run.
# If you are not using ActiveRecord, you can remove this line.
ActiveRecord::Migration.maintain_test_schema!

Next, configure RSpec to be less verbose in its warning output; such verbose warnings are beyond the scope of this series. Remove the following line from .rspec:

--warnings

The RSpec installation also provides a spec rake task. Test this by running the following:

$ rake spec

You should see the following output, as there aren't yet any RSpec tests:

No examples found.

Finished in 0.00021 seconds (files took 0.0422 seconds to load)
0 examples, 0 failures

Note that a default Rails installation assumes tests live in a test directory. RSpec uses a spec directory. For clarity's sake, you're free to delete the test directory from Noterizer.

Building a basic route and controller

Currently, Noterizer does not have any URLs; we'll create a single /notes URL route.
Creating the controller

First, generate a controller:

$ rails g controller notes

Note that this created quite a few files, including JavaScript files, stylesheet files, and a helpers module. These are not relevant to our NotesController, so let's undo our controller generation by removing all untracked files from the project. Note that you'll want to commit any changes you do want to preserve.

$ git clean -f

Now, open config/application.rb and add the following generator configuration:

config.generators do |g|
  g.helper false
  g.assets false
end

Re-running the generate command will now create only the desired files:

$ rails g controller notes

Testing the controller

Let's add a basic NotesController#index test to spec/controllers/notes_controller_spec.rb. The test looks like this:

require 'rails_helper'

describe NotesController, :type => :controller do
  describe '#index' do
    before :each do
      get :index
    end

    it 'successfully responds to requests' do
      expect(response).to be_success
    end
  end
end

This test currently fails when running rake spec, as we haven't yet created a corresponding route. Add the following route to config/routes.rb:

get 'notes' => 'notes#index'

The test still fails when running rake spec, because there isn't a proper #index controller action. Create an empty index method in app/controllers/notes_controller.rb:

class NotesController < ApplicationController
  def index
  end
end

rake spec still yields failing tests, this time because we haven't yet created a corresponding view. Let's create a view:

$ touch app/views/notes/index.json.jbuilder

To use this view, we'll need to tweak the NotesController a bit. Let's ensure that requests to the /notes route always return JSON via a before_filter run before each controller action:

class NotesController < ApplicationController
  before_filter :force_json

  def index
  end

  private

  def force_json
    request.format = :json
  end
end

Now, rake spec yields passing tests:

$ rake spec
.
Finished in 0.0107 seconds (files took 1.09 seconds to load)
1 example, 0 failures

Let's write one more test, asserting that the response returns the correct content type. Add the following to spec/controllers/notes_controller_spec.rb:

it 'returns JSON' do
  expect(response.content_type).to eq 'application/json'
end

Assuming rake spec confirms that the second test passes, you can also run the Rails server via the rails server command and visit the currently-empty Noterizer http://localhost:3000/notes URL in your web browser.

Conclusion

In this first part of the series we have created the basic route and controller for Noterizer, which is a basic example of a Rails application that fronts an external API. In the next blog post (Part 2), you will learn how to build out the backend, test the model, build up and test the controller, and also test the app with JBuilder.

About this Author

Mike Ball is a Philadelphia-based software developer specializing in Ruby on Rails and JavaScript. He works for Comcast Interactive Media, where he helps build web-based TV and video consumption applications.
Packt
09 Feb 2015
15 min read

Managing local environments

In this article by Juampy Novillo Requena, author of Drush for Developers, Second Edition, we will learn that Drush site aliases offer a useful way to manage local environments without having to be within Drupal's root directory. (For more resources related to this topic, see here.)

A site alias consists of an array of settings for Drush to access a Drupal project. They can be defined in different locations, using various file structures. You can find all of its variations at drush topic docs-aliases. In this article, we will use the following variations:

- We will define local site aliases at $HOME/.drush/aliases.drushrc.php, which are accessible anywhere for our command-line user.
- We will define a group of site aliases to manage the development and production environments of our sample Drupal project. These will be defined at sites/all/drush/example.aliases.drushrc.php.

In the following example, we will use the site-alias command to generate a site alias definition for our sample Drupal project:

$ cd /home/juampy/projects/example
$ drush --uri=example.local site-alias --alias-name=example.local @self
$aliases["example.local"] = array (
  'root' => '/home/juampy/projects/example',
  'uri' => 'example.local',
  '#name' => 'self',
);

The preceding command printed an array structure for the $aliases variable. You can see the root and uri options. There is also an internal property called #name that we can ignore. Now, we will place the preceding output at $HOME/.drush/aliases.drushrc.php so that we can invoke Drush commands to our local Drupal project from anywhere in the command-line interface:

<?php

/**
 * @file
 * User-wide site alias definitions.
 *
 * Site aliases defined here are available everywhere for the current user.
 */

// Sample Drupal project.
$aliases["example.local"] = array (
  'root' => '/home/juampy/projects/example',
  'uri' => 'example.local',
);

Here is how we use this site alias in a command.
The following example is running the core-status command for our sample Drupal project:

$ cd /home/juampy
$ drush @example.local core-status
Drupal version        : 7.29-dev
Site URI              : example.local
Database driver       : mysql
Database username     : root
Database name         : drupal7x
Database              : Connected
...
Drush alias files     : /home/juampy/.drush/aliases.drushrc.php
Drupal root           : /home/juampy/projects/example
Site path             : sites/default
File directory path   : sites/default/files

Drush loaded our site alias file and used the root and uri options defined in it to find and bootstrap Drupal. The preceding command is equivalent to the following one:

$ drush --root=/home/juampy/projects/example --uri=example.local core-status

While $HOME/.drush/aliases.drushrc.php is a good place to define site aliases in your local environment, /etc/drush is a first class directory to place site aliases in servers. Let's discover now how we can connect to remote environments via Drush.

Managing remote environments

Site aliases that reference remote websites can be accessed by Drush through a password-less SSH connection (http://en.wikipedia.org/wiki/Secure_Shell). Before we start with these, let's make sure that we meet the requirements.

Verifying requirements

First, it is recommended to install the same version of Drush in all the servers that host your website. Drush will fail to run a command if it is not installed in the remote machine except for core-rsync, which runs rsync, a non-Drush command that is available in Unix-like systems.
If you can already access the server that hosts your Drupal project through a public key, then skip to the next section. If not, you can either use the pushkey command from Drush extras (https://www.drupal.org/project/drush_extras), or continue reading to set it up manually.

Accessing a remote server through a public key

The first thing that we need to do is generate a public key for our command-line user in our local machine. Open the command-line interface and execute the following command. We will explain the output step by step:

$ cd $HOME
$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/juampy/.ssh/id_rsa):

By default, SSH keys are created at $HOME/.ssh/. It is fine to go ahead with the suggested path in the preceding prompt; so, let's hit Enter and continue:

Created directory '/home/juampy/.ssh'.
Enter passphrase (empty for no passphrase): *********
Enter same passphrase again: *********

If the .ssh directory does not exist for the current user, the ssh-keygen command will create it with the correct permissions. We are next prompted to enter a passphrase. It is highly recommended to set one as it makes our private key safer. Here is the rest of the output once we have entered a passphrase:

Your identification has been saved in /home/juampy/.ssh/id_rsa.
Your public key has been saved in /home/juampy/.ssh/id_rsa.pub.
The key fingerprint is:
6g:bf:3j:a2:00:03:a6:00:e1:43:56:7a:a0:c7:e9:f3 juampy@juampy-box
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|                 |
|..               |
|o..*             |
|o + . . S        |
| + * = . .       |
|  = O o . .      |
|   *.o * . .     |
|    .oE oo.      |
+-----------------+

The result is a new hidden directory under our $HOME path named .ssh. This directory contains a private key file (id_rsa) and a public key file (id_rsa.pub).
The former is to be kept secret by us, while the latter is the one we will copy into remote servers where we want to gain access. Now that we have a public key, we will announce it to the SSH agent so that it can be used without having to enter the passphrase every time:

$ ssh-add ~/.ssh/id_rsa
Identity added: /home/juampy/.ssh/id_rsa (/home/juampy/.ssh/id_rsa)

Our key is ready to be used. Assuming that we know an SSH username and password to access the server that hosts the development environment of our website, we will now copy our public key into it. In the following command, replace exampledev and dev.example.com with the username and server's URL of your server:

$ ssh-copy-id [email protected]
[email protected]'s password:
Now try logging into the machine, with "ssh '[email protected]'", and check in:

~/.ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.

Our public key has been copied to the server, and now we do not need to enter a password to identify ourselves anymore when we log in to it. We could have logged on to the server ourselves and manually copied the key, but the benefit of using the ssh-copy-id command is that it takes care of setting the right permissions to the ~/.ssh/authorized_keys file. Let's test it by logging in to the server:
Defining a group of remote site aliases for our project

Before diving into the specifics of how to define a Drush site alias, let's assume the following scenario: you are part of a development team working on a project that has two environments, each one located in its own server:

- Development, which holds the bleeding edge version of the project's codebase. It can be reached at http://dev.example.com.
- Production, which holds the latest stable release and real data. It can be reached at http://www.example.com.

Additionally, there might be a variable amount of local environments for each developer in their working machines; although, these do not need a site alias.

Given the preceding scenario and assuming that we have SSH access to the development and production servers, we will create a group of site aliases that identify them. We will define this group at sites/all/drush/example.aliases.drushrc.php within our Drupal project:

<?php
/**
 * @file
 *
 * Site alias definitions for Example project.
 */

// Development environment.
$aliases['dev'] = array(
  'root' => '/var/www/exampledev/docroot',
  'uri' => 'dev.example.com',
  'remote-host' => 'dev.example.com',
  'remote-user' => 'exampledev',
);

// Production environment.
$aliases['prod'] = array(
  'root' => '/var/www/exampleprod/docroot',
  'uri' => 'www.example.com',
  'remote-host' => 'prod.example.com',
  'remote-user' => 'exampleprod',
);

The preceding file defines two arrays for the $aliases variable keyed by the environment name. Drush will find this group of site aliases when being invoked from the root of our Drupal project. There are many more settings available, which you can find by reading the contents of the drush topic docs-aliases command. These site aliases contain options known to us: root and uri refer to the remote root path and the hostname of the remote Drupal project. There are also two new settings: remote-host and remote-user.
The former defines the URL of the server hosting the website, while the latter is the user to authenticate Drush when connecting via SSH. Now that we have a group of Drush site aliases to work with, the following section will cover some examples using them.

Using site aliases in commands

Site aliases prepend a command name for Drush to bootstrap the site and then run the command there. Our site aliases are @example.dev and @example.prod. The word example comes from the filename example.aliases.drushrc.php, while dev and prod are the two keys that we added to the $aliases array. Let's see them in action with a few command examples.

Check the status of the Development environment:

$ cd /home/juampy/projects/example
$ drush @example.dev status
Drupal version        : 7.26
Site URI              : http://dev.example.com
Database driver       : mysql
Database username     : exampledev
Drush temp directory  : /tmp
...
Drush alias files     : /home/juampy/projects/example/sites/all/drush/example.aliases.drushrc.php
Drupal root           : /var/www/exampledev/docroot
...

The preceding output shows the current status of our development environment. Drush sent the command via SSH to our development environment and rendered back the resulting output. Most Drush commands support site aliases. Let's see the next example.

Log in to the development environment and copy all the files from the files directory located at the production environment:

$ drush @example.dev site-ssh
Welcome to example.dev server!
$ cd `drush @example.dev drupal-directory`
$ drush core-rsync @example.prod:%files @self:%files
You will destroy data from /var/www/exampledev/docroot/sites/default/files
and replace with data from exampleprod@prod.example.com:/var/www/exampleprod/docroot/sites/default/files/
Do you really want to continue? (y/n): y

Note the use of @self in the preceding command, which is a special Drush site alias that represents the current Drupal project where we are located. We are using @self instead of @example.dev because we are already logged in to the development environment. Now, we will move on to the next example. Open a connection with the Development environment's database:

$ drush @example.dev sql-cli
Welcome to the MySQL monitor.  Commands end with ; or \g.
mysql> select database();
+------------+
| database() |
+------------+
| exampledev |
+------------+
1 row in set (0.02 sec)

The preceding command is identical to the following set of commands:

drush @example.dev site-ssh
cd /var/www/exampledev
drush sql-cli

However, Drush is so clever that it opens the connection for us. Isn't this neat? This is one of the commands I use most frequently. Let's finish by looking at our last example. Log in as the administrator user in production:

$ drush @example.prod user-login
http://www.example.com/user/reset/1/some-long-token/login
Created new window in existing browser session.

The preceding command creates a login URL and attempts to open your default browser with it. I love Drush!

Summary

In this article, we covered practical examples with site aliases. We started by defining a site alias for our local Drupal project, and then went on to write a group of site aliases to manage remote environments for a hypothetical Drupal project with a development and production site. Before using site aliases for our remote environments, we covered the basics of setting up SSH in order for Drush to connect to these servers and run commands there.
Packt
09 Feb 2015
40 min read

Advanced Less Coding

In this article by Bass Jobsen, author of the book Less Web Development Cookbook, you will learn:

Giving your rules importance with the !important statement
Using mixins with multiple parameters
Using duplicate mixin names
Building a switch leveraging argument matching
Avoiding individual parameters to leverage the @arguments variable
Using the @rest... variable to use mixins with a variable number of arguments
Using mixins as functions
Passing rulesets to mixins
Using mixin guards (as an alternative for the if…else statements)
Building loops leveraging mixin guards
Applying guards to the CSS selectors
Creating color contrasts with Less
Changing the background color dynamically
Aggregating values under a single property

(For more resources related to this topic, see here.)

Giving your rules importance with the !important statement

The !important statement in CSS can be used to get some style rules always applied, no matter where that rule appears in the CSS code. In Less, the !important statement can be applied with mixins and variable declarations too. Getting ready You can write the Less code for this recipe with your favorite editor. After that, you can use the command-line lessc compiler to compile the Less code. Finally, you can inspect the compiled CSS code to see where the !important statements appear. To see the real effect of the !important statements, you should compile the Less code client side, with the client-side compiler less.js, and watch the effect in your web browser.
How to do it… Create an important.less file that contains code like the following snippet:

.mixin() {
  color: red;
  font-size: 2em;
}
p {
  &.important {
    .mixin() !important;
  }
  &.unimportant {
    .mixin();
  }
}

After compiling the preceding Less code with the command-line lessc compiler, you will find the following code output produced in the console:

p.important {
  color: red !important;
  font-size: 2em !important;
}
p.unimportant {
  color: red;
  font-size: 2em;
}

You can, for instance, use the following snippet of HTML code to see the effect of the !important statements in your browser:

<p class="important" style="color:green;font-size:4em;">important</p>
<p class="unimportant" style="color:green;font-size:4em;">unimportant</p>

Your HTML document should also include the important.less and less.js files, as follows:

<link rel="stylesheet/less" type="text/css" href="important.less">
<script src="less.js" type="text/javascript"></script>

Finally, the result will look like that shown in the following screenshot: How it works… In Less, you can use the !important statement not only for properties, but also with mixins. When !important is set for a certain mixin, all properties of this mixin will be declared with the !important statement. You can easily see this effect when inspecting the properties of the p.important selector: both the color and font-size properties got the !important statement after compiling the code. There's more… You should use the !important statements with care, as the only way to overrule an !important statement is to use another !important statement. The !important statement overrules the normal CSS cascading, specificity rules, and even inline styles. Any incorrect or unnecessary use of the !important statements in your Less (or CSS) code will make your code messy and difficult to maintain.
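Before adding yet another !important statement to win such a battle, it is often enough to use a selector with higher specificity. The following sketch (with hypothetical selectors, not part of the recipe) shows an override that relies on the normal cascading rules instead:

```less
p.important { color: red; }
// Two classes and one element (specificity 0,2,1) overrule
// one class and one element (0,1,1) without any !important.
.content p.important { color: blue; }
```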
In most cases where you try to overrule a style rule, you should give preference to selectors with a higher specificity and not use the !important statements at all. With Less V2, you can also use the !important statement when declaring your variables. A declaration with the !important statement can look like the following code: @main-color: darkblue !important; Using mixins with multiple parameters In this section, you will learn how to use mixins with more than one parameter. Getting ready For this recipe, you will have to create a Less file, for instance, mixins.less. You can compile this mixins.less file with the command-line lessc compiler. How to do it… Create the mixins.less file and write down the following Less code into it: .mixin(@color; @background: black;) { background-color: @background; color: @color; } div { .mixin(red; white;); } Compile the mixins.less file by running the command shown in the console, as follows: lessc mixins.less Inspect the CSS code output on the console, and you will find that it looks like that shown, as follows: div { background-color: #ffffff; color: #ff0000; } How it works… In Less, parameters are either semicolon-separated or comma-separated. Using a semicolon as a separator will be preferred because the usage of the comma will be ambiguous. The comma separator is not used only to separate parameters, but is also used to define a csv list, which can be an argument itself. The mixin in this recipe accepts two arguments. The first parameter sets the @color variable, while the second parameter sets the @background variable and has a default value that has been set to black. In the argument list, the default values are defined by writing a colon behind the variable's name, followed by the value. Parameters with a default value are optional when calling the mixins. 
So the .mixin() mixin in this recipe can also be called with the following line of code:

.mixin(red);

Because the second argument has a default value set to black, the .mixin(red); call also matches the .mixin(@color; @background:black){} mixin, as described in the Building a switch leveraging argument matching recipe. Only variables set as parameters of a mixin are set inside the scope of the mixin. You can see this when compiling the following Less code:

.mixin(@color:blue){ color2: @color; }
@color: red;
div {
  color1: @color;
  .mixin;
}

The preceding Less code compiles into the following CSS code:

div {
  color1: #ff0000;
  color2: #0000ff;
}

As you can see in the preceding example, setting @color inside the mixin to its default value does not influence the value of @color assigned in the main scope. So lazy loading is applied only to variables inside the same scope; nevertheless, you will have to note that variables assigned in a mixin will leak into the caller. The leaking of variables can be used to use mixins as functions, as described in the Using mixins as functions recipe. There's more… Consider the mixin definition in the following Less code:

.mixin(@font-family: "Helvetica Neue", Helvetica, Arial, sans-serif;) {
  font-family: @font-family;
}

The semicolon added at the end of the list prevents the fonts after the "Helvetica Neue" font name in the csv list from being read as arguments for this mixin. If the argument list contains any semicolon, the Less compiler will use semicolons as the separator. In the CSS3 specification, among others, the border and background shorthand properties accept csv. Also, note that the Less compiler allows you to use named parameters when calling mixins.
This can be seen in the following Less code that uses the @color variable as a named parameter: .mixin(@width:50px; @color: yellow) { width: @width; color: @color; } span { .mixin(@color: green); } The preceding Less code will compile into the following CSS code: span { width: 50px; color: #008000; } Note that in the preceding code, #008000 is the hexadecimal representation for the green color. When using the named parameters, their order does not matter. Using duplicate mixin names When your Less code contains one or more mixins with the same name, the Less compiler compiles them all into the CSS code. If the mixin has parameters (see the Building a switch leveraging argument matching recipe) the number of parameters will also match. Getting ready Use your favorite text editor to create and edit the Less files used in this recipe. How to do it… Create a file called mixins.less that contains the following Less code: .mixin(){ height:50px; } .mixin(@color) { color: @color; }   .mixin(@width) { color: green; width: @width; }   .mixin(@color; @width) { color: @color; width: @width; }   .selector-1 { .mixin(red); } .selector-2 { .mixin(red; 500px); } Compile the Less code from step 1 by running the following command in the console: lessc mixins.less After running the command from the previous step, you will find the following Less code output on the console: .selector-1 { color: #ff0000; color: green; width: #ff0000; } .selector-2 { color: #ff0000; width: 500px; } How it works… The .selector-1 selector contains the .mixin(red); call. The .mixin(red); call does not match the .mixin(){}; mixin as the number of arguments does not match. On the other hand, both .mixin(@color){}; and .mixin(@width){}; match the color. For this reason, these mixins will compile into the CSS code. 
The .mixin(red; 500px); call inside the .selector-2 selector will match only the .mixin(@color; @width){}; mixin, so all other mixins with the same .mixin name will be ignored by the compiler when building the .selector-2 selector. The compiled CSS code for the .selector-1 selector also contains the width: #ff0000; property value as the .mixin(@width){}; mixin matches the call too. Setting the width property to a color value makes no sense in CSS as the Less compiler does not check for this kind of errors. In this recipe, you can also rewrite the .mixin(@width){}; mixin, as follows: .mixin(@width) when (ispixel(@width)){};. There's more… Maybe you have noted that the .selector-1 selector contains two color properties. The Less compiler does not remove duplicate properties unless the value also is the same. The CSS code sometimes should contain duplicate properties in order to provide a fallback for older browsers. Building a switch leveraging argument matching The Less mixin will compile into the final CSS code only when the number of arguments of the caller and the mixins match. This feature of Less can be used to build switches. Switches enable you to change the behavior of a mixin conditionally. In this recipe, you will create a mixin, or better yet, three mixins with the same name. Getting ready Use the command-line lessc compiler to evaluate the effect of this mixin. The compiler will output the final CSS to the console. You can use your favorite text editor to edit the Less code. This recipe makes use of browser-vendor prefixes, such as the -ms-transform prefix. CSS3 introduced vendor-specific rules, which offer you the possibility to write some additional CSS, applicable for only one browser. These rules allow browsers to implement proprietary CSS properties that would otherwise have no working standard (and might never actually become the standard). 
To find out which prefixes should be used for a certain property, you can consult the Can I use database (available at http://caniuse.com/). How to do it… Create a switch.less Less file, and write down the following Less code into it: @browserversion: ie9; .mixin(ie9; @degrees){ transform:rotate(@degrees); -ms-transform:rotate(@degrees); -webkit-transform:rotate(@degrees); } .mixin(ie10; @degrees){ transform:rotate(@degrees); -webkit-transform:rotate(@degrees); } .mixin(@_; @degrees){ transform:rotate(@degrees); } div { .mixin(@browserversion; 70deg); } Compile the Less code from step 1 by running the following command in the console: lessc switch.less Inspect the compiled CSS code that has been output to the console, and you will find that it looks like the following code: div { -ms-transform: rotate(70deg); -webkit-transform: rotate(70deg); transform: rotate(70deg); } Finally, run the following command and you will find that the compiled CSS wll indeed differ from that of step 2: lessc --modify-var="browserversion=ie10" switch.less Now the compiled CSS code will look like the following code snippet: div { -webkit-transform: rotate(70deg); transform: rotate(70deg); } How it works… The switch in this recipe is the @browserversion variable that can easily be changed just before compiling your code. Instead of changing your code, you can also set the --modify-var option of the compiler. Depending on the value of the @browserversion variable, the mixins that match will be compiled, and the other mixins will be ignored by the compiler. The .mixin(ie10; @degrees){} mixin matches the .mixin(@browserversion; 70deg); call only when the value of the @browserversion variable is equal to ie10. Note that the first ie10 argument of the mixin will be used only for matching (argument = ie10) and does not assign any value. You will note that the .mixin(@_; @degrees){} mixin will match each call no matter what the value of the @browserversion variable is. 
The .mixin(ie9,70deg); call also compiles the .mixin(@_; @degrees){} mixin. Although this should result in the transform: rotate(70deg); property output twice, you will find only one. Since the property got exactly the same value twice, the compiler outputs the property only once. There's more… Not only switches, but also mixin guards, as described in the Using mixin guards (as an alternative for the if…else statements) recipe, can be used to set some properties conditionally. Current versions of Less also support JavaScript evaluating; JavaScript code put between back quotes will be evaluated by the compiler, as can be seen in the following Less code example: @string: "example in lower case"; p { &:after { content: "`@{string}.toUpperCase()`"; } } The preceding code will be compiled into CSS, as follows: p:after { content: "EXAMPLE IN LOWER CASE"; } When using client-side compiling, JavaScript evaluating can also be used to get some information from the browser environment, such as the screen width (screen.width), but as mentioned already, you should not use client-side compiling for production environments. Because you can't be sure that future versions of Less still support JavaScript evaluating, and alternative compilers not written in JavaScript cannot evaluate the JavaScript code, you should always try to write your Less code without JavaScript. Avoiding individual parameters to leverage the @arguments variable In the Less code, the @arguments variable has a special meaning inside mixins. The @arguments variable contains all arguments passed to the mixin. In this recipe, you will use the @arguments variable together with the the CSS url() function to set a background image for a selector. Getting ready You can inspect the compiled CSS code in this recipe after compiling the Less code with the command-line lessc compiler. Alternatively, you can inspect the results in your browser using the client-side less.js compiler. 
When inspecting the result in your browser, you will also need an example image that can be used as a background image. Use your favorite text editor to create and edit the Less files used in this recipe. How to do it… Create a background.less file that contains the following Less code: .background(@color; @image; @repeat: no-repeat; @position:   top right;) { background: @arguments; }   div { .background(#000; url("./images/bg.png")); width:300px; height:300px; } Finally, inspect the compiled CSS code, and you will find that it will look like the following code snippet: div { background: #000000 url("./images/bg.png") no-repeat top     right; width: 300px; height: 300px; } How it works… The four parameters of the .background() mixin are assigned as a space-separated list to the @arguments variable. After that, the @arguments variable can be used to set the background property. Also, other CSS properties accept space-separated lists, for example, the margin and padding properties. Note that the @arguments variable does not contain only the parameters that have been set explicit by the caller, but also the parameters set by their default value. You can easily see this when inspecting the compiled CSS code of this recipe. The .background(#000; url("./images/bg.png")); caller doesn't set the @repeat or @position argument, but you will find their values in the compiled CSS code. Using the @rest... variable to use mixins with a variable number of arguments As you can also see in the Using mixins with multiple parameters and Using duplicate mixin names recipes, only matching mixins are compiled into the final CSS code. In some situations, you don't know the number of parameters or want to use mixins for some style rules no matter the number of parameters. In these situations, you can use the special ... syntax or the @rest... variable to create mixins that match independent of the number of parameters. 
Getting ready You will have to create a file called rest.less, and this file can be compiled with the command-line lessc compiler. You can edit the Less code with your favorite editor. How to do it… Create a file called rest.less that contains the following Less code: .mixin(@a...) { .set(@a) when (iscolor(@a)) {    color: @a; } .set(@a) when (length(@a) = 2) {    margin: @a; } .set(@a); } p{ .mixin(red); } p { .mixin(2px;4px); } Compile the rest.less file from step 1 using the following command in the console: lessc rest.less Inspect the CSS code output to the console that will look like the following code: p { color: #ff0000; } p { margin: 2px 4px; } How it works… The special ... syntax (three dots) can be used as an argument for a mixin. Mixins with the ... syntax in their argument list match any number of arguments. When you put a variable name starting with an @ in front of the ... syntax, all parameters are assigned to that variable. You will find a list of examples of mixins that use the special ... syntax as follows: .mixin(@a; ...){}: This mixin matches 1-N arguments .mixin(...){}: This mixin matches 0-N arguments; note that mixin() without any argument matches only 0 arguments .mixin(@a: 1; @rest...){}: This mixin matches 0-N arguments; note that the first argument is assigned to the @a variable, and all other arguments are assigned as a space-separated list to @rest Because the @rest... variable contains a space-separated list, you can use the Less built-in list function. Using mixins as functions People who are used to functional programming expect a mixin to change or return a value. In this recipe, you will learn to use mixins as a function that returns a value. In this recipe, the value of the width property inside the div.small and div.big selectors will be set to the length of the longest side of a right-angled triangle based on the length of the two shortest sides of this triangle using the Pythagoras theorem. 
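The length computed in this recipe follows directly from the Pythagorean theorem; with the two shortest sides a and b, the longest side is:

```latex
c = \sqrt{a^{2} + b^{2}}, \qquad \text{for example } \sqrt{3^{2} + 4^{2}} = \sqrt{25} = 5.
```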
Getting ready The best and easiest way to inspect the results of this recipe will be compiling the Less code with the command-line lessc compiler. You can edit the Less code with your favorite editor. How to do it… Create a file called pythagoras.less that contains the following Less code: .longestSide(@a,@b) { @length: sqrt(pow(@a,2) + pow(@b,2)); } div { &.small {    .longestSide(3,4);    width: @length; } &.big {    .longestSide(6,7);    width: @length; } } Compile the pythagoras.less file from step 1 using the following command in the console: lessc pyhagoras.less Inspect the CSS code output on the console after compilation and you will see that it looks like the following code snippet: div.small { width: 5; } div.big { width: 9.21954446; } How it works… Variables set inside a mixin become available inside the scope of the caller. This specific behavior of the Less compiler was used in this recipe to set the @length variable and to make it available in the scope of the div.small and div.big selectors and the caller. As you can see, you can use the mixin in this recipe more than once. With every call, a new scope is created and both selectors get their own value of @length. Also, note that variables set inside the mixin do not overwrite variables with the same name that are set in the caller itself. Take, for instance, the following code: .mixin() { @variable: 1; } .selector { @variable: 2; .mixin; property: @variable; } The preceding code will compile into the CSS code, as follows: .selector { property: 2; } There's more… Note that variables won't leak from the mixins to the caller in the following two situations: Inside the scope of the caller, a variable with the same name already has been defined (lazy loading will be applied) The variable has been previously defined by another mixin call (lazy loading will not be applied) Passing rulesets to mixins Since Version 1.7, Less allows you to pass complete rulesets as an argument for mixins. 
Rulesets, including the Less code, can be assigned to variables and passed into mixins, which also allow you to wrap blocks of the CSS code defined inside mixins. In this recipe, you will learn how to do this. Getting ready For this recipe, you will have to create a Less file called keyframes.less, for instance. You can compile this mixins.less file with the command-line lessc compiler. Finally, inspect the Less code output to the console. How to do it… Create the keyframes.less file, and write down the following Less code into it: // Keyframes .keyframe(@name; @roules) { @-webkit-keyframes @name {    @roules(); } @-o-keyframes @name {    @roules(); } @keyframes @name {    @roules(); } } .keyframe(progress-bar-stripes; { from { background-position: 40px 0; } to   { background-position: 0 0; } }); Compile the keyframes.less file by running the following command shown in the console: lessc keyframes.less Inspect the CSS code output on the console and you will find that it looks like the following code: @-webkit-keyframes progress-bar-stripes { from {    background-position: 40px 0; } to {    background-position: 0 0; } } @-o-keyframes progress-bar-stripes { from {    background-position: 40px 0; } to {    background-position: 0 0; } } @keyframes progress-bar-stripes { from {    background-position: 40px 0; } to {    background-position: 0 0; } } How it works… Rulesets wrapped between curly brackets are passed as an argument to the mixin. A mixin's arguments are assigned to a (local) variable. When you assign the ruleset to the @ruleset variable, you are enabled to call @ruleset(); to "mixin" the ruleset. Note that the passed rulesets can contain the Less code, such as built-in functions too. 
You can see this by compiling the following Less code: .mixin(@color; @rules) { @othercolor: green; @media (print) {    @rules(); } }   p { .mixin(red; {color: lighten(@othercolor,20%);     background-color:darken(@color,20%);}) } The preceding Less code will compile into the following CSS code: @media (print) { p {    color: #00e600;    background-color: #990000; } } A group of CSS properties, nested rulesets, or media declarations stored in a variable is called a detached ruleset. Less offers support for the detached rulesets since Version 1.7. There's more… As you could see in the last example in the previous section, rulesets passed as an argument can be wrapped in the @media declarations too. This enables you to create mixins that, for instance, wrap any passed ruleset into a @media declaration or class. Consider the example Less code shown here: .smallscreens-and-olderbrowsers(@rules) { .lt-ie9 & {    @rules(); } @media (min-width:768px) {    @rules(); } } nav { float: left; width: 20%; .smallscreens-and-olderbrowsers({    float: none;    width:100%; }); } The preceding Less code will compile into the CSS code, as follows: nav { float: left; width: 20%; } .lt-ie9 nav { float: none; width: 100%; } @media (min-width: 768px) { nav {    float: none;    width: 100%; } } The style rules wrapped in the .lt-ie9 class can, for instance, be used with Paul Irish's <html> conditional classes technique or Modernizr. Now you can call the .smallscreens-and-olderbrowsers(){} mixin anywhere in your code and pass any ruleset to it. All passed rulesets get wrapped in the .lt-ie9 class or the @media (min-width: 768px) declaration now. When your requirements change, you possibly have to change only these wrappers once. Using mixin guards (as an alternative for the if…else statements) Most programmers are used to and familiar with the if…else statements in their code. Less does not have these if…else statements. 
Less tries to follow the declarative nature of CSS when possible and for that reason uses guards for matching expressions. In Less, conditional execution has been implemented with guarded mixins. Guarded mixins use the same logical and comparison operators as the @media feature in CSS does. Getting ready You can compile the Less code in this recipe with the command-line lessc compiler. Also, check the compiler options; you can find them by running the lessc command in the console without any argument. In this recipe, you will have to use the –modify-var option. How to do it… Create a Less file named guards.less, which contains the following Less code: @color: white; .mixin(@color) when (luma(@color) >= 50%) { color: black; } .mixin(@color) when (luma(@color) < 50%) { color: white; }   p { .mixin(@color); } Compile the Less code in the guards.less using the command-line lessc compiler with the following command entered in the console: lessc guards.less Inspect the output written on the console, which will look like the following code: p { color: black; } Compile the Less code with different values set for the @color variable and see how to output change. You can use the command as follows: lessc --modify-var="color=green" guards.less The preceding command will produce the following CSS code: p {   color: white;   } Now, refer to the following command: lessc --modify-var="color=lightgreen" guards.less With the color set to light green, it will again produce the following CSS code: p {   color: black;   }   How it works… The use of guards to build an if…else construct can easily be compared with the switch expression, which can be found in the programming languages, such as PHP, C#, and pretty much any other object-oriented programming language. Guards are written with the when keyword followed by one or more conditions. When the condition(s) evaluates true, the code will be mixed in. 
Also note that the arguments should match, as described in the Building a switch leveraging argument matching recipe, before the mixin gets compiled. The syntax and logic of guards is the same as that of the CSS @media feature. A condition can contain the following comparison operators: >, >=, =, =<, and < Additionally, the keyword true is the only value that evaluates as true. Two or more conditionals can be combined with the and keyword, which is equivalent to the logical and operator or, on the other hand, with a comma as the logical or operator. The following code will show you an example of the combined conditionals: .mixin(@a; @color) when (@a<10) and (luma(@color) >= 50%) { } The following code contains the not keyword that can be used to negate conditions: .mixin(@a; @color) when not (luma(@color) >= 50%) { } There's more… Inside the guard conditions, (global) variables can also be compared. The following Less code example shows you how to use variables inside guards: @a: 10; .mixin() when (@a >= 10) {} The preceding code will also enable you to compile the different CSS versions with the same code base when using the modify-var option of the compiler. The effect of the guarded mixin described in the preceding code will be very similar with the mixins built in the Building a switch leveraging argument matching recipe. Note that in the preceding example, variables in the mixin's scope overwrite variables from the global scope, as can be seen when compiling the following code: @a: 10; .mixin(@a) when (@a < 10) {property: @a;} selector { .mixin(5); } The preceding Less code will compile into the following CSS code: selector { property: 5; } When you compare guarded mixins with the if…else constructs or switch expressions in other programming languages, you will also need a manner to create a conditional for the default situations. 
The built-in Less default() function can be used to create such a default conditional that is functionally equal to the else statement in the if…else constructs or the default statement in the switch expressions. The default() function returns true when no other mixins match (matching also takes the guards into account) and can be evaluated as the guard condition. Building loops leveraging mixin guards Mixin guards, as described besides others in the Using mixin guards (as an alternative for the if…else statements) recipe, can also be used to dynamically build a set of CSS classes. In this recipe, you will learn how to do this. Getting ready You can use your favorite editor to create the Less code in this recipe. How to do it… Create a shadesofblue.less Less file, and write down the following Less code into it: .shadesofblue(@number; @blue:100%) when (@number > 0) {   .shadesofblue(@number - 1, @blue - 10%);   @classname: e(%(".color-%a",@number)); @{classname} {    background-color: rgb(0, 0, @blue);    height:30px; } } .shadesofblue(10); You can, for instance, use the following snippet of the HTML code to see the effect of the compiled Less code from the preceding step: <div class="color-1"></div> <div class="color-2"></div> <div class="color-3"></div> <div class="color-4"></div> <div class="color-5"></div> <div class="color-6"></div> <div class="color-7"></div> <div class="color-8"></div> <div class="color-9"></div> <div class="color-10"></div> Your HTML document should also include the shadesofblue.less and less.js files, as follows: <link rel="stylesheet/less" type="text/css"   href="shadesofblue.less"> <script src="less.js" type="text/javascript"></script> Finally, the result will look like that shown in this screenshot: How it works… The CSS classes in this recipe are built with recursion. The recursion here has been done by the .shadesofblue(){} mixin calling itself with different parameters. The loop starts with the .shadesofblue(10); call. 
When the compiler reaches the .shadesofblue(@number - 1, @blue - 10%); line of code, it stops the current code and starts compiling the .shadesofblue(){} mixin again with @number decreased by one and @blue decreased by 10 percent. The process will be repeated till @number < 1. Finally, when the @number variable becomes equal to 0, the compiler tries to call the .shadesofblue(0, 0); mixin, which does not match the when (@number > 0) guard. When no matching mixin is found, the compiler stops, compiles the rest of the code, and writes the first class into the CSS code, as follows:

.color-1 {
  background-color: #00001a;
  height: 30px;
}

Then, the compiler starts again where it stopped before, at the .shadesofblue(2, 20%); call, and writes the next class into the CSS code, as follows:

.color-2 {
  background-color: #000033;
  height: 30px;
}

The preceding process will be repeated until the tenth class.

There's more…

When inspecting the compiled CSS code, you will find that the height property has been repeated ten times, too. This kind of code repetition can be prevented using the :extend Less pseudo class. The following code will show you an example of the usage of the :extend Less pseudo class:

.baseheight { height: 30px; }
.mixin(@i: 2) when (@i > 0) {
  .mixin(@i - 1);
  .class@{i} {
    width: 10*@i;
    &:extend(.baseheight);
  }
}
.mixin();

Alternatively, in this situation, you can create a more generic selector, which sets the height property as follows:

div[class^="color-"] { height: 30px; }

Recursive loops are also useful when iterating over a list of values. Max Mikhailov, one of the members of the Less core team, wrote a wrapper mixin for recursive Less loops, which can be found at https://github.com/seven-phases-max. This wrapper contains the .for and .-each mixins that can be used to build loops.
The following code will show you how to write a nested loop:

@import "for";
#nested-loops {
  .for(3, 1); .-each(@i) {
    .for(0, 2); .-each(@j) {
      x: (10 * @i + @j);
    }
  }
}

The preceding Less code will produce the following CSS code:

#nested-loops {
  x: 30;
  x: 31;
  x: 32;
  x: 20;
  x: 21;
  x: 22;
  x: 10;
  x: 11;
  x: 12;
}

Finally, you can use a list of mixins as your data provider in some situations. The following Less code gives an example of using mixins to avoid recursion:

.data() {
  .-("dark"; black);
  .-("light"; white);
  .-("accent"; pink);
}

div {
  .data();
  .-(@class-name; @color) {
    @class: e(@class-name);
    &.@{class} {
      color: @color;
    }
  }
}

The preceding Less code will compile into the CSS code, as follows:

div.dark { color: black; }
div.light { color: white; }
div.accent { color: pink; }

Applying guards to the CSS selectors

Since Version 1.5 of Less, guards can be applied not only on mixins, but also on the CSS selectors. This recipe will show you how to apply guards on the CSS selectors directly to create conditional rulesets for these selectors.

Getting ready

The easiest way to inspect the effect of the guarded selector in this recipe will be using the command-line lessc compiler.

How to do it…

Create a Less file named darkbutton.less that contains the following code:

@dark: true;
button when (@dark) {
  background-color: black;
  color: white;
}

Compile the darkbutton.less file with the command-line lessc compiler by entering the following command into the console:

lessc darkbutton.less

Inspect the CSS code output on the console, which will look like the following code:

button {
  background-color: black;
  color: white;
}

Now try the following command and you will find that the button selector is not compiled into the CSS code:

lessc --modify-var="dark=false" darkbutton.less

How it works…

The guarded CSS selectors are ignored by the compiler and so not compiled into the CSS code when the guard evaluates false.
Guards for the CSS selectors and mixins leverage the same comparison and logical operators. You can read in more detail how to create guards with these operators in the Using mixin guards (as an alternative for the if…else statements) recipe.

There's more…

Note that the true keyword is the only value that evaluates true. So the following command, which sets @dark equal to 1, will not generate the button selector as the guard evaluates false:

lessc --modify-var="dark=1" darkbutton.less

The following Less code will give you another example of applying a guard on a selector:

@width: 700px;
div when (@width >= 600px) {
  border: 1px solid black;
}

The preceding code will output the following CSS code:

div {
  border: 1px solid black;
}

On the other hand, nothing will be output when setting @width to a value smaller than 600 pixels. You can also rewrite the preceding code with the & feature referencing the selector, as follows:

@width: 700px;
div {
  & when (@width >= 600px) {
    border: 1px solid black;
  }
}

Although the CSS code produced by the latest code does not differ from the first, it will enable you to add more properties without the need to repeat the selector. You can also add the code in a mixin, as follows:

.conditional-border(@width: 700px) {
  & when (@width >= 600px) {
    border: 1px solid black;
  }
  width: @width;
}

Creating color contrasts with Less

Color contrasts play an important role in the first impression of your website or web application. Color contrasts are also important for web accessibility. Using high contrasts between background and text will help the visually disabled, color blind, and even people with dyslexia to read your content more easily. The contrast() function returns a light (white by default) or dark (black by default) color depending on the input color. The contrast() function can help you to write dynamic Less code that always outputs the CSS styles that create enough contrast between the background and text colors.
Setting your text color to white or black depending on the background color enables you to meet the highest accessibility guidelines for every color. A sample can be found at http://www.msfw.com/accessibility/tools/contrastratiocalculator.aspx, which shows you that either black or white always gives enough color contrast. When you use Less to create a set of buttons, for instance, you don't want some buttons with white text while others have black text. In this recipe, you solve this situation by adding a stroke to the button text (text shadow) when the contrast ratio between the button background and button text color is too low to meet your requirements.

Getting ready

You can inspect the results of this recipe in your browser using the client-side less.js compiler. You will have to create some HTML and Less code, and you can use your favorite editor to do this. You will have to create the following file structure:

How to do it…

Create a Less file named contraststrokes.less, and write down the following Less code into it:

@safe: green;
@danger: red;
@warning: orange;
@buttonTextColor: white;
@ContrastRatio: 7; //AAA, small texts

.setcontrast(@backgroundcolor) when (luma(@backgroundcolor) =< luma(@buttonTextColor)) and
    (((luma(@buttonTextColor)+5)/(luma(@backgroundcolor)+5)) < @ContrastRatio) {
  color: @buttonTextColor;
  text-shadow: 0 0 2px black;
}
.setcontrast(@backgroundcolor) when (luma(@backgroundcolor) =< luma(@buttonTextColor)) and
    (((luma(@buttonTextColor)+5)/(luma(@backgroundcolor)+5)) >= @ContrastRatio) {
  color: @buttonTextColor;
}
.setcontrast(@backgroundcolor) when (luma(@backgroundcolor) >= luma(@buttonTextColor)) and
    (((luma(@backgroundcolor)+5)/(luma(@buttonTextColor)+5)) < @ContrastRatio) {
  color: @buttonTextColor;
  text-shadow: 0 0 2px white;
}
.setcontrast(@backgroundcolor) when (luma(@backgroundcolor) >= luma(@buttonTextColor)) and
    (((luma(@backgroundcolor)+5)/(luma(@buttonTextColor)+5)) >= @ContrastRatio) {
  color: @buttonTextColor;
}

button {
  padding: 10px;
  border-radius: 10px;
  color: @buttonTextColor;
  width: 200px;
}

.safe {
  .setcontrast(@safe);
  background-color: @safe;
}

.danger {
  .setcontrast(@danger);
  background-color: @danger;
}

.warning {
  .setcontrast(@warning);
  background-color: @warning;
}

Create an HTML file, and save this file as index.html. Write down the following HTML code into this index.html file:

<!DOCTYPE html>
<html>
<head>
   <meta charset="utf-8">
   <title>High contrast buttons</title>
   <link rel="stylesheet/less" type="text/css" href="contraststrokes.less">
   <script src="less.min.js" type="text/javascript"></script>
</head>
<body>
   <button style="background-color:green;">safe</button>
   <button class="safe">safe</button><br>
   <button style="background-color:red;">danger</button>
   <button class="danger">danger</button><br>
   <button style="background-color:orange;">warning</button>
   <button class="warning">warning</button>
</body>
</html>

Now load the index.html file from step 2 in your browser. When all has gone well, you will see something like what's shown in the following screenshot:

On the left-hand side of the preceding screenshot, you will see the original colored buttons, and on the right-hand side, you will find the high-contrast buttons.

How it works…

The main purpose of this recipe is to show you how to write dynamic code based on the color contrast ratio. Web Content Accessibility Guidelines (WCAG) 2.0 covers a wide range of recommendations to make web content more accessible.
They have defined the following three conformance levels:

Conformance Level A: In this level, all Level A success criteria are satisfied
Conformance Level AA: In this level, all Level A and AA success criteria are satisfied
Conformance Level AAA: In this level, all Level A, AA, and AAA success criteria are satisfied

If you focus only on the color contrast aspect, you will find the following paragraphs in the WCAG 2.0 guidelines:

1.4.1 Use of Color: Color is not used as the only visual means of conveying information, indicating an action, prompting a response, or distinguishing a visual element. (Level A)
1.4.3 Contrast (Minimum): The visual presentation of text and images of text has a contrast ratio of at least 4.5:1 (Level AA)
1.4.6 Contrast (Enhanced): The visual presentation of text and images of text has a contrast ratio of at least 7:1 (Level AAA)

The contrast ratio can be calculated with a formula that can be found at http://www.w3.org/TR/WCAG20/#contrast-ratiodef:

(L1 + 0.05) / (L2 + 0.05)

In the preceding formula, L1 is the relative luminance of the lighter of the colors, and L2 is the relative luminance of the darker of the colors. In Less, the relative luminance of a color can be found with the built-in luma() function. Note that luma() returns a percentage between 0 and 100, which is why the 0.05 terms of the formula appear as 5 in the guards of this recipe.

The Less code of this recipe contains four guarded .setcontrast(){} mixins. The guard conditions, such as (luma(@backgroundcolor) =< luma(@buttonTextColor)), are used to find which of the @backgroundcolor and @buttonTextColor colors is the lighter one. Then the (((luma({the lighter color})+5)/(luma({the darker color})+5)) < @ContrastRatio) condition can, according to the preceding formula, be used to determine whether the contrast ratio between these colors meets the requirement (@ContrastRatio) or not.
When the value of the calculated contrast ratio is lower than the value set by @ContrastRatio, the text-shadow: 0 0 2px {color}; ruleset will be mixed in, where {color} will be white or black depending on the relative luminance of the color set by the @buttonTextColor variable.

There's more…

In this recipe, you added a stroke to the web text to improve the accessibility. First, you will have to bear in mind that improving the accessibility by adding a stroke to your text is not a proven method. Also, automatic testing of the accessibility (by calculating the color contrast ratios) cannot be done. Other options to solve this issue are to increase the font size or change the background color itself. You can read how to change the background color dynamically based on color contrast ratios in the Changing the background color dynamically recipe.

When you read the exceptions of the 1.4.6 Contrast (Enhanced) paragraph of the WCAG 2.0 guidelines, you will find that large-scale text requires a color contrast ratio of 4.5 instead of 7.0 to meet the requirements of the AAA Level. Large-scale text is defined as at least 18 point, or 14 point bold, or a font size that would yield the equivalent size for Chinese, Japanese, and Korean (CJK) fonts. To try this, you could replace the text-shadow properties in the Less code of step 1 of this recipe with the font-size: 14pt; and font-weight: bold; declarations. After this, you can inspect the results in your browser again. Depending on, among others, the values you have chosen for the @buttonTextColor and @ContrastRatio variables, you will find something like the following screenshot:

On the left-hand side of the preceding screenshot, you will see the original colored buttons, and on the right-hand side, you will find the high-contrast buttons. Note that when you set the @ContrastRatio variable to 7.0, the code does not check whether the larger font indeed meets the 4.5 contrast ratio requirement.
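The WCAG formula used by the guards above can also be checked outside Less. The following Python sketch (the helper names are mine, not part of Less or WCAG) computes the relative luminance of an sRGB color per WCAG 2.0 and the contrast ratio between two colors; remember that Less's luma() works on a 0–100 scale, hence the +5 terms in the guards instead of +0.05:

```python
def linearize(channel):
    # sRGB channel (0-255) to linear-light value, per the WCAG 2.0 definition
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb):
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(rgb1, rgb2):
    # (L1 + 0.05) / (L2 + 0.05), with L1 the lighter of the two luminances
    l1, l2 = sorted((relative_luminance(rgb1), relative_luminance(rgb2)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

white, black, green = (255, 255, 255), (0, 0, 0), (0, 128, 0)
print(round(contrast_ratio(white, black), 2))  # 21.0, the maximum possible ratio
print(round(contrast_ratio(white, green), 2))  # green on white fails the 7:1 AAA bar
```

This makes it easy to verify, for a given palette, whether black or white text clears the 4.5:1 (AA) or 7:1 (AAA) bar before encoding the decision in Less guards.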
Changing the background color dynamically

When you define some basic colors to generate, for instance, a set of button elements, you can use the built-in contrast() function to set the font color. The built-in contrast() function provides the highest possible contrast, but does not guarantee that the contrast ratio is also high enough to meet your accessibility requirements. In this recipe, you will learn how to change your basic color automatically to meet the required contrast ratio.

Getting ready

You can inspect the results of this recipe in your browser using the client-side less.js compiler. Use your favorite editor to create the HTML and Less code in this recipe. You will have to create the following file structure:

How to do it…

Create a Less file named backgroundcolors.less, and write down the following Less code into it:

@safe: green;
@danger: red;
@warning: orange;
@ContrastRatio: 7.0; //AAA
@precision: 1%;
@buttonTextColor: black;
@threshold: 43;

.setcontrastcolor(@startcolor) when (luma(@buttonTextColor) < @threshold) {
  .contrastcolor(@startcolor) when (luma(@startcolor) < 100) and
      (((luma(@startcolor)+5)/(luma(@buttonTextColor)+5)) < @ContrastRatio) {
    .contrastcolor(lighten(@startcolor, @precision));
  }
  .contrastcolor(@startcolor) when (@startcolor = color("white")),
      (((luma(@startcolor)+5)/(luma(@buttonTextColor)+5)) >= @ContrastRatio) {
    @contrastcolor: @startcolor;
  }
  .contrastcolor(@startcolor);
}

.setcontrastcolor(@startcolor) when (default()) {
  .contrastcolor(@startcolor) when (luma(@startcolor) < 100) and
      (((luma(@buttonTextColor)+5)/(luma(@startcolor)+5)) < @ContrastRatio) {
    .contrastcolor(darken(@startcolor, @precision));
  }
  .contrastcolor(@startcolor) when (luma(@startcolor) = 100),
      (((luma(@buttonTextColor)+5)/(luma(@startcolor)+5)) >= @ContrastRatio) {
    @contrastcolor: @startcolor;
  }
  .contrastcolor(@startcolor);
}

button {
  padding: 10px;
  border-radius: 10px;
  color: @buttonTextColor;
  width: 200px;
}

.safe {
  .setcontrastcolor(@safe);
  background-color: @contrastcolor;
}

.danger {
  .setcontrastcolor(@danger);
  background-color: @contrastcolor;
}

.warning {
  .setcontrastcolor(@warning);
  background-color: @contrastcolor;
}

Create an HTML file and save this file as index.html. Write down the following HTML code into this index.html file:

<!DOCTYPE html>
<html>
<head>
   <meta charset="utf-8">
   <title>High contrast buttons</title>
   <link rel="stylesheet/less" type="text/css" href="backgroundcolors.less">
   <script src="less.min.js" type="text/javascript"></script>
</head>
<body>
   <button style="background-color:green;">safe</button>
   <button class="safe">safe</button><br>
   <button style="background-color:red;">danger</button>
   <button class="danger">danger</button><br>
   <button style="background-color:orange;">warning</button>
   <button class="warning">warning</button>
</body>
</html>

Now load the index.html file from step 2 in your browser. When all has gone well, you will see something like the following screenshot:

On the left-hand side of the preceding figure, you will see the original colored buttons, and on the right-hand side, you will find the high contrast buttons.

How it works…

The guarded .setcontrastcolor(){} mixins are used to determine which color set applies, depending on whether the @buttonTextColor variable is a dark color or not. When the color set by @buttonTextColor is a dark color, with a relative luminance below the threshold value set by the @threshold variable, the background colors should be made lighter. For light colors, the background colors should be made darker. Inside each .setcontrastcolor(){} mixin, a second set of mixins has been defined. These guarded .contrastcolor(){} mixins construct a recursive loop, as described in the Building loops leveraging mixin guards recipe.
In each step of the recursion, the guards test whether the contrast ratio that is set by the @ContrastRatio variable has been reached or not. When the contrast ratio does not meet the requirements, the @startcolor variable will be darkened or lightened by the number of percent set by the @precision variable, using the built-in darken() and lighten() functions. When the required contrast ratio has been reached, or the color defined by the @startcolor variable has become white or black, the modified color value of @startcolor will be assigned to the @contrastcolor variable. The guarded .contrastcolor(){} mixins are used as functions, as described in the Using mixins as functions recipe, to assign the @contrastcolor variable that will be used to set the background-color property of the button selectors.

There's more…

A small value of the @precision variable will increase the number of recursions possibly needed to find the required colors, as there will be more and smaller steps needed. With the number of recursions, the compilation time will also increase. When you choose a bigger value for @precision, the contrast color found might differ from the start color more than needed to meet the contrast ratio requirement.

When you choose, for instance, a dark button text color that is not black, all or some base background colors will be set to white. The chances of ending up at white increase for high values of the @ContrastRatio variable. The recursions will stop when white (or black) has been reached, as you cannot make the white color lighter. When the recursion stops on reaching white or black, the colors set by the mixins in this recipe don't meet the required color contrast ratios.

Aggregating values under a single property

The merge feature of Less enables you to merge property values into a list under a single property. Each list can be either space-separated or comma-separated.
The merge feature can be useful to define a property that accepts a list as a value. For instance, the background property accepts a comma-separated list of backgrounds.

Getting ready

For this recipe, you will need a text editor and a Less compiler.

How to do it…

Create a file called defaultfonts.less that contains the following Less code:

.default-fonts() {
  font-family+: Helvetica, Arial, sans-serif;
}
p {
  font-family+: "Helvetica Neue";
  .default-fonts();
}

Compile the defaultfonts.less file from step 1 using the following command in the console:

lessc defaultfonts.less

Inspect the CSS code output on the console after compilation and you will see that it looks like the following code:

p {
  font-family: "Helvetica Neue", Helvetica, Arial, sans-serif;
}

How it works…

When the compiler finds the plus sign (+) before the assignment sign (:), it will merge the values into a comma-separated list under a single property, instead of creating a new property in the CSS code.

There's more…

Since Version 1.7 of Less, you can also merge the property's values separated by a space instead of a comma. For space-separated values, you should use the +_ sign instead of the + sign, as can be seen in the following code:

.text-overflow(@text-overflow: ellipsis) {
  text-overflow+_: @text-overflow;
}
p, .text-overflow {
  .text-overflow();
  text-overflow+_: ellipsis;
}

The preceding Less code will compile into the CSS code, as follows:

p, .text-overflow {
  text-overflow: ellipsis ellipsis;
}

Note that the text-overflow property doesn't force an overflow to occur; you will have to explicitly set, for instance, the overflow property to hidden for the element.

Summary

This article walked you through the process of building parameterized mixins and showed you how to use guards. A guard can be used as an alternative for if…else statements and makes it possible to construct iterative loops in Less.

Resources for Article:

Further resources on this subject:

Web Application Testing [article]
LESS CSS Preprocessor [article]
Bootstrap 3 and other applications [article]
Mike Ball
09 Feb 2015
9 min read

Part 3: Migrating a WordPress Blog to Middleman and Deploying to Amazon S3

Part 3: Migrating WordPress blog content and deploying to production

In parts 1 and 2 of this series, we created middleman-demo, a basic Middleman-based blog, imported content from WordPress, and deployed middleman-demo to Amazon S3. Now that middleman-demo has been deployed to production, let's design a continuous integration workflow that automates builds and deployments. In part 3, we'll cover the following:

Testing middleman-demo with RSpec and Capybara
Integrating with GitHub and Travis CI
Configuring automated builds and deployments from Travis CI

If you didn't follow parts 1 and 2, or you no longer have your original middleman-demo code, you can clone mine and check out the part3 branch:

$ git clone http://github.com/mdb/middleman-demo && cd middleman-demo && git checkout part3

Create some automated tests

In software development, the practice of continuous delivery serves to frequently deploy iterative software bug fixes and enhancements, such that users enjoy an ever-improving product. Automated processes, such as tests, assist in rapidly validating quality with each change. middleman-demo is a relatively simple codebase, though much of its build and release workflow can still be automated via continuous delivery. Let's write some automated tests for middleman-demo using RSpec and Capybara. These tests can assert that the site continues to work as expected with each change.
Add the gems to the middleman-demo Gemfile:

gem 'rspec'
gem 'capybara'

Install the gems:

$ bundle install

Create a spec directory to house tests:

$ mkdir spec

As is the convention in RSpec, create a spec/spec_helper.rb file to house the RSpec configuration:

$ touch spec/spec_helper.rb

Add the following configuration to spec/spec_helper.rb to run middleman-demo during test execution:

require "middleman"
require "middleman-blog"
require 'rspec'
require 'capybara/rspec'

Capybara.app = Middleman::Application.server.inst do
  set :root, File.expand_path(File.join(File.dirname(__FILE__), '..'))
  set :environment, :development
end

Create a spec/features directory to house the middleman-demo RSpec test files:

$ mkdir spec/features

Create an RSpec spec file for the homepage:

$ touch spec/features/index_spec.rb

Let's create a basic test confirming that the Middleman Demo heading is present on the homepage. Add the following to spec/features/index_spec.rb:

require "spec_helper"

describe 'index', type: :feature do
  before do
    visit '/'
  end

  it 'displays the correct heading' do
    expect(page).to have_selector('h1', text: 'Middleman Demo')
  end
end

Run the test and confirm that it passes:

$ rspec

You should see output like the following:

Finished in 0.03857 seconds (files took 6 seconds to load)
1 example, 0 failures

Next, add a test asserting that the first blog post is listed on the homepage; confirm it passes by running the rspec command:

it 'displays the "New Blog" blog post' do
  expect(page).to have_selector('ul li a[href="/blog/2014/08/20/new-blog/"]', text: 'New Blog')
end

As an example, let's add one more basic test, this time asserting that the New Blog text properly links to the corresponding blog post. Add the following to spec/features/index_spec.rb and confirm that the test passes:

it 'properly links to the "New Blog" blog post' do
  click_link 'New Blog'
  expect(page).to have_selector('h2', text: 'New Blog')
end

middleman-demo can be further tested in this fashion.
The extent to which the specs test every element of the site's functionality is up to you. At what point can it be confidently asserted that the site looks good, works as expected, and can be publicly deployed to users?

Push to GitHub

Next, push your middleman-demo code to GitHub. If you forked my original github.com/mdb/middleman-demo repository, skip this section.

1. Create a GitHub repository

If you don't already have a GitHub account, create one. Create a repository through GitHub's web UI called middleman-demo.

2. What should you do if your version of middleman-demo is not a git repository?

If your middleman-demo is already a git repository, skip to step 3. If you started from scratch and your code isn't already in a git repository, let's initialize one now. I'm assuming you have git installed and have some basic familiarity with it.

Make a middleman-demo git repository:

$ git init && git add . && git commit -m 'initial commit'

Declare your git origin, where <your_git_url_from_step_1> is your GitHub middleman-demo repository URL:

$ git remote add origin <your_git_url_from_step_1>

Push to your GitHub repository:

$ git push origin master

You're done; skip step 3 and move on to Integrate with Travis CI.

3. If you cloned my mdb/middleman-demo repository…

If you cloned my middleman-demo git repository, you'll need to add your newly created middleman-demo GitHub repository as an additional remote:

$ git remote add my_origin <your_git_url_from_step_1>

If you are working in a branch, merge all your changes to master. Then push to your GitHub repository:

$ git push -u my_origin master

Integrate with Travis CI

Travis CI is a distributed continuous integration service that integrates with GitHub. It's free for open source projects. Let's configure Travis CI to run the middleman-demo tests when we push to the GitHub repository.

Log in to Travis CI

First, sign in to Travis CI using your GitHub credentials. Visit your profile.
Find your middleman-demo repository in the "Repositories" list. Activate Travis CI for middleman-demo; click the toggle button "ON."

Create a .travis.yml file

Travis CI looks for a .travis.yml YAML file in the root of a repository. YAML is a simple, human-readable markup language; it's a popular option in authoring configuration files. The .travis.yml file informs Travis how to execute the project's build.

Create a .travis.yml file in the root of middleman-demo:

$ touch .travis.yml

Configure Travis CI to use Ruby 2.1 when building middleman-demo. Add the following YAML to the .travis.yml file:

language: ruby
rvm: 2.1

Next, declare how Travis CI can install the necessary gem dependencies to build middleman-demo; add the following:

install: bundle install

Let's also add before_script, which runs the middleman-demo tests to ensure all tests pass in advance of a build:

before_script: bundle exec rspec

Finally, add a script that instructs Travis CI how to build middleman-demo:

script: bundle exec middleman build

At this point, the .travis.yml file should look like the following:

language: ruby
rvm: 2.1
install: bundle install
before_script: bundle exec rspec
script: bundle exec middleman build

Commit the .travis.yml file:

$ git add .travis.yml && git commit -m "added basic .travis.yml file"

Now, after pushing to GitHub, Travis CI will attempt to install middleman-demo dependencies using Ruby 2.1, run its tests, and build the site. Travis CI's command build output can be seen here:

https://travis-ci.org/<your_github_username>/middleman-demo

Add a build status badge

Assuming the build passes, you should see a green build passing badge near the top right corner of the Travis CI UI on your Travis CI middleman-demo page. Let's add this badge to the README.md file in middleman-demo, such that a build status badge reflecting the status of the most recent Travis CI build displays on the GitHub repository's README.
If one does not already exist, create a README.md file:

$ touch README.md

Add the following markdown, which renders the Travis CI build status badge:

[![Build Status](https://travis-ci.org/<your_github_username>/middleman-demo.svg?branch=master)](https://travis-ci.org/<your_github_username>/middleman-demo)

Configure continuous deployments

Through continuous deployments, code is shipped to users as soon as a quality-validated change is committed. Travis CI can be configured to deploy a middleman-demo build with each successful build. Let's configure Travis CI to continuously deploy middleman-demo to the S3 bucket created in part 2 of this tutorial series.

First, install the travis command-line tools:

$ gem install travis

Use the travis command-line tools to set up S3 deployments. Enter the following; you'll be prompted for your S3 details (see the example below if you're unsure how to answer):

$ travis setup s3

An example response is:

Access key ID: <your_aws_access_key_id>
Secret access key: <your_aws_secret_access_key_id>
Bucket: <your_aws_bucket>
Local project directory to upload (Optional): build
S3 upload directory (Optional):
S3 ACL Settings (private, public_read, public_read_write, authenticated_read, bucket_owner_read, bucket_owner_full_control): |private| public_read
Encrypt secret access key? |yes| yes
Push only from <your_github_username>/middleman-demo? |yes| yes

This automatically edits the .travis.yml file to include the following deploy information:

deploy:
  provider: s3
  access_key_id: <your_aws_access_key_id>
  secret_access_key:
    secure: <your_encrypted_aws_secret_access_key_id>
  bucket: <your_s3_bucket>
  local-dir: build
  acl: !ruby/string:HighLine::String public_read
  on:
    repo: <your_github_username>/middleman-demo

Add one additional option, informing Travis to preserve the build directory for use during the deploy process:

skip_cleanup: true

The final .travis.yml file should look like the following:

language: ruby
rvm: 2.1
install: bundle install
before_script: bundle exec rspec
script: bundle exec middleman build
deploy:
  provider: s3
  access_key_id: <your_aws_access_key>
  secret_access_key:
    secure: <your_encrypted_aws_secret_access_key>
  bucket: <your_aws_bucket>
  local-dir: build
  skip_cleanup: true
  acl: !ruby/string:HighLine::String public_read
  on:
    repo: <your_github_username>/middleman-demo

Confirm that your continuous integration works

Commit your changes:

$ git add .travis.yml && git commit -m "added travis deploy configuration"

Push to GitHub and watch the build output on Travis CI:

https://travis-ci.org/<your_github_username>/middleman-demo

If all works as expected, Travis CI will run the middleman-demo tests, build the site, and deploy to the proper S3 bucket.

Recap

Throughout this series, we've examined the benefits of static site generators and covered some basics regarding Middleman blogging. We've learned how to use the wp2middleman gem to migrate content from a WordPress blog, and we've learned how to deploy Middleman to Amazon's cloud-based Simple Storage Service (S3). We've configured Travis CI to run automated tests, produce a build, and automate deployments.

Beyond what's been covered within this series, there's an extensive Middleman ecosystem worth exploring, as well as numerous additional features. Middleman's custom extensions seek to extend basic Middleman functionality through third-party gems.
Read more about Middleman at Middlemanapp.com.

About this author

Mike Ball is a Philadelphia-based software developer specializing in Ruby on Rails and JavaScript. He works for Comcast Interactive Media where he helps build web-based TV and video consumption applications.

Packt
06 Feb 2015
14 min read

NSB and Security

This article by Rich Helton, the author of Learning NServiceBus Sagas, delves into the details of NSB and its security. In this article, we will cover the following:

- Introducing web security
- Cloud vendors
- Using .NET 4
- Adding NServiceBus
- Benefits of NSB

(For more resources related to this topic, see here.)

Introducing web security

According to the Open Web Application Security Project's (OWASP) Top 10 list of 2013, found at https://www.owasp.org/index.php/Top10#OWASP_Top_10_for_2013, injection flaws still remain at the top among the ways to penetrate a website. This is shown in the following screenshot:

An injection flaw is a means of gaining access to information, or to the site itself, by injecting data into input fields. It is normally used to bypass proper authentication and authorization, typically with data that the website has not seen during testing or considered during development. For references, I will consider some slides found at http://www.slideshare.net/rhelton_1/cweb-sec-oct27-2010-final.

An instance of an injection flaw is putting SQL commands into form fields, and even URL fields, to try to provoke SQL errors and responses that leak further information. If the error is not generic and a SQL exception occurs, the response will sometimes include table names. It may, for example, deny authorization for the sa user against a password table in SQL Server 2008. Knowing this tells a person the SQL Server version, that the sa user is in use, and that a password table exists.

There are many tools and websites on the Internet for people to practice their web security testing skills, whether or not they are IT security professionals or amateurs. Many of these websites are well-known and posted at places such as https://www.owasp.org/index.php/Phoenix/Tools.

General disclaimer: I do not endorse or encourage others to practice on websites without written permission from the website owner.
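The injection mechanics described above can be made concrete with a minimal, self-contained sketch. This is plain JavaScript, not taken from the article, and the query shape, table name, and helper names are hypothetical; it only shows how naive string concatenation lets a crafted input rewrite a query, and how a crude character check can flag it (real code should use parameterized queries instead):

```javascript
// Naive query building: user input is concatenated straight into SQL text.
function naiveQuery(user) {
  return "SELECT * FROM users WHERE name = '" + user + "'";
}

// A classic injection payload turns the WHERE clause into a tautology.
const payload = "x' OR '1'='1";
const injected = naiveQuery(payload);

// A crude mitigation sketch: flag inputs containing quote characters.
function isSuspicious(input) {
  return /['";]/.test(input);
}

console.log(injected);
console.log(isSuspicious(payload)); // true
console.log(isSuspicious('Alice')); // false
```

The point of the sketch is that the server cannot tell where the data ends and the SQL begins once the strings are merged, which is exactly what parameterized queries prevent.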
Some of the live sites are as follows; most are used to test web scanners:

- http://zero.webappsecurity.com/: This is developed by SPI Dynamics (now HP Security) for WebInspect. It is an ASP site.
- http://crackme.cenzic.com/Kelev/view/home.php: This PHP site is from Cenzic.
- http://demo.testfire.net/: This is developed by WatchFire (now IBM Rational AppScan). It is an ASP site.
- http://testaspnet.vulnweb.com/: This is developed by Acunetix. It is a PHP site.
- http://webscantest.com/: This is developed by NT OBJECTives NTOSpider. It is a PHP site.

There are many more sites and tools, and one would have to research them oneself. Some tools look only for SQL injection; gifted hacking professionals who spend their days looking solely for SQL injection would find these useful. We will start with SQL injection, as it is one of the most popular ways to enter a website. But before we start an analysis report on a website hack, we will document the website. Our target site will be http://zero.webappsecurity.com/.

We will start with the EC-Council's Certified Ethical Hacker program, where footprinting and scanning are divided into seven basic steps:

1. Information gathering
2. Determining the network range
3. Identifying active machines
4. Finding open ports and access points
5. OS fingerprinting
6. Fingerprinting services
7. Mapping the network

We could also follow the OWASP Web Testing checklist, which includes:

- Information gathering
- Configuration testing
- Identity management testing
- Authentication testing
- Session management testing
- Data validation testing
- Error handling
- Cryptography
- Business logic testing
- Client-side testing

The idea is to gather as much information on the website as possible before launching an attack, since at this point no information has been gathered. To gather information on a website, you don't actually have to scan it yourself at the start. Many scanners will have scanned the website before you begin.
There are Google bots gathering search information about the site, the Netcraft search engine gathering statistics about it, and many domain search engines holding contact information. If someone has already hacked the site, there are sites and blogs where hackers discuss hacking specific sites, including which tools they used. They may even post security scans on the Internet, which can be found by googling. There is even a site (https://archive.org/) called the WayBack Machine, because it keeps previous versions of the websites it scans in an archive. These are just some basic pieces, and anyone who has studied for their Certified Ethical Hacker exam should have all of this at their fingertips. We will discuss some of the benefits that Microsoft and Particular.net have taken into consideration to assist those who develop solutions in C#.

We can search the WayBack Machine at http://web.archive.org/web/ for changes to http://zero.webappsecurity.com/, and we will see something like this:

From this search engine, we can look at what the screens looked like in 2003 and walk through various changes up to the present, 2014. Actually, there were errors when the archive copied the site in 2003, so the machine directed us to the first good copy, from May 11, 2006, as shown in the following screenshot:

Looking with Netcraft, we can see that the site was first started in 2004, was last rebooted in 2014, and is running Ubuntu, as shown in this screenshot:

Next, we can try to see what Google tells us. There are many Google Hacking Databases that keep track of keywords in the Google Search Engine API. These are expressions such as file:passwd, used to search for password files on Ubuntu, and many more. This is not a hacking book, and this site is well-known, so we will just search for webappsecurity.com file:passwd. This gives me more information than needed.
On the first item, I get a sample web scan report of the available vulnerabilities in the site from 2008, as shown in the following screenshot:

We can also see which links Google has already found for http://zero.webappsecurity.com/, as shown in this screenshot:

In these few steps, I have enough information to mount a targeted website attack to check whether these vulnerabilities are still active. I know the operating system of the website and details of its history, before I have even considered running tools against the website. To scan the website (for which permission is always needed ahead of time), there are multiple web scanners available; one list of web scanners is at http://sectools.org/tag/web-scanners/.

One of the favorites is built by the famed Googler Michal Zalewski and is called Skipfish. Skipfish is an open source tool written in the C language, and it can be used on Windows by compiling it against Cygwin libraries, which are Linux virtual libraries and tools for Windows. Skipfish has its own man pages at http://dev.man-online.org/man1/skipfish/, and it can be downloaded from https://code.google.com/p/skipfish/. Skipfish performs web crawling and fuzzing, and tests for many issues such as XSS and SQL injection. In Skipfish's case, the fuzzing uses dictionaries to try additional website paths, extensions, and keywords that are commonly found as attack vectors through hackers' experience. For instance, it may not be apparent from the pages being scanned that an admin/index.html page is available, but the dictionary will check whether the page exists. Skipfish results will appear as follows:

The issue with Skipfish is that it is noisy because of its fuzzer. Skipfish will try many scans and check for links that might not exist, which takes some time and can be a little noisy out of the box.
There are many configurations, and the scanning can be throttled to try to reduce the noise. An associated scan in HP's WebInspect scanner will appear like this:

These are just automated means of inspecting a website. These steps are common, and much of this material is well-known in web security. After an initial inspection of a website, a person may start making decisions on how to investigate further.

Manually checking websites

An experienced web security person may now proceed with more manual checks, and less automated checking, after taking an initial look at the website. For instance, type Admin as the user ID and password, or type Guest instead of Admin, and the list progresses based on experience. Then try the Admin and password combination, then Admin and password123, and so on. A person inspecting a website might have a lot of time to perform penetration testing and might try hundreds of scenarios. There are many tools and scripts to automate the process. As security analysts, we find many sites that grant admin access just by using Admin and Admin as the user ID and password, respectively.

To enhance personal skills, there are many tutorials to walk through. One thing to do is to pull down a website that you can set up locally for practice, such as WebGoat, and go through the steps outlined in tutorials from sites such as http://webappsecmovies.sourceforge.net/webgoat/. These sites will show a person how to perform SQL injection testing against the WebGoat site. To accompany these tutorials, there are Firefox plugins to test security scripts and HTML, debug pieces of the website, and tamper with it through the browser, as shown in this screenshot:

Using .NET 4 can help

Every page that is deployed to the Internet (and in many cases, the intranet as well) constantly gets probed and prodded by scans, viruses, and network noise.
There are so many pokes, probes, and prods on networks these days that most of them are treated as noise. By default, .NET 4 offers some validation and out-of-the-box support for web requests. Using .NET 4, you may discover that some input characters, such as double quotes, single quotes, and even <, are blocked in some form fields. You will get an error like the one shown in the following screenshot when trying to pass some of these values:

This is very basic validation, and it resides in the .NET version 4 framework's pooling pieces of Internet Information Services (IIS) for Windows. To further follow Microsoft's best enterprise practices for security, we may also consider using Model-View-Controller (MVC) and Entity Framework (EF). For this information, we can review the Microsoft Application Architecture Guide at http://msdn.microsoft.com/en-us/library/ff650706.aspx. The MVC design pattern is the most commonly used pattern in software and is structured as follows:

This is a very common design pattern, so why is it important for security? What is helpful is that we can validate data requests and responses through the controllers, as well as provide data annotations for each data element for more validation. A common attack that has appeared through viruses over the years is the buffer overflow, which sends a large amount of data to the data elements. Validation can check the length of the incoming data to counteract a buffer overflow.

EF is a Microsoft framework that provides an object-relational mapper. Not only can it easily generate objects to and from SQL Server through Visual Studio, but it also lets you work with objects instead of SQL scripting. Since it does not use raw SQL, SQL injection (an attack involving injecting SQL commands through input fields) can be counteracted. Even though some of these techniques help mitigate many attack vectors, the gateway to backend processes is usually the website.
There are many more injection attack vectors. If stored procedures are used with SQL Server, a scan can be tried against any stored procedures that the website may be calling, as well as against any default stored procedures lingering from default SQL Server installations. So how do we add further validation and decouple an organization's backend processes from the website?

NServiceBus to the rescue

NServiceBus is the most popular C# platform framework used to implement an Enterprise Service Bus (ESB) for service-oriented architecture (SOA). Basically, NSB hosts Windows services through its NServiceBus.Host.exe program and interfaces these services through different message queuing components. A C# MVC-EF program can call web services directly, and when a web service returns an error, the website receives the error directly in the MVC program. This creates a coupling between the web service and the website, where changes in the website can affect the web services, and actions in the web services can affect the website. Because of this coupling, websites may show a "Please do not refresh the page until the process is finished" warning. Normally, it is wise to step away from the phone, tablet, or computer until the page has loaded. Even if you do not touch the website, another process running on the machine may: a virus scanner, an update, or any of several other processes running on the device could cause a glitch in the refreshing of the page.

With all the scans that could be happening on a website, and everything that others on the Internet could be doing, it seems quite odd for a page to say "Please don't touch me, I am busy." In order to decouple the website from the web services, a service needs to be deployed between the website and the web service. It helps if that service has a lot of out-of-the-box security features as well, to help protect the interaction between the website and the web service.
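As a language-neutral illustration of this decoupling idea (a plain JavaScript sketch, not NServiceBus; the message shape and function names are hypothetical), the frontend drops a message onto a queue and returns immediately, while a separate worker drains the queue on its own schedule:

```javascript
// Minimal sketch of fire-and-forget decoupling through a queue.
// The "bus" here is just an in-memory array standing in for a message queue.
const queue = [];

// The website enqueues a message and returns immediately.
function submitPayment(orderId, amount) {
  queue.push({ type: 'ProcessPayment', orderId: orderId, amount: amount });
  return 'accepted'; // the page never blocks on the backend
}

// A separate worker service drains the queue independently of the website.
function drainQueue(handler) {
  const processed = [];
  while (queue.length > 0) {
    processed.push(handler(queue.shift()));
  }
  return processed;
}

console.log(submitPayment(42, 19.99));   // 'accepted'
console.log(drainQueue(m => m.orderId)); // [ 42 ]
```

The key property is that the page's response time no longer depends on the backend call succeeding, which is what removes the "do not refresh" coupling.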
For this reason, a product such as NServiceBus is most helpful, where others have already laid the groundwork for advanced security features in services proven throughout the industry by their use. Being the most common C# ESB platform has its advantages, as developers and architects have ensured the integrity of the framework well before a new design starts using it.

Benefits of NSB

NSB provides many components needed for automation that are only found in ESBs. ESBs provide the following:

- Separation of duties: There is separation of duties from the frontend to the backend, allowing the frontend to fire a message at a service and continue with its own processing, not worrying about the results until it needs an update. There is also separation of workflow responsibility through separating out NSB services: one service could be used to send payments to a bank, and another service could provide feedback on the current status of a payment to the MVC-EF database, so that a user may see their payment status.
- Message durability: Messages are saved in queues between services so that, if services are stopped, they can start from the messages in the queues when they restart, and the messages persist until told otherwise.
- Workflow retries: Messages, or endpoints, can be told to retry a number of times until they completely fail and send an error. The error is automatically returned to an error queue. For instance, a web service message can be sent to a bank and set to retry the web service every 5 minutes for 20 minutes before giving up completely. This is useful during any network or server issues.
- Monitoring: NSB's ServicePulse can keep a heartbeat on its services. Other monitoring can easily be done on the NSB queues to report on the number of messages.
- Encryption: Messages between services and endpoints can be easily encrypted.
- High availability: Multiple services or subscribers could be processing the same or similar messages from various services living on different servers. When one server or service goes down, others can be made available to take over from those already running.

Summary

If any website is on the Internet, it is being scanned by a multitude of means, from many sources. It is wise to decouple external websites from backend processes through a means such as NServiceBus. Websites that are not decoupled from the backend can be affected by the external processes they invoke, such as a web service that validates a credit card; these are the websites that may say "Do not refresh this page." Other conditions, beyond your reach, might affect that interaction by refreshing the page. The best solution is to decouple the website from these processes through NServiceBus.

Resources for Article:

Further resources on this subject:

- Mobile Game Design [Article]
- CryENGINE 3: Breaking Ground with Sandbox [Article]
- CryENGINE 3: Fun Physics [Article]

Packt
05 Feb 2015
19 min read

Transformations Using Map/Reduce

In this article written by Adam Boduch, author of the book Lo-Dash Essentials, we'll be looking at all the interesting things we can do with Lo-Dash and the map/reduce programming model. We'll start off with the basics, getting our feet wet with some basic mappings and basic reductions. As we progress through the article, we'll introduce more advanced techniques for thinking in terms of map/reduce with Lo-Dash. The goal, once you've reached the end of this article, is to have a solid understanding of the Lo-Dash functions available to aid in mapping and reducing collections. Additionally, you'll start to notice how disparate Lo-Dash functions work together in the map/reduce domain. Ready?

(For more resources related to this topic, see here.)

Plucking values

Consider this your informal introduction to mapping, because that's essentially what plucking does: it takes an input collection and maps it to a new collection, plucking only the properties we're interested in. This is shown in the following example:

var collection = [
  { name: 'Virginia', age: 45 },
  { name: 'Debra', age: 34 },
  { name: 'Jerry', age: 55 },
  { name: 'Earl', age: 29 }
];

_.pluck(collection, 'age');
// → [ 45, 34, 55, 29 ]

This is about as simple a mapping operation as you'll find. In fact, you can do the same thing with map():

var collection = [
  { name: 'Michele', age: 58 },
  { name: 'Lynda', age: 23 },
  { name: 'William', age: 35 },
  { name: 'Thomas', age: 41 }
];

_.map(collection, 'name');
// →
// [
//   "Michele",
//   "Lynda",
//   "William",
//   "Thomas"
// ]

As you'd expect, the output here is exactly the same as it would be with pluck(). In fact, pluck() actually uses the map() function under the hood. The callback passed to map() is constructed using property(), which just returns the specified property value. The map() function falls back to this plucking behavior when a string, instead of a function, is passed to it.
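The fallback just described can be sketched in a few lines of plain JavaScript. This is not Lo-Dash's actual source, just a minimal illustration of how pluck() can be expressed in terms of map() and a property() helper, using Array.prototype.map instead of the library:

```javascript
// property(key) builds a callback that reads one property from an object.
function property(key) {
  return function (obj) { return obj[key]; };
}

// pluck() is then just map() with a property() callback.
function pluck(collection, key) {
  return collection.map(property(key));
}

var people = [
  { name: 'Virginia', age: 45 },
  { name: 'Debra', age: 34 }
];

console.log(pluck(people, 'age')); // → [ 45, 34 ]
```

This is why passing a string to map() behaves like pluck(): the string is simply wrapped into a property-reading callback first.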
With that brief introduction to the nature of mapping, let's dig a little deeper and see what's possible when mapping collections.

Mapping collections

In this section, we'll explore mapping collections. Mapping one collection to another ranges from the really simple, as we saw in the preceding section, to sophisticated callbacks. The callbacks that map each item in the collection can include or exclude properties and can calculate new values. Besides, we can apply functions to these items. We'll also address the issue of filtering collections and how this can be done in conjunction with mapping.

Including and excluding properties

When applied to an object, the pick() function generates a new object containing only the specified properties. The opposite of this function, omit(), generates an object with every property except those specified. Since these functions work fine for individual object instances, why not use them on a collection? You can use both of these functions to shed properties from collections by mapping them to new ones, as shown in the following code:

var collection = [
  { first: 'Ryan', last: 'Coleman', age: 23 },
  { first: 'Ann', last: 'Sutton', age: 31 },
  { first: 'Van', last: 'Holloway', age: 44 },
  { first: 'Francis', last: 'Higgins', age: 38 }
];

_.map(collection, function(item) {
  return _.pick(item, [ 'first', 'last' ]);
});
// →
// [
//   { first: "Ryan", last: "Coleman" },
//   { first: "Ann", last: "Sutton" },
//   { first: "Van", last: "Holloway" },
//   { first: "Francis", last: "Higgins" }
// ]

Here, we're creating a new collection using the map() function. The callback function supplied to map() is applied to each item in the collection. The item argument is the original item from the collection. The callback is expected to return the mapped version of that item, and this version could be anything, including the original item itself. Be careful when manipulating the original item in map() callbacks.
If the item is an object and it's referenced elsewhere in your application, manipulating it could have unintended consequences. We're returning a new object as the mapped item in the preceding code. This is done using the pick() function; we only care about the first and last properties. Our newly mapped collection looks identical to the original, except that no item has an age property.

The omit() function can be used in the same way, as seen in the following code:

var collection = [
  { first: 'Clinton', last: 'Park', age: 19 },
  { first: 'Dana', last: 'Hines', age: 36 },
  { first: 'Pete', last: 'Ross', age: 31 },
  { first: 'Annie', last: 'Cross', age: 48 }
];

_.map(collection, function(item) {
  return _.omit(item, 'first');
});
// →
// [
//   { last: "Park", age: 19 },
//   { last: "Hines", age: 36 },
//   { last: "Ross", age: 31 },
//   { last: "Cross", age: 48 }
// ]

The preceding code follows the same approach as the pick() code. The only difference is that we're excluding the first property from the newly created collection. You'll also notice that we're passing a string containing a single property name, instead of an array of property names. In addition to passing strings or arrays as the argument to pick() or omit(), we can pass in a function callback. This is suitable when it's not very clear which objects in a collection should have which properties.
Using a callback like this inside a map() callback lets us perform detailed comparisons and transformations on collections while using very little code:

function invalidAge(value, key) {
  return key === 'age' && value < 40;
}

var collection = [
  { first: 'Kim', last: 'Lawson', age: 40 },
  { first: 'Marcia', last: 'Butler', age: 31 },
  { first: 'Shawna', last: 'Hamilton', age: 39 },
  { first: 'Leon', last: 'Johnston', age: 67 }
];

_.map(collection, function(item) {
  return _.omit(item, invalidAge);
});
// →
// [
//   { first: "Kim", last: "Lawson", age: 40 },
//   { first: "Marcia", last: "Butler" },
//   { first: "Shawna", last: "Hamilton" },
//   { first: "Leon", last: "Johnston", age: 67 }
// ]

The new collection generated by this code excludes the age property for items where the age value is less than 40. The callback supplied to omit() is applied to each key-value pair in the object. This code is a good illustration of the conciseness achievable with Lo-Dash: there's a lot of iterative code running here, with no for or while statement in sight.

Performing calculations

It's time now to turn our attention to performing calculations in our map() callbacks. This entails looking at the item and, based on its current state, computing a new value that will ultimately be mapped to the new collection. This could mean extending the original item's properties or replacing one with a newly computed value. Whichever the case, it's a lot easier to map these computations than to write your own logic that applies these functions to every item in your collection. This is explained using the following example:

var collection = [
  { name: 'Valerie', jqueryYears: 4, cssYears: 3 },
  { name: 'Alonzo', jqueryYears: 1, cssYears: 5 },
  { name: 'Claire', jqueryYears: 3, cssYears: 1 },
  { name: 'Duane', jqueryYears: 2, cssYears: 0 }
];

_.map(collection, function(item) {
  return _.extend({
    experience: item.jqueryYears + item.cssYears,
    specialty: item.jqueryYears >= item.cssYears ?
      'jQuery' : 'CSS'
  }, item);
});
// →
// [
//   {
//     experience: 7,
//     specialty: "jQuery",
//     name: "Valerie",
//     jqueryYears: 4,
//     cssYears: 3
//   },
//   {
//     experience: 6,
//     specialty: "CSS",
//     name: "Alonzo",
//     jqueryYears: 1,
//     cssYears: 5
//   },
//   {
//     experience: 4,
//     specialty: "jQuery",
//     name: "Claire",
//     jqueryYears: 3,
//     cssYears: 1
//   },
//   {
//     experience: 2,
//     specialty: "jQuery",
//     name: "Duane",
//     jqueryYears: 2,
//     cssYears: 0
//   }
// ]

Here, we're mapping each item in the original collection to an extended version of it. In particular, we're computing two new values for each item: experience and specialty. The experience property is simply the sum of the jqueryYears and cssYears properties. The specialty property is computed based on the larger of the jqueryYears and cssYears properties.

Earlier, I mentioned the need to be careful when modifying items in map() callbacks. In general, it's a bad idea. It helps to remember that map() is used to generate new collections, not to modify existing ones. Here's an illustration of the horrific consequences of not being careful:

var app = {},
    collection = [
      { name: 'Cameron', supervisor: false },
      { name: 'Lindsey', supervisor: true },
      { name: 'Kenneth', supervisor: false },
      { name: 'Caroline', supervisor: true }
    ];

app.supervisor = _.find(collection, { supervisor: true });

_.map(collection, function(item) {
  return _.extend(item, { supervisor: false });
});

console.log(app.supervisor);
// → { name: "Lindsey", supervisor: false }

The destructive nature of this callback is not obvious at all, and next to impossible for programmers to track down and diagnose: it resets the supervisor attribute of every item. If these items are used anywhere else in the application, the supervisor property value will be clobbered whenever this map job is executed.
If you need to reset values like this, ensure that the change is mapped to a new value and not made to the original. Mapping also works with primitive values as items. Often, we'll have an array of primitive values that we'd like transformed into an alternative representation. For example, let's say you have an array of sizes expressed in bytes. You can map that array to a new collection with the sizes expressed as human-readable values, using the following code:

function bytes(b) {
  var units = [ 'B', 'K', 'M', 'G', 'T', 'P' ],
      target = 0;
  while (b >= 1024) {
    b = b / 1024;
    target++;
  }
  return (b % 1 === 0 ? b : b.toFixed(1)) +
    units[target] + (target === 0 ? '' : 'B');
}

var collection = [
  1024,
  1048576,
  345198,
  120120120
];

_.map(collection, bytes);
// → [ "1KB", "1MB", "337.1KB", "114.6MB" ]

The bytes() function takes a numerical argument: the number of bytes to be formatted, in the starting unit. We just keep incrementing the target unit until we have a value less than 1024. For example, the last item in our collection maps to '114.6MB'. The bytes() function can be passed directly to map(), since it expects values just as they appear in our collection.

Calling functions

We don't always have to write our own callback functions for map(). Wherever it makes sense, we're free to leverage Lo-Dash functions to map our collection items. For example, let's say we have a collection and we'd like to know the size of each item. There's a size() Lo-Dash function we can use as our map() callback, as follows:

var collection = [
  [ 1, 2 ],
  [ 1, 2, 3 ],
  { first: 1, second: 2 },
  { first: 1, second: 2, third: 3 }
];

_.map(collection, _.size);
// → [ 2, 3, 2, 3 ]

This code has the added benefit that the size() function returns consistent results no matter what kind of argument is passed to it. In fact, any function that takes a single argument and returns a new value based on that argument is a valid candidate for a map() callback.
For instance, we could also map the minimum and maximum value of each item:

var source = _.range(1000),
    collection = [
      _.sample(source, 50),
      _.sample(source, 100),
      _.sample(source, 150)
    ];

_.map(collection, _.min);
// → [ 20, 21, 1 ]

_.map(collection, _.max);
// → [ 931, 985, 991 ]

What if we want to map each item of our collection to a sorted version? Since we're not sorting the collection itself, we don't care about the positions of items within the collection, but about the items themselves, if they're arrays, for instance. Let's see what happens with the following code:

var collection = [
  [ 'Evan', 'Veronica', 'Dana' ],
  [ 'Lila', 'Ronald', 'Dwayne' ],
  [ 'Ivan', 'Alfred', 'Doug' ],
  [ 'Penny', 'Lynne', 'Andy' ]
];

_.map(collection, _.compose(_.first, function(item) {
  return _.sortBy(item);
}));
// → [ "Dana", "Dwayne", "Alfred", "Andy" ]

This code uses the compose() function to construct a map() callback. The first function returns the sorted version of the item by passing it to sortBy(). The first() item of this sorted list is then returned as the mapped item. The end result is a new collection containing the alphabetically first item from each array in our collection, in three lines of code. Not bad.

Filtering and mapping

Filtering and mapping are two closely related collection operations. Filtering extracts only those collection items that are of interest in a given context. Mapping transforms collections to produce new ones. But what if you only want to map a certain subset of your collection? Then it would make sense to chain the filtering and mapping operations together, right?
Here's an example of what that might look like:

var collection = [
  { name: 'Karl', enabled: true },
  { name: 'Sophie', enabled: true },
  { name: 'Jerald', enabled: false },
  { name: 'Angie', enabled: false }
];

_.compose(
  _.partialRight(_.map, 'name'),
  _.partialRight(_.filter, 'enabled')
)(collection);
// → [ "Karl", "Sophie" ]

This map is executed using compose() to build a function that is called right away, with our collection as the argument. The function is composed of two partials. We're using partialRight() on both arguments because we want the collection supplied as the leftmost argument in both cases. The first partial function is filter(). We're partially applying the enabled argument, so this function will filter our collection before it's passed to map(). This brings us to the next partial in the function composition. The result of filtering the collection is passed to map(), which has the name argument partially applied. The end result is a collection of enabled name strings.

The important thing to note about the preceding code is that the filtering operation takes place before the map() function is run. We could have stored the filtered collection in an intermediate variable instead of streamlining with compose(). Regardless of flavor, it's important that the items in your mapped collection correspond to the items in the source collection. It's conceivable to filter out items in the map() callback by not returning anything, but this is ill-advised, as it doesn't map well, both figuratively and literally.

Mapping objects

The previous section focused on collections and how to map them. But wait, objects are collections too, right? That is indeed correct, but it's worth differentiating between the more traditional collections (arrays) and plain objects. The main reason is that there are implications with ordering and keys when performing map/reduce.
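The same filter-then-map pipeline can be sketched in vanilla JavaScript, without Lo-Dash, to make the mechanics explicit. This is an illustrative sketch, not the library's implementation; note that compose() applies its functions right-to-left, so the filter runs before the map:

```javascript
// compose() applies functions right-to-left: the last one runs first.
const compose = (...fns) => x => fns.reduceRight((acc, fn) => fn(acc), x);

const collection = [
  { name: 'Karl', enabled: true },
  { name: 'Sophie', enabled: true },
  { name: 'Jerald', enabled: false },
  { name: 'Angie', enabled: false }
];

const enabledNames = compose(
  items => items.map(item => item.name),      // runs second
  items => items.filter(item => item.enabled) // runs first
);

console.log(enabledNames(collection)); // → [ "Karl", "Sophie" ]
```

Reading the composition bottom-up matches the data flow: disabled items are dropped first, so the mapping step only ever sees the subset you care about.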
At the end of the day, arrays and objects serve different use cases with map/reduce, and this article tries to acknowledge these differences. Now we'll start looking at some techniques Lo-Dash programmers employ when working with objects and mapping them to collections. There are a number of factors to consider, such as the keys within an object and calling methods on objects. We'll take a look at the relationship between key-value pairs and how they can be used in a mapping context.

Working with keys

We can use the keys of a given object in interesting ways to map the object to a new collection. For example, we can use the keys() function to extract the keys of an object and map them to values other than the property value, as shown in the following example:

var object = {
  first: 'Ronald',
  last: 'Walters',
  employer: 'Packt'
};

_.map(_.sortBy(_.keys(object)), function(item) {
  return object[item];
});
// → [ "Packt", "Ronald", "Walters" ]

The preceding code builds an array of property values from object. It does so using map(), which is actually mapping the keys() array of object. These keys are sorted using sortBy(), so Packt is the first element of the resulting array because employer is alphabetically first among the object's keys.

Sometimes, it's desirable to perform lookups in other objects and map those values to a target object. For example, not all APIs return everything you need for a given page, packaged in a neat little object. You have to do joins and build the data you need. This is shown in the following code:

var users = {},
    preferences = {};

_.each(_.range(100), function() {
  var id = _.uniqueId('user-');
  users[id] = { type: 'user' };
  preferences[id] = { emailme: !!(_.random()) };
});

_.map(users, function(value, key) {
  return _.extend({
    id: key
  }, preferences[key]);
});
// →
// [
//   { id: "user-1", emailme: true },
//   { id: "user-2", emailme: false },
//   ...
// ]

This example builds two objects, users and preferences.
In the case of each object, the keys are user identifiers that we're generating with uniqueId(). The user objects just have some dummy attribute in them, while the preferences objects have an emailme attribute, set to a random Boolean value. Now let's say we need quick access to this preference for all users in the users object. As you can see, it's straightforward to implement using map() on the users object. The callback function returns a new object with the user ID. We extend this object with the preference for that particular user by looking it up by key.

Calling methods

Objects aren't limited to storing primitive strings and numbers. Properties can store functions as their values, or methods, as they're commonly called. However, depending on the context where you're using your object, methods aren't always callable, especially if you have little or no control over the context where your objects are used. One technique that's helpful in situations such as these is mapping the result of calling these methods and using that result in the context in question. Let's see how this can be done with the following code:

var object = {
  first: 'Roxanne',
  last: 'Elliot',
  name: function() {
    return this.first + ' ' + this.last;
  },
  age: 38,
  retirement: 65,
  working: function() {
    return this.retirement - this.age;
  }
};

_.map(object, function(value, key) {
  var item = {};
  item[key] = _.isFunction(value) ? object[key]() : value;
  return item;
});
// →
// [
//   { first: "Roxanne" },
//   { last: "Elliot" },
//   { name: "Roxanne Elliot" },
//   { age: 38 },
//   { retirement: 65 },
//   { working: 27 }
// ]

_.map(object, function(value, key) {
  var item = {};
  item[key] = _.result(object, key);
  return item;
});
// →
// [
//   { first: "Roxanne" },
//   { last: "Elliot" },
//   { name: "Roxanne Elliot" },
//   { age: 38 },
//   { retirement: 65 },
//   { working: 27 }
// ]

Here, we have an object with both primitive property values and methods that use those properties.
Now we'd like to map the results of calling those methods, and we will experiment with two different approaches. The first approach uses the isFunction() function to determine whether the property value is callable or not. If it is, we call it and return that value. The second approach is a little easier to implement and achieves the same outcome. The result() function is applied to the object using the current key. This tests whether we're working with a function or not, so our code doesn't have to.

In the first approach to mapping method invocations, you might have noticed that we're calling the method using object[key]() instead of value(). The former retains the context as the object variable, but the latter loses the context, since it is invoked as a plain function without any object. So when you're writing mapping callbacks that call methods and you're not getting the expected results, make sure the method's context is intact.

Perhaps you have an object but you're not sure which properties are methods. You can use functions() to figure this out and then map the results of calling each method to an array, as shown in the following code:

var object = {
  firstName: 'Fredrick',
  lastName: 'Townsend',
  first: function() {
    return this.firstName;
  },
  last: function() {
    return this.lastName;
  }
};

var methods = _.map(_.functions(object), function(item) {
  return [ _.bindKey(object, item) ];
});

_.invoke(methods, 0);
// → [ "Fredrick", "Townsend" ]

The object variable has two methods, first() and last(). Assuming we didn't know about these methods, we can find them using functions(). Here, we're building a methods array using map(). The input is an array containing the names of all the methods of the given object. The value we're returning is interesting. It's a single-value array; you'll see why in a moment. The value of this array is a function built by passing the object and the name of the method to bindKey(). This function, when invoked, will always use object as its context.
Lastly, we use invoke() to invoke each method in our methods array, building a new result array. Recall that our map() callback returned an array. This was a simple hack to make invoke() work, since it's a convenient way to call methods. It generally expects a key as the second argument, but a numerical index works just as well, since they're both looked up the same way.

Mapping key-value pairs

Just because you're working with an object doesn't mean it's ideal, or even necessary. That's what map() is for: mapping what you're given to what you need. For instance, the property values are sometimes all that matter for what you're doing, and you can dispense with the keys entirely. For that, we have the values() function, and we feed the values to map():

var object = {
  first: 'Lindsay',
  last: 'Castillo',
  age: 51
};

_.map(_.filter(_.values(object), _.isString), function(item) {
  return '<strong>' + item + '</strong>';
});
// → [ "<strong>Lindsay</strong>", "<strong>Castillo</strong>" ]

All we want from the object variable here is a list of property values, which are strings, so that we can format them. In other words, the fact that the keys are first, last, and age is irrelevant. So first, we call values() to build an array of values. Next, we pass that array to filter(), removing anything that's not a string. We then pass the output of this to map(), where we wrap each string in <strong> tags. The opposite might also be true: the value is completely meaningless without its key.
If that's the case, it may be fitting to map key-value pairs to a new collection, as shown in the following example:

function capitalize(s) {
  return s.charAt(0).toUpperCase() + s.slice(1);
}

function format(label, value) {
  return '<label>' + capitalize(label) + ':</label>' +
    '<strong>' + value + '</strong>';
}

var object = {
  first: 'Julian',
  last: 'Ramos',
  age: 43
};

_.map(_.pairs(object), function(pair) {
  return format.apply(undefined, pair);
});
// →
// [
//   "<label>First:</label><strong>Julian</strong>",
//   "<label>Last:</label><strong>Ramos</strong>",
//   "<label>Age:</label><strong>43</strong>"
// ]

We're passing the result of running our object through the pairs() function to map(). The argument passed to our map callback function is an array, the first element being the key and the second being the value. It so happens that the format() function expects a key and a value to format the given string, so we're able to use format.apply() to call the function, passing it the pair array. This approach is just a matter of taste. There's no need to call pairs() before map(); we could just as easily have called format directly. But sometimes this approach is preferred, and the reasons, not least of which is the style of the programmer, are wide and varied.

Summary

This article introduced you to the map/reduce programming model and how Lo-Dash tools help realize it in your application. First, we examined mapping collections, including how to choose which properties get included and how to perform calculations. We then moved on to mapping objects. Keys can have an important role in how objects get mapped to new objects and collections. There are also methods and functions to consider when mapping.

Resources for Article:

Further resources on this subject:
The First Step [article]
Recursive directives [article]
AngularJS Project [article]
Packt
05 Feb 2015
9 min read

Building the next generation Web with Meteor

This article by Fabian Vogelsteller, the author of Building Single-page Web Apps with Meteor, explores the full-stack framework Meteor. Meteor is not just a JavaScript library such as jQuery or AngularJS. It's a full-stack solution that contains frontend libraries, a Node.js-based server, and a command-line tool. All this together lets us write large-scale web applications in JavaScript, on both the server and the client, using a consistent API. (For more resources related to this topic, see here.)

Even though Meteor is quite young, a few companies such as https://lookback.io, https://respond.ly, and https://madeye.io already use it in their production environments. If you want to see for yourself what's made with Meteor, take a look at http://madewith.meteor.com.

Meteor makes it easy for us to build web applications quickly and takes care of boring processes such as linking, minifying, and concatenating files. Here are a few highlights of what is possible with Meteor:

We can build complex web applications amazingly fast using templates that automatically update themselves when data changes
We can push new code to all clients on the fly while they are using our app
Meteor's core packages come with a complete account solution, allowing seamless integration with Facebook, Twitter, and more
Data is automatically synced across clients, keeping every client in the same state in almost real time
Latency compensation makes our interface appear super fast while the server response happens in the background

With Meteor, we never have to link files with <script> tags in HTML. Meteor's command-line tool automatically collects the JavaScript and CSS files in our application's folder and links them in the index.html file, which is served to clients on the initial page load. This makes structuring our code in separate files as easy as creating them.
Meteor's command-line tool also watches all files inside our application's folder for changes and rebuilds them on the fly. Additionally, it starts a Meteor server that serves the app's files to the clients. When a file changes, Meteor reloads the site in every client while preserving its state; this is called a hot code reload. In production, the build process also concatenates and minifies our CSS and JavaScript files. By simply adding the less and coffee core packages, we can even write all our styles in LESS and our code in CoffeeScript with no extra effort. The command-line tool is also the tool for deploying and bundling our app so that we can run it on a remote server. Sounds awesome? Let's take a look at what's needed to use Meteor.

Adding basic packages

Packages in Meteor are libraries that can be added to our projects. The nice thing about Meteor packages is that they are self-contained units that run out of the box. They mostly either add some templating functionality or provide extra objects in the global namespace of our project. Packages can also add features to Meteor's build process, like the stylus package, which lets us write our app's style files in the Stylus preprocessor syntax.

Writing templates in Meteor

Normally, when we build websites, we build the complete HTML on the server side. This is quite straightforward: every page is built on the server, then sent to the client, and at last JavaScript adds some additional animation or dynamic behavior to it. This is not so in single-page apps, where each page needs to already be in the client's browser so that it can be shown at will. Meteor solves that problem by providing templates that exist in JavaScript and can be placed in the DOM at some point. These templates can have nested templates, allowing for an easy way to reuse and structure an app's HTML layout.
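To make the nesting idea concrete, here is a minimal sketch of what two Spacebars templates might look like in an HTML file. The template names, the posts helper, and the markup are illustrative assumptions, not code from a real app:

```html
<!-- A layout template that includes another template with {{> ...}} -->
<template name="layout">
  <h1>My Blog</h1>
  {{> postsList}}
</template>

<!-- The nested template; "posts" would be a template helper -->
<template name="postsList">
  <ul>
    {{#each posts}}
      <li>{{title}}</li>
    {{/each}}
  </ul>
</template>
```

The {{> postsList}} inclusion tag is what nests one template inside another, which is how larger layouts are assembled from small, reusable pieces.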
Since Meteor is so flexible in terms of folder and file structure, any *.html page can contain a template and will be parsed during Meteor's build process. This allows us to put all templates in the my-meteor-blog/client/templates folder. This folder structure is chosen because it helps us organize templates while our app grows. Meteor's template engine is called Spacebars, which is a derivative of the Handlebars template engine. Spacebars is built on top of Blaze, which is Meteor's reactive DOM update engine.

Meteor and databases

Meteor currently uses MongoDB by default to store data on the server, although drivers are planned for relational databases, too. If you are adventurous, you can try one of the community-built SQL drivers, such as the numtel:mysql package from https://atmospherejs.com/numtel/mysql. MongoDB is a NoSQL database. This means it is based on a flat document structure instead of a relational table structure. Its document approach makes it ideal for JavaScript, as documents are written in BSON, a format very similar to JSON.

Meteor has a database everywhere approach, which means we have the same API to query the database on the client as well as on the server. Yet, when we query the database on the client, we are only able to access data that has been published to that client.

MongoDB uses a data structure called a collection, which is the equivalent of a table in an SQL database. Collections contain documents, and each document has its own unique ID. These documents are JSON-like structures and can contain properties with values, even with multiple dimensions:

{
  "_id": "W7sBzpBbov48rR7jW",
  "myName": "My Document Name",
  "someProperty": 123456,
  "aNestedProperty": {
    "anotherOne": "With another string"
  }
}

These collections are used to store data in the server's MongoDB as well as in the client-side minimongo collections, an in-memory database mimicking the behavior of the real MongoDB.
The MongoDB API lets us use a simple JSON-based query language to get documents from a collection. We can pass additional options to ask only for specific fields or to sort the returned documents. These are very powerful features, especially on the client side, for displaying data in various ways.

Data everywhere

In Meteor, we can use the browser console to update data, which means we update the database from the client. This works because Meteor automatically syncs these changes to the server and updates the database accordingly. It happens because we have the autopublish and insecure core packages added to our project by default. The autopublish package automatically publishes all documents to every client, whereas the insecure package allows every client to update database records by their _id field. Obviously, this works well for prototyping but is infeasible for production, as every client could manipulate our database. If we remove the insecure package, we need to add allow and deny rules to determine what a client is allowed to update and what not; otherwise, all updates are denied.

Differences between client and server collections

Meteor has a database everywhere approach. This means it provides the same API on the client as on the server. The data flow is controlled using a publication/subscription model. On the server sits the real MongoDB database, which stores data persistently. On the client, Meteor has a package called minimongo, a pure in-memory database mimicking most of MongoDB's query and update functions. Every time a client connects to its Meteor server, Meteor downloads the documents the client has subscribed to and stores them in its local minimongo database. From here, they can be displayed in a template or processed by functions. When the client updates a document, Meteor syncs it back to the server, where it is passed through any allow/deny functions before being persistently stored in the database.
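The selector-plus-options query style mentioned above can be sketched in a few lines of plain JavaScript. This toy find() function is only a stand-in to illustrate the idea; it is not minimongo's actual implementation, and the document fields are made up:

```javascript
// Toy version of a JSON-selector query: match documents on exact
// field values, then optionally sort by a field named in the options.
function find(docs, selector, options) {
  var results = docs.filter(function (doc) {
    return Object.keys(selector).every(function (key) {
      return doc[key] === selector[key];
    });
  });
  if (options && options.sort) {
    results = results.slice().sort(function (a, b) {
      if (a[options.sort] < b[options.sort]) { return -1; }
      if (a[options.sort] > b[options.sort]) { return 1; }
      return 0;
    });
  }
  return results;
}

var posts = [
  { _id: '1', owner: 'alice', title: 'Second post' },
  { _id: '2', owner: 'bob',   title: 'Hello' },
  { _id: '3', owner: 'alice', title: 'First post' }
];

console.log(find(posts, { owner: 'alice' }, { sort: 'title' }));
// → [ { _id: "3", ... }, { _id: "1", ... } ]
```

The real API supports far richer selectors and options, but the shape of a query, a selector object plus an options object, is the same on both the client and the server.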
This also works the other way around: when a document in the server-side database changes, it is automatically synced to every client that is subscribed to it, keeping every connected client up to date.

Syncing data – the current Web versus the new Web

In the current Web, most pages are either static files hosted on a server or dynamically generated by the server on request. This is true for most server-side-rendered websites, for example, those written with PHP, Rails, or Django. Both of these techniques require no effort from the client besides displaying the result; such clients are therefore called thin clients.

In modern web applications, the idea of the browser has moved from thin clients to fat clients. This means most of the website's logic resides on the client, and the client asks for the data it needs. Currently, this is mostly done via calls to an API server. The API server then returns data, commonly in JSON form, giving the client an easy way to handle it and use it appropriately. Most modern websites are a mixture of thin and fat clients. Normal pages are server-side rendered, and only some functionality, such as a chat box or a news feed, is updated using API calls.

Meteor, however, is built on the idea that it's better to use the computation power of all clients instead of a single server. A pure fat client, or single-page app, contains the entire logic of a website's frontend, which is sent down on the initial page load. The server then merely acts as a data source, sending only data to the clients. This can happen by connecting to an API and utilizing AJAX calls or, as with Meteor, by using a model called publication/subscription. In this model, the server offers a range of publications, and each client decides which datasets it wants to subscribe to. Compared with AJAX calls, the developer doesn't have to take care of any downloading or uploading logic.
The Meteor client syncs all of the data automatically in the background as soon as it subscribes to a specific dataset. When data on the server changes, the server sends the updated documents to the clients and vice versa, as shown in the following diagram:

Summary

Meteor comes with more great ways of building pure JavaScript applications, such as simple routing and simple ways to make components that can be packaged for others to use. Meteor's reactivity model, which allows you to rerun any function and template helper at will, allows for consistent interfaces and simple dependency tracking, which is key for large-scale JavaScript applications. If you want to dig deeper, buy the book and read how to build your own blog as a single-page web application in a simple step-by-step fashion by using Meteor, the next generation Web!

Resources for Article:

Further resources on this subject:
Quick start - creating your first application [article]
Meteor.js JavaScript Framework: Why Meteor Rocks! [article]
Marionette View Types and Their Use [article]

Packt
05 Feb 2015
11 min read

Google App Engine

In this article by Massimiliano Pippi, author of the book Python for Google App Engine, you will learn how to write a web application and see the platform in action. Web applications commonly provide a set of features such as user authentication and data storage. App Engine provides the services and tools needed to implement such features. (For more resources related to this topic, see here.)

In this article, we will see:

Details of the webapp2 framework
How to authenticate users
Storing data on Google Cloud Datastore
Building HTML pages using templates

Experimenting on the Notes application

To better explore App Engine and Cloud Platform capabilities, we need a real-world application to experiment on; something that's not trivial to write, with a reasonable list of requirements. A good candidate is a note-taking application; we will name it Notes. Notes enables users to add, remove, and modify a list of notes; a note has a title and a body of text. Users can only see their personal notes, so they must authenticate before using the application. The main page of the application will show the list of notes for logged-in users and a form to add new ones. The code from the helloworld example is a good starting point. We can simply change the name of the root folder and the application field in the app.yaml file to match the new name we chose for the application, or we can start a new project from scratch named notes.
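For reference, a minimal app.yaml for the renamed project might look like the following. The application identifier and handler script are assumptions based on a typical helloworld layout, so adjust them to match your own project:

```yaml
# Hypothetical app.yaml for the Notes project (Python 2.7 runtime).
application: notes
version: 1
runtime: python27
api_version: 1
threadsafe: yes

handlers:
- url: /.*
  script: main.app
```

The application field is the piece that changes when renaming the project; the rest can usually be carried over from the helloworld example unchanged.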
Authenticating users

The first requirement for our Notes application is showing the home page only to users who are logged in and redirecting others to the login form; the users service provided by App Engine is exactly what we need, and adding it to our MainHandler class is quite simple:

import webapp2

from google.appengine.api import users


class MainHandler(webapp2.RequestHandler):
    def get(self):
        user = users.get_current_user()
        if user is not None:
            self.response.write('Hello Notes!')
        else:
            login_url = users.create_login_url(self.request.uri)
            self.redirect(login_url)

app = webapp2.WSGIApplication([
    ('/', MainHandler)
], debug=True)

The users package we import on the second line of the previous code provides access to the users service functionality. Inside the get() method of the MainHandler class, we first check whether the user visiting the page has logged in or not. If they have, the get_current_user() method returns an instance of the User class provided by App Engine, representing an authenticated user; otherwise, it returns None. If the user is valid, we provide the response as we did before; otherwise, we redirect them to the Google login form. The URL of the login form is returned by the create_login_url() method, and we call it passing as a parameter the URL we want to redirect users to after a successful authentication. In this case, we want to redirect users to the same URL they are visiting, provided by webapp2 in the self.request.uri property. The webapp2 framework also provides handlers with a redirect() method we can use to conveniently set the right status and location properties of the response object so that client browsers are redirected to the login page.

HTML templates with Jinja2

Web applications provide rich and complex HTML user interfaces, and Notes is no exception; so far, however, the response objects in our application have contained just small pieces of text.
We could include HTML tags as strings in our Python modules and write them in the response body, but we can imagine how easily the code could become messy and hard to maintain. We need to completely separate the Python code from the HTML pages, and that's exactly what a template engine does. A template is a piece of HTML code living in its own file and possibly containing additional, special tags; with the help of a template engine, we can load this file from the Python script, properly parse any special tags, and return valid HTML code in the response body. App Engine includes a well-known template engine in the Python runtime: the Jinja2 library.

To make the Jinja2 library available to our application, we need to add this code to the app.yaml file under the libraries section:

libraries:
- name: webapp2
  version: "2.5.2"
- name: jinja2
  version: latest

We can put the HTML code for the main page in a file called main.html inside the application root. We start with a very simple page:

<!DOCTYPE html>
<html>
<head lang="en">
    <meta charset="UTF-8">
    <title>Notes</title>
</head>
<body>
    <div class="container">
        <h1>Welcome to Notes!</h1>
        <p>
            Hello, <b>{{user}}</b> - <a href="{{logout_url}}">Logout</a>
        </p>
    </div>
</body>
</html>

Most of the content is static, which means it will be rendered as the standard HTML we see, but one part is dynamic, and its content depends on the data passed to the rendering process at runtime. This data is commonly referred to as the template context. The dynamic parts are the username of the current user and the link used to log out of the application. The HTML code contains two special elements written in the Jinja2 template syntax, {{user}} and {{logout_url}}, that will be substituted before the final output is produced.
Back to the Python script; we need to add the code that initializes the template engine before the MainHandler class definition:

import os

import jinja2

jinja_env = jinja2.Environment(
    loader=jinja2.FileSystemLoader(os.path.dirname(__file__)))

The environment instance stores the engine configuration and global objects, and it's used to load template instances; in our case, instances are loaded from HTML files on the filesystem in the same directory as the Python script. To load and render our template, we add the following code to the MainHandler.get() method:

class MainHandler(webapp2.RequestHandler):
    def get(self):
        user = users.get_current_user()
        if user is not None:
            logout_url = users.create_logout_url(self.request.uri)
            template_context = {
                'user': user.nickname(),
                'logout_url': logout_url,
            }
            template = jinja_env.get_template('main.html')
            self.response.out.write(
                template.render(template_context))
        else:
            login_url = users.create_login_url(self.request.uri)
            self.redirect(login_url)

Similar to how we get the login URL, the create_logout_url() method provided by the users service returns the absolute URI of the logout procedure, which we assign to the logout_url variable. We then create the template_context dictionary that contains the context values we want to pass to the template engine for the rendering process. We assign the nickname of the current user to the user key in the dictionary and the logout URL string to the logout_url key. The get_template() method of the jinja_env instance takes the name of the file that contains the HTML code and returns a Jinja2 template object. To obtain the final output, we call the render() method on the template object, passing in the template_context dictionary whose values are accessed by specifying their respective keys in the HTML file with the template syntax elements {{user}} and {{logout_url}}.
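To demystify what render() does with those {{user}} and {{logout_url}} placeholders, here is a toy substitution function in plain Python. It is a simplified stand-in, not Jinja2's implementation, and it only handles simple {{name}} placeholders:

```python
import re

def render(template, context):
    """Replace each {{key}} placeholder with its value from context."""
    def substitute(match):
        return str(context.get(match.group(1), ''))
    return re.sub(r'\{\{\s*(\w+)\s*\}\}', substitute, template)

html = 'Hello, <b>{{user}}</b> - <a href="{{logout_url}}">Logout</a>'
print(render(html, {'user': 'alice', 'logout_url': '/logout'}))
# → Hello, <b>alice</b> - <a href="/logout">Logout</a>
```

Jinja2 does far more than this (conditionals, loops, filters, escaping), but the core contract is the same: a template string plus a context dictionary in, a finished string out.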
Handling forms

The main page of the application is supposed to list all the notes that belong to the current user, but there isn't any way to create such notes at the moment. We need to display a web form on the main page so that users can submit details and create a note. To display a form to collect data and create notes, we put the following HTML code right below the username and the logout link in the main.html template file:

{% if note_title %}
<p>Title: {{note_title}}</p>
<p>Content: {{note_content}}</p>
{% endif %}

<h4>Add a new note</h4>
<form action="" method="post">
    <div class="form-group">
        <label for="title">Title:</label>
        <input type="text" id="title" name="title" />
    </div>
    <div class="form-group">
        <label for="content">Content:</label>
        <textarea id="content" name="content"></textarea>
    </div>
    <div class="form-group">
        <button type="submit">Save note</button>
    </div>
</form>

Before showing the form, a message is displayed, but only when the template context contains a variable named note_title. To do this, we use an if statement, executed between the {% if note_title %} and {% endif %} delimiters; similar delimiters are used to perform for loops or assign values inside a template. The action property of the form tag is empty; this means that upon form submission, the browser will perform a POST request to the same URL, which in this case is the home page URL.
As our WSGI application maps the home page to the MainHandler class, we need to add a method to this class so that it can handle POST requests:

class MainHandler(webapp2.RequestHandler):
    def get(self):
        user = users.get_current_user()
        if user is not None:
            logout_url = users.create_logout_url(self.request.uri)
            template_context = {
                'user': user.nickname(),
                'logout_url': logout_url,
            }
            template = jinja_env.get_template('main.html')
            self.response.out.write(
                template.render(template_context))
        else:
            login_url = users.create_login_url(self.request.uri)
            self.redirect(login_url)

    def post(self):
        user = users.get_current_user()
        if user is None:
            self.error(401)
            return
        logout_url = users.create_logout_url(self.request.uri)
        template_context = {
            'user': user.nickname(),
            'logout_url': logout_url,
            'note_title': self.request.get('title'),
            'note_content': self.request.get('content'),
        }
        template = jinja_env.get_template('main.html')
        self.response.out.write(
            template.render(template_context))

When the form is submitted, the handler is invoked and the post() method is called. We first check whether a valid user is logged in; if not, we set an HTTP 401 Unauthorized error and return without serving any content in the response body. Since the HTML template is the same one served by the get() method, we still need to add the logout URL and the username to the context. In this case, we also store the data coming from the HTML form in the context. To access the form data, we call the get() method on the self.request object. The last three lines are boilerplate code that loads and renders the home page template.
We can move this code into a separate method to avoid duplication:

def _render_template(self, template_name, context=None):
    if context is None:
        context = {}
    template = jinja_env.get_template(template_name)
    return template.render(context)

In the handler class, we will then use something like this to output the template rendering result:

self.response.out.write(
    self._render_template('main.html', template_context))

We can try to submit the form and check whether the note title and content are actually displayed above the form.

Summary

Thanks to App Engine, we have already implemented a rich set of features with relatively little effort. We have discovered some more details about the webapp2 framework and its capabilities, implementing a nontrivial request handler. We have learned how to use the App Engine users service to provide user authentication. We have delved into some fundamental details of Datastore, and now we know how to structure data in grouped entities and how to effectively retrieve data with ancestor queries. In addition, we have created an HTML user interface with the help of the Jinja2 template library, learning how to serve static content such as CSS files.

Resources for Article:

Further resources on this subject:
Machine Learning in IPython with scikit-learn [Article]
Introspecting Maya, Python, and PyMEL [Article]
Driving Visual Analyses with Automobile Data (Python) [Article]