
How-To Tutorials - Server-Side Web Development


Microsoft Azure – Developing Web API for Mobile Apps

Packt
03 Jun 2015
9 min read
Azure Websites is an excellent platform for deploying and managing a Web API. However, Microsoft Azure provides another alternative in the form of Azure Mobile Services, which targets mobile application developers. In this article by Nikhil Sachdeva, coauthor of the book Building Web Services with Microsoft Azure, we delve into the capabilities of Azure Mobile Services and how it provides a quick and easy development ecosystem for Web APIs that support mobile apps.

(For more resources related to this topic, see here.)

Creating a Web API using Mobile Services

In this section, we will create a Mobile Services-enabled Web API using Visual Studio 2013. For our fictitious scenario, we will create an Uber-like service, but for medical emergencies. In the case of a medical emergency, users will have the option to send a request using their mobile device. Additionally, third-party applications and services can integrate with the Web API to display doctor availability. All requests sent to the Web API will follow this process flow:

1. The request will be persisted to a data store.
2. An algorithm will find a doctor that matches the incoming request based on availability and proximity.
3. Push Notifications will be sent to update the physician and patient.

Creating the project

Mobile Services provides two options to create a project:

- Using the Management portal, we can create a new Mobile Service and download a preassembled package that contains the Web API as well as the targeted mobile platform project
- Using Visual Studio templates

The Management portal approach is easier to implement and does give a jumpstart by creating and configuring the project. However, for the scope of this article, we will use the Visual Studio template approach. For more information on creating a Mobile Services Web API using the Azure Management Portal, please refer to http://azure.microsoft.com/en-us/documentation/articles/mobile-services-dotnet-backend-windows-store-dotnet-get-started/.

Azure Mobile Services provides a Visual Studio 2013 template to create a .NET Web API; we will use this template for our scenario. Note that the Azure Mobile Services template is only available from Visual Studio 2013 Update 2 onward.

Creating a Mobile Service in Visual Studio 2013 requires the following steps:

1. Create a new Azure Mobile Service project and assign it a Name, Location, and Solution. Click OK.
2. In the next tab, we have a familiar ASP.NET project type dialog. However, we notice a few differences from the traditional ASP.NET dialog, which are as follows:
   - The Web API option is enabled by default and is the only choice available
   - The Authentication tab is disabled by default
   - The Test project option is disabled
   - The Host in the cloud option automatically suggests Mobile Services and is currently the only choice
3. Select the default settings and click on OK. Visual Studio 2013 prompts developers to enter their Azure credentials in case they are not already logged in.

For more information on Azure tools for Visual Studio, please visit https://msdn.microsoft.com/en-us/library/azure/ee405484.aspx.

Since we are building a new Mobile Service, the next screen gathers information about how to configure the service. We can specify existing Azure resources in our subscription or create new ones from within Visual Studio. Select the appropriate options and click on Create. The options are described here:

- Subscription: This lists the name of the Azure subscription where the service will be deployed. Select from the dropdown if multiple subscriptions are available.
- Name: This is the name of the Mobile Services deployment; it will eventually become the root DNS URL for the mobile service unless a custom domain is specified (for example, contoso.azure-mobile.net).
- Runtime: This allows selection of the runtime. Note that, as of writing this book, only the .NET framework was supported in Visual Studio, so this option is currently prepopulated and disabled.
- Region: Select the Azure data center where the Web API will be deployed. As of writing this book, Mobile Services is available in the following regions: West US, East US, North Europe, East Asia, and West Japan. For details on the latest regional availability, please refer to http://azure.microsoft.com/en-us/regions/#services.
- Database: By default, a SQL Azure database gets associated with every Mobile Services deployment. It comes in handy if SQL is being used as the data store. However, even in scenarios where different data stores such as table storage or MongoDB may be used, we still create this SQL database. We can select a free 20 MB SQL database or an existing paid standard SQL database. For more information about SQL tiers, please visit http://azure.microsoft.com/en-us/pricing/details/sql-database.
- Server user name: Provide the server user name for the Azure SQL database.
- Server password: Provide a password for the Azure SQL database.

This process creates the required entities in the configured Azure subscription. Once completed, we have a new Web API project in the Visual Studio solution. The following screenshot is the representation of a new Mobile Service project:

When we create a Mobile Service Web API project, the following NuGet packages are referenced in addition to the default ASP.NET Web API NuGet packages:

- Windows Azure Mobile Services Backend: This package enables developers to build a scalable and secure .NET mobile backend hosted in Microsoft Azure. We can also incorporate structured storage, user authentication, and push notifications. Assembly: Microsoft.WindowsAzure.Mobile.Service
- Microsoft Azure Mobile Services .NET Backend Tables: This package contains the common infrastructure needed when exposing structured storage as part of the .NET mobile backend hosted in Microsoft Azure. Assembly: Microsoft.WindowsAzure.Mobile.Service.Tables
- Microsoft Azure Mobile Services .NET Backend Entity Framework Extension: This package contains all types necessary to surface structured storage (using Entity Framework) as part of the .NET mobile backend hosted in Microsoft Azure. Assembly: Microsoft.WindowsAzure.Mobile.Service.Entity

Additionally, the following third-party packages are installed:

- EntityFramework: Since Mobile Services provides a default SQL database, it leverages Entity Framework to provide an abstraction for the data entities.
- AutoMapper: AutoMapper is a convention-based object-to-object mapper. It is used to map legacy custom entities to DTO objects in Mobile Services.
- OWIN Server and related assemblies: Mobile Services uses OWIN as the default hosting mechanism. The current template also adds the Microsoft OWIN Katana packages to run the solution in IIS, and OWIN security packages for Google, Azure AD, Twitter, and Facebook.
- Autofac: This is a popular Inversion of Control (IoC) framework.
- Azure Service Bus: Microsoft Azure Service Bus provides the Notification Hub functionality.

We now have our Mobile Services Web API project created.
The default project added by Visual Studio is not an empty project but a sample implementation of a Mobile Service-enabled Web API. In fact, a controller and Entity Data Model are already defined in the project. If we hit F5 now, we can see a running sample in the local Dev environment. Note that Mobile Services modifies the WebApiConfig file under the App_Start folder to accommodate some initialization and configuration changes:

{
    ConfigOptions options = new ConfigOptions();

    HttpConfiguration config = ServiceConfig.Initialize(new ConfigBuilder(options));
}

In the preceding code, the ServiceConfig.Initialize method defined in the Microsoft.WindowsAzure.Mobile.Service assembly is called to load the hosting provider for our mobile service. It loads all assemblies from the current application domain and searches for types with HostConfigProviderAttribute. If it finds one, the custom host provider is loaded, or else the default host provider is used. Let's extend the project to develop our scenario.

Defining the data model

We now create the required entities and data model. Note that while the entities have been kept simple for this article, in a real-world application it is recommended to define a data architecture before creating any data entities. For our scenario, we create two entities that inherit from EntityData. These are described here.

Record

Record is an entity that represents data for the medical emergency. We use the Record entity when invoking CRUD operations using our controller. We also use this entity to update doctor allocation and the status of the request, as shown:

namespace Contoso.Hospital.Entities
{
    /// <summary>
    /// Emergency Record for the hospital
    /// </summary>
    public class Record : EntityData
    {
        public string PatientId { get; set; }
        public string InsuranceId { get; set; }
        public string DoctorId { get; set; }
        public string Emergency { get; set; }
        public string Description { get; set; }
        public string Location { get; set; }
        public string Status { get; set; }
    }
}

Doctor

The Doctor entity represents the doctors that are registered practitioners in the area; the service will search for the availability of a doctor based on the properties of this entity. We will also assign the primary DoctorId to the Record type when a doctor is assigned to an emergency. The schema for the Doctor entity is as follows:

namespace Contoso.Hospital.Entities
{
    public class Doctor : EntityData
    {
        public string Speciality { get; set; }
        public string Location { get; set; }
        public bool Availability { get; set; }
    }
}

Summary

In this article, we looked at a solution for developing a Web API that targets mobile developers.

Resources for Article:

Further resources on this subject: Security in Microsoft Azure [article] Azure Storage [article] High Availability, Protection, and Recovery using Microsoft Azure [article]


Reactive Data Streams

Packt
03 Jun 2015
11 min read
In this article by Shiti Saxena, author of the book Mastering Play Framework for Scala, we will discuss the Iteratee approach used to handle such situations. This article also covers the basics of handling data streams with a brief explanation of the following topics:

- Iteratees
- Enumerators
- Enumeratees

(For more resources related to this topic, see here.)

Iteratee

Iteratee is defined as a trait, Iteratee[E, +A], where E is the input type and A is the result type. The state of an Iteratee is represented by an instance of Step, which is defined as follows:

sealed trait Step[E, +A] {
  def it: Iteratee[E, A] = this match {
    case Step.Done(a, e) => Done(a, e)
    case Step.Cont(k) => Cont(k)
    case Step.Error(msg, e) => Error(msg, e)
  }
}

object Step {
  // done state of an iteratee
  case class Done[+A, E](a: A, remaining: Input[E]) extends Step[E, A]
  // continuing state of an iteratee
  case class Cont[E, +A](k: Input[E] => Iteratee[E, A]) extends Step[E, A]
  // error state of an iteratee
  case class Error[E](msg: String, input: Input[E]) extends Step[E, Nothing]
}

The input used here represents an element of the data stream, which can be empty, an element, or an end of file indicator. Therefore, Input is defined as follows:

sealed trait Input[+E] {
  def map[U](f: (E => U)): Input[U] = this match {
    case Input.El(e) => Input.El(f(e))
    case Input.Empty => Input.Empty
    case Input.EOF => Input.EOF
  }
}

object Input {
  // An input element
  case class El[+E](e: E) extends Input[E]
  // An empty input
  case object Empty extends Input[Nothing]
  // An end of file input
  case object EOF extends Input[Nothing]
}

An Iteratee is an immutable data type and each result of processing an input is a new Iteratee with a new state. To handle the possible states of an Iteratee, there is a predefined helper object for each state. They are:

- Cont
- Done
- Error

Let's see the definition of the readLine method, which utilizes these objects:

def readLine(line: List[Array[Byte]] = Nil): Iteratee[Array[Byte], String] = Cont {
  case Input.El(data) => {
    val s = data.takeWhile(_ != '\n')
    if (s.length == data.length) {
      readLine(s :: line)
    } else {
      Done(new String(Array.concat((s :: line).reverse: _*), "UTF-8").trim(),
        elOrEmpty(data.drop(s.length + 1)))
    }
  }
  case Input.EOF => {
    Error("EOF found while reading line", Input.Empty)
  }
  case Input.Empty => readLine(line)
}

The readLine method is responsible for reading a line and returning an Iteratee. As long as there are more bytes to be read, the readLine method is called recursively. On completing the process, an Iteratee with a completed state (Done) is returned, else an Iteratee with a continuing state (Cont) is returned. In case the method encounters EOF, an Iteratee with the Error state is returned.

In addition to these, Play Framework exposes a companion Iteratee object, which has helper methods to deal with Iteratees. The API exposed through the Iteratee object is documented at https://www.playframework.com/documentation/2.3.x/api/scala/index.html#play.api.libs.iteratee.Iteratee$.

The Iteratee object is also used internally within the framework to provide some key features. For example, consider the request body parsers.
The apply method of the BodyParser object is defined as follows: def apply[T](debugName: String)(f: RequestHeader =>Iteratee[Array[Byte], Either[Result, T]]): BodyParser[T] = newBodyParser[T] {def apply(rh: RequestHeader) = f(rh)override def toString = "BodyParser(" + debugName + ")"} So, to define BodyParser[T], we need to define a method that accepts RequestHeader and returns an Iteratee whose input is an Array[Byte] and results in Either[Result,T]. Let's look at some of the existing implementations to understand how this works. The RawBuffer parser is defined as follows: def raw(memoryThreshold: Int): BodyParser[RawBuffer] =BodyParser("raw, memoryThreshold=" + memoryThreshold) { request =>import play.core.Execution.Implicits.internalContextval buffer = RawBuffer(memoryThreshold)Iteratee.foreach[Array[Byte]](bytes => buffer.push(bytes)).map {_ =>buffer.close()Right(buffer)}} The RawBuffer parser uses Iteratee.forEach method and pushes the input received into a buffer. The file parser is defined as follows: def file(to: File): BodyParser[File] = BodyParser("file, to=" +to) { request =>import play.core.Execution.Implicits.internalContextIteratee.fold[Array[Byte], FileOutputStream](newFileOutputStream(to)) {(os, data) =>os.write(data)os}.map { os =>os.close()Right(to)}} The file parser uses the Iteratee.fold method to create FileOutputStream of the incoming data. Now, let's see the implementation of Enumerator and how these two pieces fit together. Enumerator Similar to the Iteratee, an Enumerator is also defined through a trait and backed by an object of the same name: trait Enumerator[E] {parent =>def apply[A](i: Iteratee[E, A]): Future[Iteratee[E, A]]...}object Enumerator{def apply[E](in: E*): Enumerator[E] = in.length match {case 0 => Enumerator.emptycase 1 => new Enumerator[E] {def apply[A](i: Iteratee[E, A]): Future[Iteratee[E, A]] =i.pureFoldNoEC {case Step.Cont(k) => k(Input.El(in.head))case _ => i}}case _ => new Enumerator[E] {def apply[A](i: Iteratee[E, A]): Future[Iteratee[E, A]] =enumerateSeq(in, i)}}...} Observe that the apply method of the trait and its companion object are different. The apply method of the trait accepts Iteratee[E, A] and returns Future[Iteratee[E, A]], while that of the companion object accepts a sequence of type E and returns an Enumerator[E]. Now, let's define a simple data flow using the companion object's apply method; first, get the character count in a given (Seq[String]) line: val line: String = "What we need is not the will to believe, butthe wish to find out."val words: Seq[String] = line.split(" ")val src: Enumerator[String] = Enumerator(words: _*)val sink: Iteratee[String, Int] = Iteratee.fold[String,Int](0)((x, y) => x + y.length)val flow: Future[Iteratee[String, Int]] = src(sink)val result: Future[Int] = flow.flatMap(_.run) The variable result has the Future[Int] type. We can now process this to get the actual count. In the preceding code snippet, we got the result by following these steps: Building an Enumerator using the companion object's apply method: val src: Enumerator[String] = Enumerator(words: _*) Getting Future[Iteratee[String, Int]] by binding the Enumerator to an Iteratee: val flow: Future[Iteratee[String, Int]] = src(sink) Flattening Future[Iteratee[String,Int]] and processing it: val result: Future[Int] = flow.flatMap(_.run) Fetching the result from Future[Int]: Thankfully, Play provides a shortcut method by merging steps 2 and 3 so that we don't have to repeat the same process every time. 
The method is represented by the |>>> symbol. Using the shortcut method, our code is reduced to this: val src: Enumerator[String] = Enumerator(words: _*)val sink: Iteratee[String, Int] = Iteratee.fold[String, Int](0)((x, y)=> x + y.length)val result: Future[Int] = src |>>> sink Why use this when we can simply use the methods of the data type? In this case, do we use the length method of String to get the same value (by ignoring whitespaces)? In this example, we are getting the data as a single String but this will not be the only scenario. We need ways to process continuous data, such as a file upload, or feed data from various networking sites, and so on. For example, suppose our application receives heartbeats at a fixed interval from all the devices (such as cameras, thermometers, and so on) connected to it. We can simulate a data stream using the Enumerator.generateM method: val dataStream: Enumerator[String] = Enumerator.generateM {Promise.timeout(Some("alive"), 100 millis)} In the preceding snippet, the "alive" String is produced every 100 milliseconds. The function passed to the generateM method is called whenever the Iteratee bound to the Enumerator is in the Cont state. This method is used internally to build enumerators and can come in handy when we want to analyze the processing for an expected data stream. An Enumerator can be created from a file, InputStream, or OutputStream. Enumerators can be concatenated or interleaved. The Enumerator API is documented at https://www.playframework.com/documentation/2.3.x/api/scala/index.html#play.api.libs.iteratee.Enumerator$. Using the Concurrent object The Concurrent object is a helper that provides utilities for using Iteratees, enumerators, and Enumeratees concurrently. Two of its important methods are: Unicast: It is useful when sending data to a single iterate. Broadcast: It facilitates sending the same data to multiple Iteratees concurrently. Unicast For example, the character count example in the previous section can be implemented as follows: val unicastSrc = Concurrent.unicast[String](channel =>channel.push(line))val unicastResult: Future[Int] = unicastSrc |>>> sink The unicast method accepts the onStart, onError, and onComplete handlers. In the preceding code snippet, we have provided the onStart method, which is mandatory. The signature of unicast is this: def unicast[E](onStart: (Channel[E]) ⇒ Unit,onComplete: ⇒ Unit = (),onError: (String, Input[E]) ⇒ Unit = (_: String, _: Input[E])=> ())(implicit ec: ExecutionContext): Enumerator[E] {…} So, to add a log for errors, we can define the onError handler as follows: val unicastSrc2 = Concurrent.unicast[String](channel => channel.push(line),onError = { (msg, str) => Logger.error(s"encountered $msg for$str")}) Now, let's see how broadcast works. Broadcast The broadcast[E] method creates an enumerator and a channel and returns a (Enumerator[E], Channel[E]) tuple. 
The enumerator and channel thus obtained can be used to broadcast data to multiple Iteratees: val (broadcastSrc: Enumerator[String], channel:Concurrent.Channel[String]) = Concurrent.broadcast[String]private val vowels: Seq[Char] = Seq('a', 'e', 'i', 'o', 'u')def getVowels(str: String): String = {val result = str.filter(c => vowels.contains(c))result}def getConsonants(str: String): String = {val result = str.filterNot(c => vowels.contains(c))result}val vowelCount: Iteratee[String, Int] = Iteratee.fold[String,Int](0)((x, y) => x + getVowels(y).length)val consonantCount: Iteratee[String, Int] =Iteratee.fold[String, Int](0)((x, y) => x +getConsonants(y).length)val vowelInfo: Future[Int] = broadcastSrc |>>> vowelCountval consonantInfo: Future[Int] = broadcastSrc |>>>consonantCountwords.foreach(w => channel.push(w))channel.end()vowelInfo onSuccess { case count => println(s"vowels:$count")}consonantInfo onSuccess { case count =>println(s"consonants:$count")} Enumeratee Enumeratee is also defined using a trait and its companion object with the same Enumeratee name. It is defined as follows: trait Enumeratee[From, To] {...def applyOn[A](inner: Iteratee[To, A]): Iteratee[From,Iteratee[To, A]]def apply[A](inner: Iteratee[To, A]): Iteratee[From, Iteratee[To,A]] = applyOn[A](inner)...} An Enumeratee transforms the Iteratee given to it as input and returns a new Iteratee. Let's look at a method that defines an Enumeratee by implementing the applyOn method. An Enumeratee's flatten method accepts Future[Enumeratee] and returns an another Enumeratee, which is defined as follows: def flatten[From, To](futureOfEnumeratee:Future[Enumeratee[From, To]]) = new Enumeratee[From, To] {def applyOn[A](it: Iteratee[To, A]): Iteratee[From,Iteratee[To, A]] =Iteratee.flatten(futureOfEnumeratee.map(_.applyOn[A](it))(dec))} In the preceding snippet, applyOn is called on the Enumeratee whose future is passed and dec is defaultExecutionContext. Defining an Enumeratee using the companion object is a lot simpler. The companion object has a lot of methods to deal with enumeratees, such as map, transform, collect, take, filter, and so on. The API is documented at https://www.playframework.com/documentation/2.3.x/api/scala/index.html#play.api.libs.iteratee.Enumeratee$. Let's define an Enumeratee by working through a problem. The example we used in the previous section to find the count of vowels and consonants will not work correctly if a vowel is capitalized in a sentence, that is, the result of src |>>> vowelCount will be incorrect when the line variable is defined as follows: val line: String = "What we need is not the will to believe, but the wish to find out.".toUpperCase To fix this, let's alter the case of all the characters in the data stream to lowercase. We can use an Enumeratee to update the input provided to the Iteratee. Now, let's define an Enumeratee to return a given string in lowercase: val toSmallCase: Enumeratee[String, String] =Enumeratee.map[String] {s => s.toLowerCase} There are two ways to add an Enumeratee to the dataflow. It can be bound to the following: Enumerators Iteratees Summary In this article, we discussed the concept of Iteratees, Enumerators, and Enumeratees. We also saw how they were implemented in Play Framework and used internally. Resources for Article: Further resources on this subject: Play Framework: Data Validation Using Controllers [Article] Play Framework: Introduction to Writing Modules [Article] Integrating with other Frameworks [Article]


Using Client Methods

Packt
26 May 2015
14 min read
In this article by Isaac Strack, author of the book Meteor Cookbook, we will cover the following recipe: Using the HTML FileReader to upload images (For more resources related to this topic, see here.) Using the HTML FileReader to upload images Adding files via a web application is a pretty standard functionality nowadays. That doesn't mean that it's easy to do, programmatically. New browsers support Web APIs to make our job easier, and a lot of quality libraries/packages exist to help us navigate the file reading/uploading forests, but, being the coding lumberjacks that we are, we like to know how to roll our own! In this recipe, you will learn how to read and upload image files to a Meteor server. Getting ready We will be using a default project installation, with client, server, and both folders, and with the addition of a special folder for storing images. In a terminal window, navigate to where you would like your project to reside, and execute the following commands: $ meteor create imageupload $ cd imageupload $ rm imageupload.* $ mkdir client $ mkdir server $ mkdir both $ mkdir .images Note the dot in the .images folder. This is really important because we don't want the Meteor application to automatically refresh every time we add an image to the server! By creating the images folder as .images, we are hiding it from the eye-of-Sauron-like monitoring system built into Meteor, because folders starting with a period are "invisible" to Linux or Unix. Let's also take care of the additional Atmosphere packages we'll need. In the same terminal window, execute the following commands: $ meteor add twbs:bootstrap $ meteor add voodoohop:masonrify We're now ready to get started on building our image upload application. How to do it… We want to display the images we upload, so we'll be using a layout package (voodoohop:masonrify) for display purposes. We will also initiate uploads via drag and drop, to cut down on UI components. Lastly, we'll be relying on an npm module to make the file upload much easier. Let's break this down into a few steps, starting with the user interface. In the [project root]/client folder, create a file called imageupload.html and add the following templates and template inclusions: <body> <h1>Images!</h1> {{> display}} {{> dropzone}} </body>   <template name="display"> {{#masonryContainer    columnWidth=50    transitionDuration="0.2s"    id="MasonryContainer" }} {{#each imgs}} {{> img}} {{/each}} {{/masonryContainer}} </template>   <template name="dropzone"> <div id="dropzone" class="{{dropcloth}}">Drag images here...</div> </template>   <template name="img"> {{#masonryElement "MasonryContainer"}} <img src="{{src}}"    class="display-image"    style="width:{{calcWidth}}"/> {{/masonryElement}} </template> We want to add just a little bit of styling, including an "active" state for our drop zone, so that we know when we are safe to drop files onto the page. 
In your [project root]/client/ folder, create a new style.css file and enter the following CSS style directives: body { background-color: #f5f0e5; font-size: 2rem;   }   div#dropzone { position: fixed; bottom:5px; left:2%; width:96%; height:100px; margin: auto auto; line-height: 100px; text-align: center; border: 3px dashed #7f898d; color: #7f8c8d; background-color: rgba(210,200,200,0.5); }   div#dropzone.active { border-color: #27ae60; color: #27ae60; background-color: rgba(39, 174, 96,0.3); }   img.display-image { max-width: 400px; } We now want to create an Images collection to store references to our uploaded image files. To do this, we will be relying on EJSON. EJSON is Meteor's extended version of JSON, which allows us to quickly transfer binary files from the client to the server. In your [project root]/both/ folder, create a file called imgFile.js and add the MongoDB collection by adding the following line: Images = new Mongo.Collection('images'); We will now create the imgFile object, and declare an EJSON type of imgFile to be used on both the client and the server. After the preceding Images declaration, enter the following code: imgFile = function (d) { d = d || {}; this.name = d.name; this.type = d.type; this.source = d.source; this.size = d.size; }; To properly initialize imgFile as an EJSON type, we need to implement the fromJSONValue(), prototype(), and toJSONValue() methods. We will then declare imgFile as an EJSON type using the EJSON.addType() method. Add the following code just below the imgFile function declaration: imgFile.fromJSONValue = function (d) { return new imgFile({    name: d.name,    type: d.type,    source: EJSON.fromJSONValue(d.source),    size: d.size }); };   imgFile.prototype = { constructor: imgFile,   typeName: function () {    return 'imgFile' }, equals: function (comp) {    return (this.name == comp.name &&    this.size == comp.size); }, clone: function () {    return new imgFile({      name: this.name,      type: this.type,      source: this.source,      size: this.size    }); }, toJSONValue: function () {    return {      name: this.name,      type: this.type,      source: EJSON.toJSONValue(this.source),      size: this.size    }; } };   EJSON.addType('imgFile', imgFile.fromJSONValue); The EJSON code used in this recipe is heavily inspired by Chris Mather's Evented Mind file upload tutorials. We recommend checking out his site and learning even more about file uploading at https://www.eventedmind.com. Even though it's usually cleaner to put client-specific and server-specific code in separate files, because the code is related to the imgFile code we just entered, we are going to put it all in the same file. 
Just below the EJSON.addType() function call in the preceding step, add the following Meteor.isClient and Meteor.isServer code:

if (Meteor.isClient){
  _.extend(imgFile.prototype, {
    read: function (f, callback) {
      var fReader = new FileReader;
      var self = this;
      callback = callback || function () {};
      fReader.onload = function() {
        self.source = new Uint8Array(fReader.result);
        callback(null,self);
      };
      fReader.onerror = function() {
        callback(fReader.error);
      };
      fReader.readAsArrayBuffer(f);
    }
  });
  _.extend (imgFile, {
    read: function (f, callback){
      return new imgFile(f).read(f,callback);
    }
  });
};

if (Meteor.isServer){
  var fs = Npm.require('fs');
  var path = Npm.require('path');
  _.extend(imgFile.prototype, {
    save: function(dirPath, options){
      var fPath = path.join(process.env.PWD, dirPath, this.name);
      var imgBuffer = new Buffer(this.source);
      fs.writeFileSync(fPath, imgBuffer, options);
    }
  });
};

Next, we will add some Images collection insert helpers. We will provide the ability to add either references (URIs) to images, or to upload files into our .images folder on the server. To do this, we need some Meteor.methods. In the [project root]/server/ folder, create an imageupload-server.js file, and enter the following code:

Meteor.methods({
  addURL : function(uri){
    Images.insert({src:uri});
  },
  uploadIMG : function(iFile){
    iFile.save('.images',{});
    Images.insert({src:'images/' + iFile.name});
  }
});

We now need to establish the code to process/serve images from the .images folder. We need to circumvent Meteor's normal asset serving capabilities for anything found in the (hidden) .images folder. To do this, we will use the fs npm module, and redirect any content requests accessing the images/ folder address to the actual .images folder found on the server. Just after the Meteor.methods block entered in the preceding step, add the following WebApp.connectHandlers.use() function code:

var fs = Npm.require('fs');
WebApp.connectHandlers.use(function(req, res, next) {
  var re = /^\/images\/(.*)$/.exec(req.url);
  if (re !== null) {
    var filePath = process.env.PWD + '/.images/' + re[1];
    var data = fs.readFileSync(filePath, data);
    res.writeHead(200, {
      'Content-Type': 'image'
    });
    res.write(data);
    res.end();
  } else {
    next();
  }
});

Our images display template is entirely dependent on the Images collection, so we need to add the appropriate reactive Template.helpers function on the client side. In your [project root]/client/ folder, create an imageupload-client.js file, and add the following code:

Template.display.helpers({
  imgs: function () {
    return Images.find();
  }
});

If we add pictures we don't like and want to remove them quickly, the easiest way to do that is by double clicking on a picture. So, let's add the code for doing that just below the Template.helpers method in the same file:

Template.display.events({
  'dblclick .display-image': function (e) {
    Images.remove({
      _id: this._id
    });
  }
});

Now for the fun stuff. We're going to add drag and drop visual feedback cues, so that whenever we drag anything over our drop zone, the drop zone will provide visual feedback to the user. Likewise, once we move away from the zone, or actually drop items, the drop zone should return to normal. We will accomplish this through a Session variable, which modifies the CSS class on the div#dropzone element whenever it is changed.
At the bottom of the imageupload-client.js file, add the following Template.helpers and Template.events code blocks: Template.dropzone.helpers({ dropcloth: function () {    return Session.get('dropcloth'); } });   Template.dropzone.events({ 'dragover #dropzone': function (e) {    e.preventDefault();    Session.set('dropcloth', 'active'); }, 'dragleave #dropzone': function (e) {    e.preventDefault();    Session.set('dropcloth');   } }); The last task is to evaluate what has been dropped in to our page drop zone. If what's been dropped is simply a URI, we will add it to the Images collection as is. If it's a file, we will store it, create a URI to it, and then append it to the Images collection. In the imageupload-client.js file, just before the final closing curly bracket inside the Template.dropzone.events code block, add the following event handler logic: 'dragleave #dropzone': function (e) {    ... }, 'drop #dropzone': function (e) {    e.preventDefault();    Session.set('dropcloth');      var files = e.originalEvent.dataTransfer.files;    var images = $(e.originalEvent.dataTransfer.getData('text/html')).find('img');    var fragment = _.findWhere(e.originalEvent.dataTransfer.items, {      type: 'text/html'    });    if (files.length) {      _.each(files, function (e, i, l) {        imgFile.read(e, function (error, imgfile) {          Meteor.call('uploadIMG', imgfile, function (e) {            if (e) {              console.log(e.message);            }          });        })      });    } else if (images.length) {      _.each(images, function (e, i, l) {        Meteor.call('addURL', $(e).attr('src'));      });    } else if (fragment) {      fragment.getAsString(function (e) {        var frags = $(e);        var img = _.find(frags, function (e) {          return e.hasAttribute('src');        });        if (img) Meteor.call('addURL', img.src);      });    }   } }); Save all your changes and open a browser to http://localhost:3000. Find some pictures from any web site, and drag and drop them in to the drop zone. As you drag and drop the images, the images will appear immediately on your web page, as shown in the following screenshot: As you drag and drop the dinosaur images in to the drop zone, they will be uploaded as shown in the following screenshot: Similarly, dragging and dropping actual files will just as quickly upload and then display images, as shown in the following screenshot: As the files are dropped, they are uploaded and saved in the .images/ folder: How it works… There are a lot of moving parts to the code we just created, but we can refine it down to four areas. First, we created a new imgFile object, complete with the internal functions added via the Object.prototype = {…} declaration. The functions added here ( typeName, equals, clone, toJSONValue and fromJSONValue) are primarily used to allow the imgFile object to be serialized and deserialized properly on the client and the server. Normally, this isn't needed, as we can just insert into Mongo Collections directly, but in this case it is needed because we want to use the FileReader and Node fs packages on the client and server respectively to directly load and save image files, rather than write them to a collection. Second, the underscore _.extend() method is used on the client side to create the read() function, and on the server side to create the save() function. read takes the file(s) that were dropped, reads the file into an ArrayBuffer, and then calls the included callback, which uploads the file to the server. 
The save function on the server side reads the ArrayBuffer, and writes the subsequent image file to a specified location on the server (in our case, the .images folder). Third, we created an ondropped event handler, using the 'drop #dropzone' event. This handler determines whether an actual file was dragged and dropped, or if it was simply an HTML <img> element, which contains a URI link in the src property. In the case of a file (determined by files.length), we call the imgFile.read command, and pass a callback with an immediate Meteor.call('uploadIMG'…) method. In the case of an <img> tag, we parse the URI from the src attribute, and use Meteor.call('addURL') to update the Images collection. Fourth, we have our helper functions for updating the UI. These include Template.helpers functions, Template.events functions, and the WebApp.connectedHandlers.use() function, used to properly serve uploaded images without having to update the UI each time a file is uploaded. Remember, Meteor will update the UI automatically on any file change. This unfortunately includes static files, such as images. To work around this, we store our images in a file invisible to Meteor (using .images). To redirect the traffic to that hidden folder, we implement the .use() method to listen for any traffic meant to hit the '/images/' folder, and redirect it accordingly. As with any complex recipe, there are other parts to the code, but this should cover the major aspects of file uploading (the four areas mentioned in the preceding section). There's more… The next logical step is to not simply copy the URIs from remote image files, but rather to download, save, and serve local copies of those remote images. This can also be done using the FileReader and Node fs libraries, and can be done either through the existing client code mentioned in the preceding section, or directly on the server, as a type of cron job. For more information on FileReader, please see the MDN FileReader article, located at https://developer.mozilla.org/en-US/docs/Web/API/FileReader. Summary In this article, you have learned the basic steps to upload images using the HTML FileReader. Resources for Article: Further resources on this subject: Meteor.js JavaScript Framework: Why Meteor Rocks! [article] Quick start - creating your first application [article] Building the next generation Web with Meteor [article]
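Following up on the "There's more…" note above, here is a minimal, hedged sketch of what a server-side download of a remote image might look like. The mirrorRemoteImage method name is invented for illustration, and the use of Node's https module through Npm.require is an assumption rather than part of the original recipe; it simply reuses the hidden .images folder and the Images collection defined earlier.

// Hypothetical server-side method (an assumption, not part of the original recipe).
// Downloads a remote image over HTTPS and saves it into the hidden .images folder.
var https = Npm.require('https');
var fs = Npm.require('fs');
var path = Npm.require('path');

Meteor.methods({
  mirrorRemoteImage: function (uri) {
    var fileName = path.basename(uri);
    var filePath = path.join(process.env.PWD, '.images', fileName);
    // Record the local URI right away; the /images/ handler defined earlier
    // will serve the file once the download below finishes.
    Images.insert({ src: 'images/' + fileName });
    var file = fs.createWriteStream(filePath);
    https.get(uri, function (response) {
      response.pipe(file);
      file.on('finish', function () {
        file.close();
      });
    });
  }
});

A production version would also validate the URI, handle plain HTTP sources, and report download failures back to the client, but the shape of the method stays the same.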


Node.js Fundamentals

Packt
22 May 2015
17 min read
This article is written by Krasimir Tsonev, the author of Node.js By Example. Node.js is one of the most popular JavaScript-driven technologies nowadays. It was created in 2009 by Ryan Dahl and since then, the framework has evolved into a well-developed ecosystem. Its package manager is full of useful modules and developers around the world have started using Node.js in their production environments. In this article, we will learn about the following:

- Node.js building blocks
- The main capabilities of the environment
- The package management of Node.js

(For more resources related to this topic, see here.)

Understanding the Node.js architecture

Back in the days, Ryan was interested in developing network applications. He found out that most high performance servers followed similar concepts. Their architecture was similar to that of an event loop and they worked with nonblocking input/output operations. These operations would permit other processing activities to continue before an ongoing task could be finished. These characteristics are very important if we want to handle thousands of simultaneous requests.

Most of the servers written in Java or C use multithreading. They process every request in a new thread. Ryan decided to try something different: a single-threaded architecture. In other words, all the requests that come to the server are processed by a single thread. This may sound like a nonscalable solution, but Node.js is definitely scalable. We just have to run different Node.js processes and use a load balancer that distributes the requests between them.

Ryan needed something that is event-loop-based and works fast. As he pointed out in one of his presentations, big companies such as Google, Apple, and Microsoft invest a lot of time in developing high performance JavaScript engines. They have become faster and faster every year, and that is where the event-loop architecture is implemented. JavaScript has become really popular in recent years. The community and the hundreds of thousands of developers who are ready to contribute made Ryan think about using JavaScript.

Here is a diagram of the Node.js architecture. In general, Node.js is made up of three things:

- V8, Google's JavaScript engine that is used in the Chrome web browser (https://developers.google.com/v8/)
- A thread pool, which is the part that handles the file input/output operations. All the blocking system calls are executed here (http://software.schmorp.de/pkg/libeio.html)
- The event loop library (http://software.schmorp.de/pkg/libev.html)

On top of these three blocks, we have several bindings that expose low-level interfaces. The rest of Node.js is written in JavaScript. Almost all the APIs that we see as built-in modules, and which are present in the documentation, are written in JavaScript.

Installing Node.js

A fast and easy way to install Node.js is by visiting https://nodejs.org/download/ and downloading the appropriate installer for your operating system. For OS X and Windows users, the installer provides a nice, easy-to-use interface. For developers that use Linux as an operating system, Node.js is available in the APT package manager. The following commands will set up Node.js and Node Package Manager (NPM):

sudo apt-get update
sudo apt-get install nodejs
sudo apt-get install npm

Running Node.js server

Node.js is a command-line tool. After installing it, the node command will be available on our terminal. The node command accepts several arguments, but the most important one is the file that contains our JavaScript.
Let's create a file called server.js and put the following code inside:

var http = require('http');
http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World\n');
}).listen(9000, '127.0.0.1');
console.log('Server running at http://127.0.0.1:9000/');

If you run node ./server.js in your console, you will have the Node.js server running. It listens for incoming requests at localhost (127.0.0.1) on port 9000. The very first line of the preceding code requires the built-in http module. In Node.js, we have the require global function that provides the mechanism to use external modules. We will see how to define our own modules in a bit. After that, the script continues with the createServer and listen methods on the http module. In this case, the API of the module is designed in such a way that we can chain these two methods like in jQuery. The first one (createServer) accepts a function that is also known as a callback, which is called every time a new request comes to the server. The second one makes the server listen. The result that we will get in a browser is as follows:

Defining and using modules

JavaScript as a language does not have mechanisms to define real classes. In fact, everything in JavaScript is an object. We normally inherit properties and functions from one object to another. Thankfully, Node.js adopts the concepts defined by CommonJS, a project that specifies an ecosystem for JavaScript. We encapsulate logic in modules. Every module is defined in its own file. Let's illustrate how everything works with a simple example. Let's say that we have a module that represents this book and we save it in a file called book.js:

// book.js
exports.name = 'Node.js by example';
exports.read = function() {
  console.log('I am reading ' + exports.name);
}

We defined a public property and a public function. Now, we will create another file named script.js and use require to access them:

// script.js
var book = require('./book.js');
console.log('Name: ' + book.name);
book.read();

To test our code, we will run node ./script.js. The result in the terminal looks like this:

Along with exports, we also have module.exports available. There is a difference between the two. Look at the following pseudocode. It illustrates how Node.js constructs our modules:

var module = { exports: {} };
var exports = module.exports;
// our code
return module.exports;

So, in the end, module.exports is returned and this is what require produces. We should be careful because if at some point we apply a value directly to exports or module.exports, we may not receive what we need. Like at the end of the following snippet, we set a function as a value and that function is exposed to the outside world:

exports.name = 'Node.js by example';
exports.read = function() {
  console.log('I am reading ' + exports.name);
}
module.exports = function() { ... }

In this case, we do not have access to .name and .read. If we try to execute node ./script.js again, we will get the following output:

To avoid such issues, we should stick to one of the two options, either exports or module.exports, but make sure that we do not have both. We should also keep in mind that by default, require caches the object that is returned. So, if we need two different instances, we should export a function.
Here is a version of the book class that provides API methods to rate the books but that does not work properly:

// book.js
var ratePoints = 0;
exports.rate = function(points) {
  ratePoints = points;
}
exports.getPoints = function() {
  return ratePoints;
}

Let's create two instances and rate the books with different points values:

// script.js
var bookA = require('./book.js');
var bookB = require('./book.js');
bookA.rate(10);
bookB.rate(20);
console.log(bookA.getPoints(), bookB.getPoints());

The logical response should be 10 20, but we got 20 20. This is why it is a common practice to export a function that produces a different object every time:

// book.js
module.exports = function() {
  var ratePoints = 0;
  return {
    rate: function(points) {
      ratePoints = points;
    },
    getPoints: function() {
      return ratePoints;
    }
  }
}

Now, we should also have require('./book.js')() because require returns a function and not an object anymore.

Managing and distributing packages

Once we understand the idea of require and exports, we should start thinking about grouping our logic into building blocks. In the Node.js world, these blocks are called modules (or packages). One of the reasons behind the popularity of Node.js is its package management. Node.js normally comes with two executables: node and npm. NPM is a command-line tool that downloads and uploads Node.js packages. The official site, https://www.npmjs.com, acts as a central registry. When we create a package via the npm command, we store it there so that every other developer may use it.

Creating a module

Every module should live in its own directory, which also contains a metadata file called package.json. In this file, we have to set at least two properties, name and version:

{
  "name": "my-awesome-nodejs-module",
  "version": "0.0.1"
}

We can place whatever code we like in the same directory. Once we publish the module to the NPM registry and someone installs it, he/she will get the same files. For example, let's add an index.js file so that we have two files in the package:

// index.js
console.log('Hello, this is my awesome Node.js module!');

Our module does only one thing: it displays a simple message to the console. Now, to upload the module, we need to navigate to the directory containing the package.json file and execute npm publish. This is the result that we should see:

We are ready. Now our little module is listed in the Node.js package manager's site and everyone is able to download it.

Using modules

In general, there are three ways to use the modules that are already created. All three ways involve the package manager.

We may install a specific module manually. Let's say that we have a folder called project. We open the folder and run the following:

npm install my-awesome-nodejs-module

The manager automatically downloads the latest version of the module and puts it in a folder called node_modules. If we want to use it, we do not need to reference the exact path. By default, Node.js checks the node_modules folder before requiring something. So, just require('my-awesome-nodejs-module') will be enough.

The installation of modules globally is a common practice, especially if we talk about command-line tools made with Node.js. It has become an easy-to-use technology to develop such tools. The little module that we created is not made as a command-line program, but we can still install it globally by running the following code:

npm install my-awesome-nodejs-module -g

Note the -g flag at the end.
This is how we tell the manager that we want this module to be a global one. When the process finishes, we do not have a node_modules directory. The my-awesome-nodejs-module folder is stored in another place on our system. To be able to use it, we have to add another property to package.json, but we'll talk more about this in the next section. The resolving of dependencies is one of the key features of the package manager of Node.js. Every module can have as many dependencies as you want. These dependences are nothing but other Node.js modules that were uploaded to the registry. All we have to do is list the needed packages in the package.json file: {    "name": "another-module",    "version": "0.0.1",    "dependencies": {        "my-awesome-nodejs-module": "0.0.1"      } } Now we don't have to specify the module explicitly and we can simply execute npm install to install our dependencies. The manager reads the package.json file and saves our module again in the node_modules directory. It is good to use this technique because we may add several dependencies and install them at once. It also makes our module transferable and self-documented. There is no need to explain to other programmers what our module is made up of. Updating our module Let's transform our module into a command-line tool. Once we do this, users will have a my-awesome-nodejs-module command available in their terminals. There are two changes in the package.json file that we have to make: {   "name": "my-awesome-nodejs-module",   "version": "0.0.2",   "bin": "index.js"} A new bin property is added. It points to the entry point of our application. We have a really simple example and only one file—index.js. The other change that we have to make is to update the version property. In Node.js, the version of the module plays important role. If we look back, we will see that while describing dependencies in the package.json file, we pointed out the exact version. This ensures that in the future, we will get the same module with the same APIs. Every number from the version property means something. The package manager uses Semantic Versioning 2.0.0 (http://semver.org/). Its format is MAJOR.MINOR.PATCH. So, we as developers should increment the following: MAJOR number if we make incompatible API changes MINOR number if we add new functions/features in a backwards-compatible manner PATCH number if we have bug fixes Sometimes, we may see a version like 2.12.*. This means that the developer is interested in using the exact MAJOR and MINOR version, but he/she agrees that there may be bug fixes in the future. It's also possible to use values like >=1.2.7 to match any equal-or-greater version, for example, 1.2.7, 1.2.8, or 2.5.3. We updated our package.json file. The next step is to send the changes to the registry. This could be done again with npm publish in the directory that holds the JSON file. The result will be similar. We will see the new 0.0.2 version number on the screen: Just after this, we may run npm install my-awesome-nodejs-module -g and the new version of the module will be installed on our machine. The difference is that now we have the my-awesome-nodejs-module command available and if you run it, it displays the message written in the index.js file: Introducing built-in modules Node.js is considered a technology that you can use to write backend applications. As such, we need to perform various tasks. Thankfully, we have a bunch of helpful built-in modules at our disposal. 
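Before looking at the individual built-in modules, here is a small illustration of how the version ranges described above might appear in a package.json dependencies block; the helper package names and version numbers are made up for the example:

{
  "name": "another-module",
  "version": "0.0.1",
  "dependencies": {
    "my-awesome-nodejs-module": "0.0.1",
    "some-helper-module": "2.12.*",
    "another-helper-module": ">=1.2.7"
  }
}

Running npm install against such a file pulls the newest versions that still satisfy each range, which is why pinning exact versions is the safer default when reproducible builds matter.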
Creating a server with the HTTP module

We already used the HTTP module. It's perhaps the most important one for web development because it starts a server that listens on a particular port:

var http = require('http');
http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World\n');
}).listen(9000, '127.0.0.1');
console.log('Server running at http://127.0.0.1:9000/');

We have a createServer method that returns a new web server object. In most cases, we run the listen method. If needed, there is close, which stops the server from accepting new connections. The callback function that we pass always accepts the request (req) and response (res) objects. We can use the first one to retrieve information about the incoming request, such as GET or POST parameters.

Reading and writing to files

The module that is responsible for the read and write processes is called fs (it is derived from filesystem). Here is a simple example that illustrates how to write data to a file:

var fs = require('fs');
fs.writeFile('data.txt', 'Hello world!', function (err) {
  if(err) { throw err; }
  console.log('It is saved!');
});

Most of the API functions have synchronous versions. The preceding script could be written with writeFileSync, as follows:

fs.writeFileSync('data.txt', 'Hello world!');

However, the usage of the synchronous versions of the functions in this module blocks the event loop. This means that while operating with the filesystem, our JavaScript code is paused. Therefore, it is a best practice with Node to use asynchronous versions of methods wherever possible. The reading of the file is almost the same. We should use the readFile method in the following way:

fs.readFile('data.txt', function(err, data) {
  if (err) throw err;
  console.log(data.toString());
});

Working with events

The observer design pattern is widely used in the world of JavaScript. This is where the objects in our system subscribe to the changes happening in other objects. Node.js has a built-in module to manage events. Here is a simple example:

var events = require('events');
var eventEmitter = new events.EventEmitter();
var somethingHappen = function() {
  console.log('Something happen!');
}
eventEmitter
  .on('something-happen', somethingHappen)
  .emit('something-happen');

The eventEmitter object is the object that we subscribed to. We did this with the help of the on method. The emit function fires the event and the somethingHappen handler is executed. The events module provides the necessary functionality, but we need to use it in our own classes. Let's get the book idea from the previous section and make it work with events. Once someone rates the book, we will dispatch an event in the following manner:

// book.js
var util = require("util");
var events = require("events");
var Class = function() { };
util.inherits(Class, events.EventEmitter);
Class.prototype.ratePoints = 0;
Class.prototype.rate = function(points) {
  ratePoints = points;
  this.emit('rated');
};
Class.prototype.getPoints = function() {
  return ratePoints;
}
module.exports = Class;

We want to inherit the behavior of the EventEmitter object. The easiest way to achieve this in Node.js is by using the utility module (util) and its inherits method. The defined class could be used like this:

var BookClass = require('./book.js');
var book = new BookClass();
book.on('rated', function() {
  console.log('Rated with ' + book.getPoints());
});
book.rate(10);

We again used the on method to subscribe to the rated event.
The book class displays that message once we set the points. The terminal then shows the Rated with 10 text. Managing child processes There are some things that we can't do with Node.js. We need to use external programs for the same. The good news is that we can execute shell commands from within a Node.js script. For example, let's say that we want to list the files in the current directory. The file system APIs do provide methods for that, but it would be nice if we could get the output of the ls command: // exec.js var exec = require('child_process').exec; exec('ls -l', function(error, stdout, stderr) {    console.log('stdout: ' + stdout);    console.log('stderr: ' + stderr);    if (error !== null) {        console.log('exec error: ' + error);    } }); The module that we used is called child_process. Its exec method accepts the desired command as a string and a callback. The stdout item is the output of the command. If we want to process the errors (if any), we may use the error object or the stderr buffer data. The preceding code produces the following screenshot: Along with the exec method, we have spawn. It's a bit different and really interesting. Imagine that we have a command that not only does its job, but also outputs the result. For example, git push may take a few seconds and it may send messages to the console continuously. In such cases, spawn is a good variant because we get an access to a stream: var spawn = require('child_process').spawn; var command = spawn('git', ['push', 'origin', 'master']); command.stdout.on('data', function (data) {    console.log('stdout: ' + data); }); command.stderr.on('data', function (data) {    console.log('stderr: ' + data); }); command.on('close', function (code) {    console.log('child process exited with code ' + code); }); Here, stdout and stderr are streams. They dispatch events and if we subscribe to these events, we will get the exact output of the command as it was produced. In the preceding example, we run git push origin master and sent the full command responses to the console. Summary Node.js is used by many companies nowadays. This proves that it is mature enough to work in a production environment. In this article, we saw what the fundamentals of this technology are. We covered some of the commonly used cases. Resources for Article: Further resources on this subject: AngularJS Project [article] Exploring streams [article] Getting Started with NW.js [article]

Building a Basic Express Site
Packt
12 May 2015
34 min read

In this article by Ben Augarten, Marc Kuo, Eric Lin, Aidha Shaikh, Fabiano Pereira Soriani, Geoffrey Tisserand, Chiqing Zhang, Kan Zhang, authors of the book Express.js Blueprints, we will see how Node.js uses Google Chrome's JavaScript engine, V8, to execute code. Node.js is single-threaded and event-driven. It uses non-blocking I/O to squeeze every ounce of processing power out of the CPU. Express builds on top of Node.js, providing all of the tools necessary to develop robust web applications with Node. In addition, by utilizing Express, one gains access to a host of open source software to help solve common pain points in development. The framework is unopinionated, meaning it does not guide you one way or the other in terms of implementation or interface. Because it is unopinionated, the developer has more control and can use the framework to accomplish nearly any task; however, the power Express offers is easily abused. In this book, you will learn how to use the framework in the right way by exploring the following different styles of an application:

Setting up Express for a static site
Local user authentication
OAuth with passport
Profile pages
Testing

(For more resources related to this topic, see here.)

Setting up Express for a static site
To get our feet wet, we'll first go over how to respond to basic HTTP requests. In this example, we will handle several GET requests, responding first with plaintext and then with static HTML. However, before we get started, you must install two essential tools: node and npm, which is the node package manager. Navigate to https://nodejs.org/download/ to install node and npm.

Saying Hello, World in Express
For those unfamiliar with Express, we will start with a basic example: Hello World! We'll start with an empty directory. As with any Node.js project, we will run the following code to generate our package.json file, which keeps track of metadata about the project, such as dependencies, scripts, licenses, and even where the code is hosted:

$ npm init

The package.json file keeps track of all of our dependencies so that we don't have versioning issues, don't have to include dependencies with our code, and can deploy fearlessly. You will be prompted with a few questions. Choose the defaults for all except the entry point, which you should set to server.js. There are many generators out there that can help you generate new Express applications, but we'll create the skeleton this time around.

Let's install Express. To install a module, we use npm to install the package. We use the --save flag to tell npm to add the dependency to our package.json file; that way, we don't need to commit our dependencies to source control. We can just install them based on the contents of the package.json file (npm makes this easy):

$ npm install --save express

We'll be using Express v4.4.0 throughout this book. Warning: Express v4.x is not backwards compatible with the versions before it. You can create a new file server.js as follows:

var express = require('express');
var app = express();

app.get('/', function(req, res, next) {
  res.send('Hello, World!');
});

app.listen(3000);
console.log('Express started on port 3000');

This file is the entry point for our application. It is here that we generate an application, register routes, and finally listen for incoming requests on port 3000. The require('express') method returns a function that generates applications. We can create as many applications as we want; in this case, we only created one, which we assigned to the variable app. Next, we register a GET route that listens for GET requests on the server root, and when requested, sends the string 'Hello, World' to the client. Express has methods for all of the HTTP verbs, so we could have also done app.post, app.put, app.delete, or even app.all, which responds to all HTTP verbs. Finally, we start the app listening on port 3000, then log to standard out.

It's finally time to start our server and make sure everything works as expected:

$ node server.js

We can validate that everything is working by navigating to http://localhost:3000 in our browser or running curl -v localhost:3000 in your terminal.
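Before we move on to templates, it may help to see the other verb methods in action. The following snippet is our own illustrative addition and is not part of the book's server.js; the /greet path and the catch-all handler are assumptions made for the example.

// hypothetical additions to server.js, shown only to illustrate the other verb methods
app.post('/greet', function(req, res, next) {
  res.send('Hello from a POST route!');
});

// app.all matches every HTTP verb; registering it after the other routes
// (but before app.listen) makes it a simple catch-all
app.all('*', function(req, res, next) {
  res.status(404).send('Not found');
});

You could exercise the POST route with curl -X POST localhost:3000/greet.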
Jade templating
We are now going to extract the HTML we send to the client into a separate template. After all, it would be quite difficult to render full HTML pages simply by using res.send. To accomplish this, we will use a templating language frequently used in conjunction with Express: Jade. There are many templating languages that you can use with Express. We chose Jade because it greatly simplifies writing HTML and was created by the same developer as the Express framework.

$ npm install --save jade

After installing Jade, we're going to have to add the following code to server.js:

app.set('view engine', 'jade');
app.set('views', __dirname + '/views');

app.get('/', function(req, res, next) {
  res.render('index');
});

The preceding code sets the default view engine for Express, which is sort of like telling Express that in the future it should assume that, unless otherwise specified, templates are in the Jade templating language. Calling app.set sets a key-value pair for Express internals; you can think of this as a sort of application-wide configuration. We could call app.get('view engine') to retrieve our set value at any time. We also specify the folder that Express should look into to find view files. That means we should create a views directory in our application and add a file, index.jade, to it.

Alternatively, if you want to include many different template types, you could execute the following:

app.engine('jade', require('jade').__express);
app.engine('html', require('ejs').__express);

app.get('/html', function(req, res, next) {
  res.render('index.html');
});

app.get('/jade', function(req, res, next) {
  res.render('index.jade');
});

Here, we set custom template rendering based on the extension of the template we want to render. We use the Jade renderer for .jade extensions and the ejs renderer for .html extensions and expose both of our index files by different routes. This is useful if you choose one templating option and later want to switch to a new one in an incremental way. You can refer to the source for the most basic of templates.

Local user authentication
The majority of applications require user accounts. Some applications only allow authentication through third parties, but not all users are interested in authenticating through third parties for privacy reasons, so it is important to include a local option. Here, we will go over best practices when implementing local user authentication in an Express app. We'll be using MongoDB to store our users and Mongoose as an ODM (Object Document Mapper). Then, we'll leverage passport to simplify the session handling and provide a unified view of authentication.
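The excerpt does not show the database connection itself, so the following is a minimal sketch of what it could look like; the connection string and the mongo.js filename are our own assumptions, not part of the book's source.

// mongo.js - a hypothetical helper that connects Mongoose to a local MongoDB instance
var mongoose = require('mongoose');

mongoose.connect('mongodb://localhost/express_blueprints');

mongoose.connection.on('error', function(err) {
  console.error('MongoDB connection error: ' + err);
});

module.exports = mongoose;

Requiring this file once from server.js before registering models is enough; Mongoose queues model operations until the connection is open.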
Downloading the example code
You can download the example code files from your account at http://www.packtpub.com for all the Packt Publishing books you have purchased. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.

User object modeling
We will leverage passportjs to handle user authentication. Passport centralizes all of the authentication logic and provides convenient ways to authenticate locally in addition to third parties, such as Twitter, Google, Github, and so on. First, install passport and the local authentication strategy as follows:

$ npm install --save passport passport-local

In our first pass, we will implement a local authentication strategy, which means that users will be able to register locally for an account. We start by defining a user model using Mongoose. Mongoose provides a way to define schemas for objects that we want to store in MongoDB and then provides a convenient way to map between stored records in the database and an in-memory representation. Mongoose also provides convenient syntax to make many MongoDB queries and perform CRUD operations on models. Our user model will only have an e-mail, password, and timestamp for now. Before getting started, we need to install Mongoose:

$ npm install --save mongoose bcrypt validator

Now we define the schema for our user in models/user.js as follows:

var mongoose = require('mongoose');

var userSchema = new mongoose.Schema({
  email: {
    type: String,
    required: true,
    unique: true
  },
  password: {
    type: String,
    required: true
  },
  created_at: {
    type: Date,
    default: Date.now
  }
});

userSchema.pre('save', function(next) {
  if (!this.isModified('password')) {
    return next();
  }
  this.password = User.encryptPassword(this.password);
  next();
});

Here, we create a schema that describes our users. Mongoose has convenient ways to describe the required and unique fields as well as the type of data that each property should hold. Mongoose does all the validations required under the hood. We don't require many user fields for our first boilerplate application: e-mail, password, and timestamp to get us started.

We also use Mongoose middleware to rehash a user's password if and when they decide to change it. Mongoose exposes several hooks to run user-defined callbacks. In our example, we define a callback to be invoked before Mongoose saves a model. That way, every time a user is saved, we'll check to see whether their password was changed. Without this middleware, it would be possible to store a user's password in plaintext, which is not only a security vulnerability but would break authentication. Mongoose supports two kinds of middleware: serial and parallel. Parallel middleware can run asynchronous functions and gets an additional callback to invoke; you'll learn more about Mongoose middleware later in this book.

Now, we want to add validations to make sure that our data is correct. We'll use the validator library to accomplish this, as follows:

var validator = require('validator');

// create the model first so that we can attach validations to its schema paths
var User = mongoose.model('User', userSchema);

User.schema.path('email').validate(function(email) {
  return validator.isEmail(email);
});

User.schema.path('password').validate(function(password) {
  return validator.isLength(password, 6);
});

module.exports = User;

We added validations for e-mail and password length using a library called validator, which provides a lot of convenient validators for different types of fields. Validator has validations based on length, URL, int, upper case; essentially, anything you would want to validate (and don't forget to validate all user input!). We also added a host of helper functions regarding registration, authentication, as well as encrypting passwords that you can find in models/user.js. We added these to the user model to help encapsulate the variety of interactions we want using the abstraction of a user.
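Those helper functions are not reproduced in this excerpt, but since bcrypt is installed alongside Mongoose above, a password-hashing helper could look roughly like the following. This is our own sketch, not the book's code: the names encryptPassword and validPassword come from how they are called elsewhere in the article, while the bodies and the cost factor of 10 are assumptions.

// hypothetical helpers for models/user.js; they should be defined before
// mongoose.model('User', userSchema) is called
var bcrypt = require('bcrypt');

// static helper used by the pre-save hook above
userSchema.statics.encryptPassword = function(plaintext) {
  var salt = bcrypt.genSaltSync(10);
  return bcrypt.hashSync(plaintext, salt);
};

// instance method for checking a login attempt against the stored hash
userSchema.methods.validPassword = function(plaintext) {
  return bcrypt.compareSync(plaintext, this.password);
};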
For more information on Mongoose, see http://mongoosejs.com/. You can find more on passportjs at http://passportjs.org/.

This lays out the beginning of a design pattern called MVC: model, view, controller. The basic idea is that you encapsulate separate concerns in different objects: the model code knows about the database, storage, and querying; the controller code knows about routing and requests/responses; and the view code knows what to render for users.

Introducing Express middleware
Passport is authentication middleware that can be used with Express applications. Before diving into passport, we should go over Express middleware. Express is a Connect framework, which means it uses Connect middleware. Connect internally has a stack of functions that handle requests. When a request comes in, the first function in the stack is given the request and response objects along with the next() function. The next() function, when called, delegates to the next function in the middleware stack. Additionally, you can specify a path for your middleware, so it is only called for certain paths. Express lets you add middleware to an application using the app.use() function.

In fact, the HTTP handlers we already wrote are a special kind of middleware. Internally, Express has one level of middleware for the router, which delegates to the appropriate handler. Middleware is extraordinarily useful for logging, serving static files, error handling, and more. In fact, passport utilizes middleware for authentication. Before anything else happens, passport looks for a cookie in the request, finds metadata, and then loads the user from the database, adds it to req.user, and then continues down the middleware stack.

Setting up passport
Before we can make full use of passport, we need to tell it how to do a few important things. First, we need to instruct passport how to serialize a user to a session. Then, we need to deserialize the user from the session information. Finally, we need to tell passport how to tell if a given e-mail/password combination represents a valid user, as given in the following:

// passport.js
var passport = require('passport');
var LocalStrategy = require('passport-local').Strategy;
var User = require('mongoose').model('User');

passport.serializeUser(function(user, done) {
  done(null, user.id);
});

passport.deserializeUser(function(id, done) {
  User.findById(id, done);
});

Here, we tell passport that when we serialize a user, we only need that user's id. Then, when we want to deserialize a user from session data, we just look up the user by their ID! This is used in passport's middleware: after the request is finished, we take req.user and serialize the ID to our persistent session. When we first get a request, we take the ID stored in our session, retrieve the record from the database, and populate the request object with a user property.
All of this functionality is provided transparently by passport, as long as we provide definitions for these two functions as given in the following: function authFail(done) { done(null, false, { message: 'incorrect email/password combination' }); }   passport.use(new LocalStrategy(function(email, password, done) { User.findOne({    email: email }, function(err, user) {    if (err) return done(err);    if (!user) {      return authFail(done);    }    if (!user.validPassword(password)) {      return authFail(done);    }    return done(null, user); }); })); We tell passport how to authenticate a user locally. We create a new LocalStrategy() function, which, when given an e-mail and password, will try to lookup a user by e-mail. We can do this because we required the e-mail field to be unique, so there should only be one user. If there is no user, we return an error. If there is a user, but they provided an invalid password, we still return an error. If there is a user and they provided the correct password, then we tell passport that the authentication request was a success by calling the done callback with the valid user. Registering users Now, we add routes for registration, both a view with a basic form and backend logic to create a user. First, we will create a user controller. Up until now, we have thrown our routes in our server.js file, but this is generally bad practice. What we want to do is have separate controllers for each kind of route that we want. We have seen the model portion of MVC. Now it's time to take a look at controllers. Our user controller will have all the routes that manipulate the user model. Let's create a new file in a new directory, controllers/user.js: // controllers/user.js var User = require('mongoose').model('User');   module.exports.showRegistrationForm = function(req, res, next) { res.render('register'); };   module.exports.createUser = function(req, res, next) { User.register(req.body.email, req.body.password, function(err, user) {    if (err) return next(err);    req.login(user, function(err) {      if (err) return next(err);      res.redirect('/');    }); }); }; Note that the User model takes care of the validations and registration logic; we just provide callback. Doing this helps consolidate the error handling and generally makes the registration logic easier to understand. If the registration was successful, we call req.login, a function added by passport, which creates a new session for that user and that user will be available as req.user on subsequent requests. Finally, we register the routes. At this point, we also extract the routes we previously added to server.js to their own file. Let's create a new file called routes.js as follows: // routes.js app.get('/users/register', userRoutes.showRegistrationForm); app.post('/users/register', userRoutes.createUser); Now we have a file dedicated to associating controller handlers with actual paths that users can access. This is generally good practice because now we have a place to come visit and see all of our defined routes. It also helps unclutter our server.js file, which should be exclusively devoted to server configuration. For details, as well as the registration templates used, see the preceding code. Authenticating users We have already done most of the work required to authenticate users (or rather, passport has). Really, all we need to do is set up routes for authentication and a form to allow users to enter their credentials. 
First, we'll add handlers to our user controller: // controllers/user.js module.exports.showLoginForm = function(req, res, next) { res.render('login'); };   module.exports.createSession = passport.authenticate('local', { successRedirect: '/', failureRedirect: '/login' }); Let's deconstruct what's happening in our login post. We create a handler that is the result of calling passport.authenticate('local', …). This tells passport that the handler uses the local authentication strategy. So, when someone hits that route, passport will delegate to our LocalStrategy. If they provided a valid e-mail/password combination, our LocalStrategy will give passport the now authenticated user, and passport will redirect the user to the server root. If the e-mail/password combination was unsuccessful, passport will redirect the user to /login so they can try again. Then, we will bind these callbacks to routes in routes.js: app.get('/users/login', userRoutes.showLoginForm); app.post('/users/login', userRoutes.createSession); At this point, we should be able to register an account and login with those same credentials. (see tag 0.2 for where we are right now). OAuth with passport Now we will add support for logging into our application using Twitter, Google, and GitHub. This functionality is useful if users don't want to register a separate account for your application. For these users, allowing OAuth through these providers will increase conversions and generally make for an easier registration process for users. Adding OAuth to user model Before adding OAuth, we need to keep track of several additional properties on our user model. We keep track of these properties to make sure we can look up user accounts provided there is information to ensure we don't allow duplicate accounts and allow users to link multiple third-party accounts by using the following code: var userSchema = new mongoose.Schema({ email: {    type: String,    required: true,    unique: true }, password: {    type: String, }, created_at: {    type: Date,    default: Date.now }, twitter: String, google: String, github: String, profile: {    name: { type: String, default: '' },    gender: { type: String, default: '' },    location: { type: String, default: '' },    website: { type: String, default: '' },    picture: { type: String, default: '' } }, }); First, we add a property for each provider, in which we will store a unique identifier that the provider gives us when they authorize with that provider. Next, we will store an array of tokens, so we can conveniently access a list of providers that are linked to this account; this is useful if you ever want to let a user register through one and then link to others for viral marketing or extra user information. Finally, we keep track of some demographic information about the user that the providers give to us so we can provide a better experience for our users. Getting API tokens Now, we need to go to the appropriate third parties and register our application to receive application keys and secret tokens. We will add these to our configuration. We will use separate tokens for development and production purposes (for obvious reasons!). For security reasons, we will only have our production tokens as environment variables on our final deploy server, not committed to version control. 
I'll wait while you navigate to the third-party websites and add their tokens to your configuration as follows: // config.js twitter: {    consumerKey: process.env.TWITTER_KEY || 'VRE4lt1y0W3yWTpChzJHcAaVf',    consumerSecret: process.env.TWITTER_SECRET || 'TOA4rNzv9Cn8IwrOi6MOmyV894hyaJks6393V6cyLdtmFfkWqe',    callbackURL: '/auth/twitter/callback' }, google: {    clientID: process.env.GOOGLE_ID || '627474771522-uskkhdsevat3rn15kgrqt62bdft15cpu.apps.googleusercontent.com',    clientSecret: process.env.GOOGLE_SECRET || 'FwVkn76DKx_0BBaIAmRb6mjB',    callbackURL: '/auth/google/callback' }, github: {    clientID: process.env.GITHUB_ID || '81b233b3394179bfe2bc',    clientSecret: process.env.GITHUB_SECRET || 'de0322c0aa32eafaa84440ca6877ac5be9db9ca6',    callbackURL: '/auth/github/callback' } Of course, you should never commit your development keys publicly either. Be sure to either not commit this file or to use private source control. The best idea is to only have secrets live on machines ephemerally (usually as environment variables). You especially should not use the keys that I provided here! Third-party registration and login Now we need to install and implement the various third-party registration strategies. To install third-party registration strategies run the following command: npm install --save passport-twitter passport-google-oAuth passport-github Most of these are extraordinarily similar, so I will only show the TwitterStrategy, as follows: passport.use(new TwitterStrategy(config.twitter, function(req, accessToken, tokenSecret, profile, done) { User.findOne({ twitter: profile.id }, function(err, existingUser) {      if (existingUser) return done(null, existingUser);      var user = new User();      // Twitter will not provide an email address. Period.      // But a person's twitter username is guaranteed to be unique      // so we can "fake" a twitter email address as follows:      // [email protected] user.email = profile.username + "@twitter." + config.domain + ".com";      user.twitter = profile.id;      user.tokens.push({ kind: 'twitter', accessToken: accessToken, tokenSecret: tokenSecret });      user.profile.name = profile.displayName;      user.profile.location = profile._json.location;      user.profile.picture = profile._json.profile_image_url;      user.save(function(err) {        done(err, user);      });    }); })); Here, I included one example of how we would do this. First, we pass a new TwitterStrategy to passport. The TwitterStrategy takes our Twitter keys and callback information and a callback is used to make sure we can register the user with that information. If the user is already registered, then it's a no-op; otherwise we save their information and pass along the error and/or successfully saved user to the callback. For the others, refer to the source. Profile pages It is finally time to add profile pages for each of our users. To do so, we're going to discuss more about Express routing and how to pass request-specific data to Jade templates. Often times when writing a server, you want to capture some portion of the URL to use in the controller; this could be a user id, username, or anything! We'll use Express's ability to capture URL parts to get the id of the user whose profile page was requested. URL params Express, like any good web framework, supports extracting data from URL parts. 
For example, you can do the following:

app.get('/users/:id', function(req, res, next) {
  console.log(req.params.id);
});

In the preceding example, we will print whatever comes after /users/ in the request URL. This allows an easy way to specify per-user routes, or routes that only make sense in the context of a specific user; that is, a profile page only makes sense when you specify a specific user. We will use this kind of routing to implement our profile page. For now, we want to make sure that only the logged-in user can see their own profile page (we can change this functionality later):

app.get('/users/:id', function(req, res, next) {
  if (!req.user || (req.user.id != req.params.id)) {
    return next('Not found');
  }
  res.render('users/profile', { user: req.user.toJSON() });
});

Here, we check first that the user is signed in and that the requested user's id is the same as the logged-in user's id. If it isn't, then we return an error. If it is, then we render the users/profile.jade template with req.user as the data.

Profile templates
We already looked at models and controllers at length, but our templates have been underwhelming. Finally, we'll show how to write some basic Jade templates. This section will serve as a brief introduction to the Jade templating language, but does not try to be comprehensive. The code for Profile templates is as follows:

html
  body
    h1
      =user.email
    h2
      =user.created_at
    - for (var prop in user.profile)
      if user.profile[prop]
        h4
          =prop + "=" + user.profile[prop]

Notably, because in the controller we passed the user in to the view, we can access the variable user and it refers to the logged-in user! We can execute arbitrary JavaScript to render into the template by prefixing it with =. In these blocks, we can do anything we would normally do, including string concatenation, method invocation, and so on. Similarly, we can include JavaScript code that is not intended to be written as HTML by prefixing it with -, like we did with the for loop. This basic template prints out the user's e-mail, the created_at timestamp, as well as all of the properties in their profile, if any. For a more in-depth look at Jade, please see http://jade-lang.com/reference/.

Testing
Testing is essential for any application. I will not dwell on the whys, but instead assume that you are angry with me for skipping this topic in the previous sections. Testing Express applications tends to be relatively straightforward and painless. The general format is that we make fake requests and then make certain assertions about the responses. We could also implement finer-grained unit tests for more complex logic, but up until now almost everything we did is straightforward enough to be tested on a per-route basis. Additionally, testing at the API level provides a more realistic view of how real customers will be interacting with your website and makes tests less brittle in the face of refactoring code.

Introducing Mocha
Mocha is a simple, flexible test framework runner. First, I would suggest installing Mocha globally so you can easily run tests from the command line as follows:

$ npm install --save-dev -g mocha

The --save-dev option saves mocha as a development dependency, meaning we don't have to install Mocha on our production servers. Mocha is just a test runner. We also need an assertion library.
There are a variety of solutions, but should.js syntax, written by the same person as Express and Mocha, gives a clean syntax to make assertions: $ npm install --save-dev should The should.js syntax provides BDD assertions, such as 'hello'.should.equal('hello') and [1,2].should.have.length(2). We can start with a Hello World test example by creating a test directory with a single file, hello-world.js, as shown in the following code: var should = require('should');   describe('The World', function() { it('should say hello', function() {    'Hello, World'.should.equal('Hello, World'); }); it('should say hello asynchronously!', function(done) {    setTimeout(function() {      'Hello, World'.should.equal('Hello, World');      done();    }, 300); }); }); We have two different tests both in the same namespace, The World. The first test is an example of a synchronous test. Mocha executes the function we give to it, sees that no exception gets thrown and the test passes. If, instead, we accept a done argument in our callback, as we do in the second example, Mocha will intelligently wait until we invoke the callback before checking the validity of our test. For the most part, we will use the second version, in which we must explicitly invoke the done argument to finish our test because it makes more sense to test Express applications. Now, if we go back to the command line, we should be able to run Mocha (or node_modules/.bin/mocha if you didn't install it globally) and see that both of the tests we wrote pass! Testing API endpoints Now that we have a basic understanding of how to run tests using Mocha and make assertions with should syntax, we can apply it to test local user registration. First, we need to introduce another npm module that will help us test our server programmatically and make assertions about what kind of responses we expect. The library is called supertest: $ npm install --save-dev supertest The library makes testing Express applications a breeze and provides chainable assertions. Let's take a look at an example usage to test our create user route,as shown in the following code: var should = require('should'),    request = require('supertest'),    app = require('../server').app,    User = require('mongoose').model('User');   describe('Users', function() { before(function(done) {    User.remove({}, done); }); describe('registration', function() {    it('should register valid user', function(done) {      request(app)        .post('/users/register')       .send({          email: "[email protected]",          password: "hello world"        })        .expect(302)        .end(function(err, res) {          res.text.should.containEql("Redirecting to /");          done(err);        });    }); }); }); First, notice that we used two namespaces: Users and registration. Now, before we run any tests, we remove all users from the database. This is useful to ensure we know where we're starting the tests This will delete all of your saved users though, so it's useful to use a different database in the test environment. Node detects the environment by looking at the NODE_ENV environment variable. Typically it is test, development, staging, or production. We can do so by changing the database URL in our configuration file to use a different local database when in a test environment and then run Mocha tests with NODE_ENV=test mocha. Now, on to the interesting bits! Supertest exposes a chainable API to make requests and assertions about responses. To make a request, we use request(app). 
From there, we specify the HTTP method and path. Then, we can specify a JSON body to send to the server; in this case, an example user registration form. On registration, we expect a redirect, which is a 302 response. If that assertion fails, then the err argument in our callback will be populated, and the test will fail when we use done(err). Additionally, we validate that we were redirected to the route we expect, the server root /. Automate builds and deploys All of this development is relatively worthless without a smooth process to build and deploy your application. Fortunately, the node community has written a variety of task runners. Among these are Grunt and Gulp, two of the most popular task runners. Both work seamlessly with Express and provide a set of utilities for us to use, including concatenating and uglifying JavaScript, compiling sass/less, and reloading the server on local file changes. We'll focus on Grunt, for simplicity. Introducing the Gruntfile Grunt itself is a simple task runner, but its extensibility and plugin architecture lets you install third-party scripts to run in predefined tasks. To give us an idea of how we might use Grunt, we're going to write our css in sass and then use Grunt to compile sass to css. Through this example, we'll explore the different ideas that Grunt introduces. First, you need to install cli globally to install the plugin that compiles sass to css: $ npm install -g grunt-cli $ npm install --save grunt grunt-contrib-sass Now we need to create Gruntfile.js, which contains instructions for all of the tasks and build targets that we need. To do this perform the following: // Gruntfile.js module.exports = function(grunt) { grunt.loadNpmTasks('grunt-contrib-sass'); grunt.initConfig({    sass: {      dist: {        files: [{          expand: true,          cwd: "public/styles",          src: ["**.scss"],          dest: "dist/styles",          ext: ".css"        }]      }    } }); } Let's go over the major parts. Right at the top, we require the plugin we will use, grunt-contrib-sass. This tells grunt that we are going to configure a task called sass. In our definition of the task sass, we specify a target, dist, which is commonly used for tasks that produce production files (minified, concatenated, and so on). In that task, we build our file list dynamically, telling Grunt to look in /public/styles/ recursively for all .scss files, then compile them all to the same paths in /dist/styles. It is useful to have two parallel static directories, one for development and one for production, so we don't have to look at minified code in development. We can invoke this target by executing grunt sass or grunt sass:dist. It is worth noting that we don't explicitly concatenate the files in this task, but if we use @imports in our main sass file, the compiler will concatenate everything for us. We can also configure Grunt to run our test suite. To do this, let's add another plugin -- npm install --save-dev grunt-mocha-test. Now we have to add the following code to our Gruntfile.js file: grunt.loadNpmTasks('grunt-mocha-test'); grunt.registerTask('test', 'mochaTest'); ...   mochaTest: {    test: {      src: ["test/**.js"]    } } Here, the task is called mochaTest and we register a new task called test that simply delegates to the mochaTest task. This way, it is easier to remember how to run tests. Similarly, we could have specified a list of tasks to run if we passed an array of strings as the second argument to registerTask. 
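For instance, a combined task could compile the styles and then run the test suite with one command; the build task name here is our own example and is not defined in the book's Gruntfile:

// hypothetical addition to Gruntfile.js
grunt.registerTask('build', ['sass:dist', 'mochaTest']);

Running grunt build would then execute both tasks in order.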
This is a sampling of what can be accomplished with Grunt. For an example of a more robust Gruntfile, check out the source.

Continuous integration with Travis
Travis CI provides free continuous integration for open source projects as well as paid options for closed source applications. It uses a git hook to automatically test your application after every push. This is useful to ensure no regression was introduced. Also, there could be dependency problems only revealed in CI that local development masks; Travis is the first line of defense for these bugs. It takes your source, runs npm install to install the dependencies specified in package.json, and then runs npm test to run your test suite. Travis accepts a configuration file called .travis.yml. These typically look like this:

language: node_js
node_js:
  - "0.11"
  - "0.10"
  - "0.8"
services:
  - mongodb

We can specify the versions of node that we want to test against as well as the services that we rely on (specifically MongoDB). Now we have to update our test command in package.json to run grunt test. Finally, we have to set up a webhook for the repository in question. We can do this on Travis by enabling the repository. Now we just have to push our changes and Travis will make sure all the tests pass! Travis is extremely flexible and you can use it to accomplish most tasks related to continuous integration, including automatically deploying successful builds.

Deploying Node.js applications
One of the easiest ways to deploy Node.js applications is to utilize Heroku, a platform as a service provider. Heroku has its own toolbelt to create and deploy Heroku apps from your machine. Before getting started with Heroku, you will need to install its toolbelt. Please go to https://toolbelt.heroku.com/ to download the Heroku toolbelt. Once installed, you can log in to Heroku or register via the web UI and then run heroku login.

Heroku uses a special file, called the Procfile, which specifies exactly how to run your application. Our Procfile looks like this:

web: node server.js

Extraordinarily simple: in order to run the web server, just run node server.js. In order to verify that our Procfile is correct, we can run the following locally:

$ foreman start

Foreman looks at the Procfile and uses that to try to start our server. Once that runs successfully, we need to create a new application and then deploy our application to Heroku. Be sure to commit the Procfile to version control:

$ heroku create
$ git push heroku master

Heroku will create a new application and URL in Heroku, as well as a git remote repository named heroku. Pushing to that remote actually triggers a deploy of your code. If you do all of this, unfortunately your application will not work. We don't have a Mongo instance for our application to talk to! First we have to request MongoDB from Heroku:

$ heroku addons:add mongolab // don't worry, it's free

This spins up a shared MongoDB instance and gives our application an environment variable named MONGOLAB_URI, which we should use as our MongoDB connect URI. We need to change our configuration file to reflect these changes. In our configuration file, in production, for our database URL, we should look at the environment variable MONGOLAB_URI. Also, be sure that Express is listening on process.env.PORT || 3000, or else you will receive strange errors. With all of that set up, we can commit our changes and push the changes once again to Heroku. Hopefully, this time it works!
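The configuration change described above could look roughly like the following; the exact shape of the book's config.js is not shown in this excerpt, so treat the property names here as our own illustration:

// hypothetical production section of config.js
module.exports = {
  db: process.env.MONGOLAB_URI || 'mongodb://localhost/blog',
  port: process.env.PORT || 3000
};

With this in place, server.js can call app.listen(config.port) and pass config.db to mongoose.connect, so the same code runs both locally and on Heroku.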
To view the application logs for debugging purposes, just use the Heroku toolbelt:

$ heroku logs

One last thing about deploying Express applications: sometimes applications crash; software isn't perfect. We should anticipate crashes and have our application respond accordingly (by restarting itself). There are many server monitoring tools, including pm2 and forever. We use forever because of its simplicity.

$ npm install --save forever

Then, we update our Procfile to reflect our use of forever:

// Procfile
web: node_modules/.bin/forever server.js

Now, forever will automatically restart our application if it crashes for any strange reason. You can also set up Travis to automatically push successful builds to your server, but that goes beyond the deployment we will do in this book.

Summary
In this article, we got our feet wet in the world of Node and using the Express framework. We went over everything from Hello World and MVC to testing and deployments. You should feel comfortable using basic Express APIs, but also feel empowered to own a Node.js application across the entire stack.

Resources for Article:
Further resources on this subject:
Testing a UI Using WebDriverJS [article]
Applications of WebRTC [article]
Amazon Web Services [article]

NodeJS: Building a Maintainable Codebase
Benjamin Reed
06 May 2015
8 min read

NodeJS has become the most anticipated web development technology since Ruby on Rails. This is not an introduction to Node. First, you must realize that NodeJS is not a direct competitor to Rails or Django. Instead, Node is a collection of libraries that allow JavaScript to run on the v8 runtime. Node powers many tools, and some of the tools have nothing to do with a scaling web application. For instance, GitHub’s Atom editor is built on top of Node. Its web application frameworks, like Express, are the competitors. This article can apply to all environments using Node. Second, Node is designed under the asynchronous ideology. Not all of the operations in Node are asynchronous. Many libraries offer synchronous and asynchronous options. A Node developer must decipher the best operation for his or her needs. Third, you should have a solid understanding of the concept of a callback in Node. Over the course of two weeks, a team attempted to refactor a Rails app to be an Express application. We loved the concepts behind Node, and we truly believed that all we needed was a barebones framework. We transferred our controller logic over to Express routes in a weekend. As a beginning team, I will analyze some of the pitfalls that we came across. Hopefully, this will help you identify strategies to tackle Node with your team. First, attempt to structure callbacks and avoid anonymous functions. As we added more and more logic, we added more and more callbacks. Everything was beautifully asynchronous, and our code would successfully run. However, we soon found ourselves debugging an anonymous function nested inside of other anonymous functions. In other words, the codebase was incredibly difficult to follow. Anyone starting out with Node could potentially notice the novice “spaghetti code.” Here’s a simple example of nested callbacks: router.put('/:id', function(req, res) { console.log("attempt to update bathroom"); models.User.find({ where: {id: req.param('id')} }).success(function (user) { var raw_cell = req.param('cell') ? req.param('cell') : user.cell; var raw_email = req.param('email') ? req.param('email') : user.email; var raw_username = req.param('username') ? req.param('username') : user.username; var raw_digest = req.param('digest') ? req.param('digest') : user.digest; user.cell = raw_cell; user.email = raw_email; user.username = raw_username; user.digest = raw_digest; user.updated_on = new Date(); user.save().success(function () { res.json(user); }).error(function () { res.json({"status": "error"}); }); }) .error(function() { res.json({"status": "error"}); }) }); Notice that there are many success and error callbacks. Locating a specific callback is not difficult if the whitespace is perfect or the developer can count closing brackets back up to the destination. However, this is pretty nasty to any newcomer. And this illegibility will only increase as the application becomes more complex. A developer may get this response: {"status": "error"} Where did this response come from? Did the ORM fail to update the object? Did it fail to find the object in the first place? A developer could add descriptions to the json in the chained error callbacks, but there has to be a better way. Let’s extract some of the callbacks into separate methods: router.put('/:id', function(req, res) { var id = req.param('id'); var query = { where: {id: id} }; // search for user models.User.find(query).success(function (user) { // parse req parameters var raw_cell = req.param('cell') ? 
req.param('cell') : user.cell; var raw_email = req.param('email') ? req.param('email') : user.email; var raw_username = req.param('username') ? req.param('username') : user.username; // set user attributes user.cell = raw_cell; user.email = raw_email; user.username = raw_username; user.updated_on = new Date(); // attempt to save user user.save() .success(SuccessHandler.userSaved(res, user)) .error(ErrorHandler.userNotSaved(res, id)); }) .error(ErrorHandler.userNotFound(res, id)) }); var ErrorHandler = { userNotFound: function(res, user_id) { res.json({"status": "error", "description": "The user with the specified id could not be found.", "user_id": user_id}); }, userNotSaved: function(res, user_id) { res.json({"status": "error", "description": "The update to the user with the specified id could not be completed.", "user_id": user_id}); } }; var SuccessHandler = { userSaved: function(res, user) { res.json(user); } } This seemed to help clean up our minimal sample. There is now only one anonymous function. The code seems to be a lot more readable and independent. However, our code is still cluttered by chaining success and error callbacks. One could make these global mutable variables, or, perhaps we can consider another approach. Futures, also known as promises, are becoming more prominent. Twitter has adopted them in Scala. It is definitely something to consider. Next, do what makes your team comfortable and productive. At the same time, do not compromise the integrity of the project. There are numerous posts that encourage certain styles over others. There are also extensive posts on the subject of CoffeeScript. If you aren’t aware, CoffeeScript is a language with some added syntactic flavor that compiles to JavaScript. Our team was primarily ruby developers, and it definitely appealed to us. When we migrated some of the project over to CoffeeScript, we found that our code was a lot shorter and appeared more legible. GitHub uses CoffeeScript for the Atom text editor to this day, and the Rails community has openly embraced it. The majority of node module documentation will use JavaScript, so CoffeeScript developers will have to become acquainted with translation. There are some problems with CoffeeScript being ES6 ready, and there are some modules that are clearly not meant to be utilized in CoffeeScript. CoffeeScript is an open source project, but it has appears to have a good backbone and a stable community. If your developers are more comfortable with it, utilize it. When it comes to open source projects, everyone tends to trust them. In the purest form, open source projects are absolutely beautiful. They make the lives of all of the developers better. Nobody has to re-implement the wheel unless they choose. Obviously, both Node and CoffeeScript are open source. However, the community is very new, and it is dangerous to assume that any package you find on NPM is stable. For us, the problem occurred when we searched for an ORM. We truly missed ActiveRecord, and we assumed that other projects would work similarly.  We tried several solutions, and none of them interacted the way we wanted. Besides expressing our entire schema in a JavaScript format, we found relations to be a bit of a hack. Settling on one, we ran our server. And our database cleared out. That’s fine in development, but we struggled to find a way to get it into production. We needed more documentation. Also, the module was not designed with CoffeeScript in mind. We practically needed to revert to JavaScript. 
In contrast, the Node community has openly embraced some NoSQL databases, such as MongoDB. They are definitely worth considering.   Either way, make sure that your team’s dependencies are very well documented. There should be a written documentation for each exposed object, function, etc. To sum everything up, this article comes down to two fundamental things learned in any computer science class: write modular code and document everything. Do your research on Node and find a style that is legible for your team and any newcomers. A NodeJS project can only be maintained if developers utilizing the framework recognize the importance of the project in the future. If your code is messy now, it will only become messier. If you cannot find necessary information in a module’s documentation, you probably will miss other information when there is a problem in production. Don’t take shortcuts. A node application can only be as good as its developers and dependencies. About the Author Benjamin Reed began Computer Science classes at a nearby university in Nashville during his sophomore year in high school. Since then, he has become an advocate for open source. He is now pursing degrees in Computer Science and Mathematics fulltime. The Ruby community has intrigued him, and he openly expresses support for the Rails framework. When asked, he believes that studying Rails has led him to some of the best practices and, ultimately, has made him a better programmer. iOS development is one of his hobbies, and he enjoys scouting out new projects on GitHub. On GitHub, he’s appropriately named @codeblooded. On Twitter, he’s @benreedDev.

Less with External Applications and Frameworks
Packt
30 Apr 2015
11 min read

In this article by Bass Jobsen, author of the book Less Web Development Essentials - Second Edition, we will cover the following topics: WordPress and Less Using Less with the Play framework, AngularJS, Meteor, and Rails (For more resources related to this topic, see here.) WordPress and Less Nowadays, WordPress is not only used for weblogs, but it can also be used as a content management system for building a website. The WordPress system, written in PHP, has been split into the core system, plugins, and themes. The plugins add additional functionalities to the system, and the themes handle the look and feel of a website built with WordPress. They work independently of each other and are also independent of the theme. The theme does not depend on plugins. WordPress themes define the global CSS for a website, but every plugin can also add its own CSS code. The WordPress theme developers can use Less to compile the CSS code of the themes and the plugins. Using the Sage theme by Roots with Less Sage is a WordPress starter theme. You can use it to build your own theme. The theme is based on HTML5 Boilerplate (http://html5boilerplate.com/) and Bootstrap. Visit the Sage theme website at https://roots.io/sage/. Sage can also be completely built using Gulp. More information about how to use Gulp and Bower for the WordPress development can be found at https://roots.io/sage/docs/theme-development/. After downloading Sage, the Less files can be found at assets/styles/. These files include Bootstrap's Less files. The assets/styles/main.less file imports the main Bootstrap Less file, bootstrap.less. Now, you can edit main.less to customize your theme. You will have to rebuild the Sage theme after the changes you make. You can use all of the Bootstrap's variables to customize your build. JBST with a built-in Less compiler JBST is also a WordPress starter theme. JBST is intended to be used with the so-called child themes. More information about the WordPress child themes can be found at https://codex.wordpress.org/Child_Themes. After installing JBST, you will find a Less compiler under Appearance in your Dashboard pane, as shown in the following screenshot: JBST's built-in Less compiler in the WordPress Dashboard The built-in Less compiler can be used to fully customize your website using Less. Bootstrap also forms the skeleton of JBST, and the default settings are gathered by the a11y bootstrap theme mentioned earlier. JBST's Less compiler can be used in the following different ways: First, the compiler accepts any custom-written Less (and CSS) code. For instance, to change the color of the h1 elements, you should simply edit and recompile the code as follows: h1 {color: red;} Secondly, you can edit Bootstrap's variables and (re)use Bootstrap's mixins. To set the background color of the navbar component and add a custom button, you can use the code block mentioned here in the Less compiler: @navbar-default-bg:             blue; .btn-colored { .button-variant(blue;red;green); } Thirdly, you can set JBST's built-in Less variables as follows: @footer_bg_color: black; Lastly, JBST has its own set of mixins. To set a custom font, you can edit the code as shown here: .include-custom-font(@family: arial,@font-path, @path:   @custom-font-dir, @weight: normal, @style: normal); In the preceding code, the parameters mentioned were used to set the font name (@family) and the path name to the font files (@path/@font-path). The @weight and @style parameters set the font's properties. 
For more information, visit https://github.com/bassjobsen/Boilerplate-JBST-Child-Theme. More Less code blocks can also be added to a special file (wpless2css/wpless2css.less or less/custom.less); these files will give you the option to add, for example, a library of prebuilt mixins. After adding the library using this file, the mixins can also be used with the built-in compiler. The Semantic UI WordPress theme The Semantic UI, as discussed earlier, offers its own WordPress plugin. The plugin can be downloaded from https://github.com/ProjectCleverWeb/Semantic-UI-WordPress. After installing and activating this theme, you can use your website directly with the Semantic UI. With the default setting, your website will look like the following screenshot: Website built with the Semantic UI WordPress theme WordPress plugins and Less As discussed earlier, the WordPress plugins have their own CSS. This CSS will be added to the page like a normal style sheet, as shown here: <link rel='stylesheet' id='plugin-name'   href='//domain/wp-content/plugin-name/plugin-name.css?ver=2.1.2'     type='text/css' media='all' /> Unless a plugin provides the Less files for their CSS code, it will not be easy to manage its styles with Less. The WP Less to CSS plugin The WP Less to CSS plugin, which can be found at http://wordpress.org/plugins/wp-less-to-css/, offers the possibility of styling your WordPress website with Less. As seen earlier, you can enter the Less code along with the built-in compiler of JBST. This code will then be compiled into the website's CSS. This plugin compiles Less with the PHP Less compiler, Less.php. Using Less with the Play framework The Play framework helps you in building lightweight and scalable web applications by using Java or Scala. It will be interesting to learn how to integrate Less with the workflow of the Play framework. You can install the Play framework from https://www.playframework.com/. To learn more about the Play framework, you can also read, Learning Play! Framework 2, Andy Petrella, Packt Publishing. To read Petrella's book, visit https://www.packtpub.com/web-development/learning-play-framework-2. To run the Play framework, you need JDK 6 or later. The easiest way to install the Play framework is by using the Typesafe activator tool. After installing the activator tool, you can run the following command: > activator new my-first-app play-scala The preceding command will install a new app in the my-first-app directory. Using the play-java option instead of the play-scala option in the preceding command will lead to the installation of a Java-based app. Later on, you can add the Scala code in a Java app or the Java code in a Scala app. After installing a new app with the activator command, you can run it by using the following commands: cd my-first-app activator run Now, you can find your app at http://localhost:9000. To enable the Less compilation, you should simply add the sbt-less plugin to your plugins.sbt file as follows: addSbtPlugin("com.typesafe.sbt" % "sbt-less" % "1.0.6") After enabling the plugin, you can edit the build.sbt file so as to configure Less. You should save the Less files into app/assets/stylesheets/. Note that each file in app/assets/stylesheets/ will compile into a separate CSS file. 
The CSS files will be saved in public/stylesheets/ and should be called in your templates with the HTML code shown here:

<link rel="stylesheet" href="@routes.Assets.at("stylesheets/main.css")">

In case you are using a library with more files imported into the main file, you can define the filters in the build.sbt file. The filters for these so-called partial source files can look like the following code:

includeFilter in (Assets, LessKeys.less) := "*.less"
excludeFilter in (Assets, LessKeys.less) := "_*.less"

The preceding filters ensure that the files starting with an underscore are not compiled into CSS.

Using Bootstrap with the Play framework
Bootstrap is a CSS framework. Bootstrap's Less code includes many files. Keeping your code up-to-date by using partials, as described in the preceding section, will not work well. Alternatively, you can use WebJars with Play for this purpose. To enable the Bootstrap WebJar, you should add the code shown here to your build.sbt file:

libraryDependencies += "org.webjars" % "bootstrap" % "3.3.2"

When using the Bootstrap WebJar, you can import Bootstrap into your project as follows:

@import "lib/bootstrap/less/bootstrap.less";

AngularJS and Less
AngularJS is a structural framework for dynamic web apps. It extends the HTML syntax, and this enables you to create dynamic web views. Of course, you can use AngularJS with Less. You can read more about AngularJS at https://angularjs.org/. The HTML code shown here will give you an example of what repeating the HTML elements with AngularJS will look like:

<!doctype html>
<html ng-app>
<head>
  <title>My Angular App</title>
</head>
<body>
  <ul>
    <li ng-repeat="item in [1,2,3]">{{ item }}</li>
  </ul>
  <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.3.12/angular.min.js"></script>
</body>
</html>

This code should make your page look like the following screenshot:

Repeating the HTML elements with AngularJS

The ngBoilerplate system
The ngBoilerplate system is an easy way to start a project with AngularJS. The project comes with a directory structure for your application and a Grunt build process, including a Less task and other useful libraries. To start your project, you should simply run the following commands on your console:

> git clone git://github.com/ngbp/ngbp
> cd ngbp
> sudo npm -g install grunt-cli karma bower
> npm install
> bower install
> grunt watch

And then, open ///path/to/ngbp/build/index.html in your browser. After installing ngBoilerplate, you can write the Less code into src/less/main.less. By default, only src/less/main.less will be compiled into CSS; other libraries and other code should be imported into this file.

Meteor and Less
Meteor is a complete open-source platform for building web and mobile apps in pure JavaScript. Meteor focuses on fast development. You can publish your apps for free on Meteor's servers. Meteor is available for Linux and OS X. You can also install it on Windows. Installing Meteor is as simple as running the following command on your console:

> curl https://install.meteor.com | /bin/sh

You should install the Less package for compiling the CSS code of the app with Less. You can install the Less package by running the command shown here:

> meteor add less

Note that the Less package compiles every file with the .less extension into CSS. For each file with the .less extension, a separate CSS file is created.
When you use partial Less files that should only be imported (with the @import directive) and not compiled into CSS themselves, you should give these partials the .import.less extension. When using CSS frameworks or libraries with many partials, renaming the files by adding the .import.less extension will hinder you in updating your code. Also, running postprocess tasks for the CSS code is not always possible. Many packages for Meteor are available at https://atmospherejs.com/. Some of these packages can help you solve the issue with using partials mentioned earlier. To use Bootstrap, you can use the meteor-bootstrap package. The meteor-bootstrap package can be found at https://github.com/Nemo64/meteor-bootstrap. The meteor-bootstrap package requires the installation of the Less package. Other packages provide postprocess tasks, such as autoprefixing your code. Ruby on Rails and Less Ruby on Rails, or Rails for short, is a web application development framework written in the Ruby language. Those who want to start developing with Ruby on Rails can read the Getting Started with Rails guide, which can be found at http://guides.rubyonrails.org/getting_started.html. In this section, you can read how to integrate Less into a Ruby on Rails app. After installing the tools and components required for starting with Rails, you can launch a new application by running the following command on your console: > rails new blog Now, you should integrate Less with Rails. You can use less-rails (https://github.com/metaskills/less-rails) to bring Less to Rails. Open the Gemfile file, comment out the sass-rails gem, and add the less-rails gem, as shown here: #gem 'sass-rails', '~> 5.0' gem 'less-rails' # Less gem 'therubyracer' # Ruby Then, create a controller called welcome with an action called index by running the following command: > bin/rails generate controller welcome index The preceding command will generate app/views/welcome/index.html.erb. Open app/views/welcome/index.html.erb and make sure that it contains the HTML code shown here: <h1>Welcome#index</h1> <p>Find me in app/views/welcome/index.html.erb</p> The next step is to create a file, app/assets/stylesheets/welcome.css.less, with the Less code. The Less code in app/assets/stylesheets/welcome.css.less looks as follows: @color: red; h1 { color: @color; } Now, start a web server with the following command: > bin/rails server Finally, you can visit the application at http://localhost:3000/. The application should look like the example shown here: The Rails app Summary In this article, you learned how to use Less with WordPress, the Play framework, AngularJS, Meteor, and Ruby on Rails. Resources for Article: Further resources on this subject: Media Queries with Less [article] Bootstrap 3 and other applications [article] Getting Started with Bootstrap [article]

Using Mock Objects to Test Interactions

Packt
23 Apr 2015
25 min read
In this article by Siddharta Govindaraj, author of the book Test-Driven Python Development, we will look at the Event class. The Event class is very simple: receivers can register with the event to be notified when the event occurs. When the event fires, all the receivers are notified of the event. (For more resources related to this topic, see here.) A more detailed description is as follows: Event classes have a connect method, which takes a method or function to be called when the event fires When the fire method is called, all the registered callbacks are called with the same parameters that are passed to the fire method Writing tests for the connect method is fairly straightforward—we just need to check that the receivers are being stored properly. But, how do we write the tests for the fire method? This method does not change any state or store any value that we can assert on. The main responsibility of this method is to call other methods. How do we test that this is being done correctly? This is where mock objects come into the picture. Unlike ordinary unit tests that assert on object state, mock objects are used to test that the interactions between multiple objects occurs as it should. Hand writing a simple mock To start with, let us look at the code for the Event class so that we can understand what the tests need to do. The following code is in the file event.py in the source directory: class Event:    """A generic class that provides signal/slot functionality"""      def __init__(self):        self.listeners = []      def connect(self, listener):        self.listeners.append(listener)      def fire(self, *args, **kwargs):        for listener in self.listeners:            listener(*args, **kwargs) The way this code works is fairly simple. Classes that want to get notified of the event should call the connect method and pass a function. This will register the function for the event. Then, when the event is fired using the fire method, all the registered functions will be notified of the event. The following is a walk-through of how this class is used: >>> def handle_event(num): ...   print("I got number {0}".format(num)) ... >>> event = Event() >>> event.connect(handle_event) >>> event.fire(3) I got number 3 >>> event.fire(10) I got number 10 As you can see, every time the fire method is called, all the functions that registered with the connect method get called with the given parameters. So, how do we test the fire method? The walk-through above gives a hint. What we need to do is to create a function, register it using the connect method, and then verify that the method got notified when the fire method was called. The following is one way to write such a test: import unittest from ..event import Event   class EventTest(unittest.TestCase):    def test_a_listener_is_notified_when_an_event_is_raised(self):        called = False        def listener():            nonlocal called            called = True          event = Event()        event.connect(listener)        event.fire()        self.assertTrue(called) Put this code into the test_event.py file in the tests folder and run the test. The test should pass. The following is what we are doing: First, we create a variable named called and set it to False. Next, we create a dummy function. When the function is called, it sets called to True. Finally, we connect the dummy function to the event and fire the event. 
If the dummy function was successfully called when the event was fired, then the called variable would be changed to True, and we assert that the variable is indeed what we expected. The dummy function we created above is an example of a mock. A mock is simply an object that is substituted for a real object in the test case. The mock then records some information such as whether it was called, what parameters were passed, and so on, and we can then assert that the mock was called as expected. Talking about parameters, we should write a test that checks that the parameters are being passed correctly. The following is one such test:    def test_a_listener_is_passed_right_parameters(self):        params = ()        def listener(*args, **kwargs):            nonlocal params            params = (args, kwargs)        event = Event()        event.connect(listener)        event.fire(5, shape="square")        self.assertEquals(((5, ), {"shape":"square"}), params) This test is the same as the previous one, except that it saves the parameters that are then used in the assert to verify that they were passed properly. At this point, we can see some repetition coming up in the way we set up the mock function and then save some information about the call. We can extract this code into a separate class as follows: class Mock:    def __init__(self):        self.called = False        self.params = ()      def __call__(self, *args, **kwargs):        self.called = True        self.params = (args, kwargs) Once we do this, we can use our Mock class in our tests as follows: class EventTest(unittest.TestCase):    def test_a_listener_is_notified_when_an_event_is_raised(self):        listener = Mock()        event = Event()        event.connect(listener)        event.fire()        self.assertTrue(listener.called)      def test_a_listener_is_passed_right_parameters(self):        listener = Mock()        event = Event()        event.connect(listener)        event.fire(5, shape="square")        self.assertEquals(((5, ), {"shape": "square"}),             listener.params) What we have just done is to create a simple mocking class that is quite lightweight and good for simple uses. However, there are often times when we need much more advanced functionality, such as mocking a series of calls or checking the order of specific calls. Fortunately, Python has us covered with the unittest.mock module that is supplied as a part of the standard library. Using the Python mocking framework The unittest.mock module provided by Python is an extremely powerful mocking framework, yet at the same time it is very easy to use. Let us redo our tests using this library. First, we need to import the mock module at the top of our file as follows: from unittest import mock Next, we rewrite our first test as follows: class EventTest(unittest.TestCase):    def test_a_listener_is_notified_when_an_event_is_raised(self):        listener = mock.Mock()        event = Event()        event.connect(listener)        event.fire()        self.assertTrue(listener.called) The only change that we've made is to replace our own custom Mock class with the mock.Mock class provided by Python. That is it. With that single line change, our test is now using the inbuilt mocking class. The unittest.mock.Mock class is the core of the Python mocking framework. All we need to do is to instantiate the class and pass it in where it is required. The mock will record if it was called in the called instance variable. How do we check that the right parameters were passed? 
Let us look at the rewrite of the second test as follows:    def test_a_listener_is_passed_right_parameters(self):        listener = mock.Mock()        event = Event()        event.connect(listener)        event.fire(5, shape="square")        listener.assert_called_with(5, shape="square") The mock object automatically records the parameters that were passed in. We can assert on the parameters by using the assert_called_with method on the mock object. The method will raise an assertion error if the parameters don't match what was expected. In case we are not interested in testing the parameters (maybe we just want to check that the method was called), then we can pass the value mock.ANY. This value will match any parameter passed. There is a subtle difference in the way normal assertions are called compared to assertions on mocks. Normal assertions are defined as a part of the unittest.Testcase class. Since our tests inherit from that class, we call the assertions on self, for example, self.assertEquals. On the other hand, the mock assertion methods are a part of the mock object, so you call them on the mock object, for example, listener.assert_called_with. Mock objects have the following four assertions available out of the box: assert_called_with: This method asserts that the last call was made with the given parameters assert_called_once_with: This assertion checks that the method was called exactly once and was with the given parameters assert_any_call: This checks that the given call was made at some point during the execution assert_has_calls: This assertion checks that a list of calls occurred The four assertions are very subtly different, and that shows up when the mock has been called more than one. The assert_called_with method only checks the last call, so if there was more than one call, then the previous calls will not be asserted. The assert_any_call method will check if a call with the given parameters occurred anytime during execution. The assert_called_once_with assertion asserts for a single call, so if the mock was called more than once during execution, then this assert would fail. The assert_has_calls assertion can be used to assert that a set of calls with the given parameters occurred. Note that there might have been more calls than what we checked for in the assertion, but the assertion would still pass as long as the given calls are present. Let us take a closer look at the assert_has_calls assertion. Here is how we can write the same test using this assertion:    def test_a_listener_is_passed_right_parameters(self):        listener = mock.Mock()        event = Event()        event.connect(listener)        event.fire(5, shape="square")        listener.assert_has_calls([mock.call(5, shape="square")]) The mocking framework internally uses _Call objects to record calls. The mock.call function is a helper to create these objects. We just call it with the expected parameters to create the required call objects. We can then use these objects in the assert_has_calls assertion to assert that the expected call occurred. This method is useful when the mock was called multiple times and we want to assert only some of the calls. Mocking objects While testing the Event class, we only needed to mock out single functions. A more common use of mocking is to mock a class. 
Take a look at the implementation of the Alert class in the following: class Alert:    """Maps a Rule to an Action, and triggers the action if the rule    matches on any stock update"""      def __init__(self, description, rule, action):        self.description = description        self.rule = rule        self.action = action      def connect(self, exchange):        self.exchange = exchange        dependent_stocks = self.rule.depends_on()        for stock in dependent_stocks:            exchange[stock].updated.connect(self.check_rule)      def check_rule(self, stock):        if self.rule.matches(self.exchange):            self.action.execute(self.description) Let's break down how this class works as follows: The Alert class takes a Rule and an Action in the initializer. When the connect method is called, it takes all the dependent stocks and connects to their updated event. The updated event is an instance of the Event class that we saw earlier. Each Stock class has an instance of this event, and it is fired whenever a new update is made to that stock. The listener for this event is the self.check_rule method of the Alert class. In this method, the alert checks if the new update caused a rule to be matched. If the rule matched, it calls the execute method on the Action. Otherwise, nothing happens. This class has a few requirements, as shown in the following, that need to be met. Each of these needs to be made into a unit test. If a stock is updated, the class should check if the rule matches If the rule matches, then the corresponding action should be executed If the rule doesn't match, then nothing happens There are a number of different ways in which we could test this; let us go through some of the options. The first option is not to use mocks at all. We could create a rule, hook it up to a test action, and then update the stock and verify that the action was executed. The following is what such a test would look like: import unittest from datetime import datetime from unittest import mock   from ..alert import Alert from ..rule import PriceRule from ..stock import Stock   class TestAction:    executed = False      def execute(self, description):        self.executed = True   class AlertTest(unittest.TestCase):    def test_action_is_executed_when_rule_matches(self):        exchange = {"GOOG": Stock("GOOG")}        rule = PriceRule("GOOG", lambda stock: stock.price > 10)       action = TestAction()        alert = Alert("sample alert", rule, action)        alert.connect(exchange)        exchange["GOOG"].update(datetime(2014, 2, 10), 11)        self.assertTrue(action.executed) This is the most straightforward option, but it requires a bit of code to set up and there is the TestAction that we need to create just for the test case. Instead of creating a test action, we could instead replace it with a mock action. We can then simply assert on the mock that it got executed. The following code shows this variation of the test case:    def test_action_is_executed_when_rule_matches(self):        exchange = {"GOOG": Stock("GOOG")}        rule = PriceRule("GOOG", lambda stock: stock.price > 10)        action = mock.MagicMock()       alert = Alert("sample alert", rule, action)        alert.connect(exchange)        exchange["GOOG"].update(datetime(2014, 2, 10), 11)        action.execute.assert_called_with("sample alert") A couple of observations about this test: If you notice, alert is not the usual Mock object that we have been using so far, but a MagicMock object. 
A MagicMock object is like a Mock object but it has special support for Python's magic methods which are present on all classes, such as __str__, hasattr. If we don't use MagicMock, we may sometimes get errors or strange behavior if the code uses any of these methods. The following example illustrates the difference: >>> from unittest import mock >>> mock_1 = mock.Mock() >>> mock_2 = mock.MagicMock() >>> len(mock_1) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: object of type 'Mock' has no len() >>> len(mock_2) 0 >>>  In general, we will be using MagicMock in most places where we need to mock a class. Using Mock is a good option when we need to mock stand alone functions, or in rare situations where we specifically don't want a default implementation for the magic methods. The other observation about the test is the way methods are handled. In the test above, we created a mock action object, but we didn't specify anywhere that this mock class should contain an execute method and how it should behave. In fact, we don't need to. When a method or attribute is accessed on a mock object, Python conveniently creates a mock method and adds it to the mock class. Therefore, when the Alert class calls the execute method on our mock action object, that method is added to our mock action. We can then check that the method was called by asserting on action.execute.called. The downside of Python's behavior of automatically creating mock methods when they are accessed is that a typo or change in interface can go unnoticed. For example, suppose we rename the execute method in all the Action classes to run. But if we run our test cases, it still passes. Why does it pass? Because the Alert class calls the execute method, and the test only checks that the execute method was called, which it was. The test does not know that the name of the method has been changed in all the real Action implementations and that the Alert class will not work when integrated with the actual actions. To avoid this problem, Python supports using another class or object as a specification. When a specification is given, the mock object only creates the methods that are present in the specification. All other method or attribute accesses will raise an error. Specifications are passed to the mock at initialization time via the spec parameter. Both the Mock as well as MagicMock classes support setting a specification. The following code example shows the difference when a spec parameter is set compared to a default Mock object: >>> from unittest import mock >>> class PrintAction: ...     def run(self, description): ...         print("{0} was executed".format(description)) ...   >>> mock_1 = mock.Mock() >>> mock_1.execute("sample alert") # Does not give an error <Mock name='mock.execute()' id='54481752'>   >>> mock_2 = mock.Mock(spec=PrintAction) >>> mock_2.execute("sample alert") # Gives an error Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:Python34libunittestmock.py", line 557, in __getattr__    raise AttributeError("Mock object has no attribute %r" % name) AttributeError: Mock object has no attribute 'execute' Notice in the above example that mock_1 goes ahead and executes the execute method without any error, even though the method has been renamed in the PrintAction. On the other hand, by giving a spec, the method call to the nonexistent execute method raises an exception. 
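To tie this back to the Alert test, the following is a minimal sketch (not from the book's sample code) of the mocked test variant shown earlier, this time using the TestAction class defined earlier in this article as the spec. If the execute method were renamed to run on the action classes, the spec'd mock would raise an AttributeError instead of letting the test pass silently:

   def test_action_is_executed_when_rule_matches(self):
       exchange = {"GOOG": Stock("GOOG")}
       rule = PriceRule("GOOG", lambda stock: stock.price > 10)
       # spec=TestAction restricts the mock to the methods TestAction defines
       action = mock.MagicMock(spec=TestAction)
       alert = Alert("sample alert", rule, action)
       alert.connect(exchange)
       exchange["GOOG"].update(datetime(2014, 2, 10), 11)
       action.execute.assert_called_with("sample alert")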
Mocking return values The second variant above showed how we could use a mock Action class in the test instead of a real one. In the same way, we can also use a mock rule instead of creating a PriceRule in the test. The alert calls the rule to see whether the new stock update caused the rule to be matched. What the alert does depends on whether the rule returned True or False. All the mocks we've created so far have not had to return a value. We were just interested in whether the right call was made or not. If we mock the rule, then we will have to configure it to return the right value for the test. Fortunately, Python makes that very simple to do. All we have to do is to set the return value as a parameter in the constructor to the mock object as follows: >>> matches = mock.Mock(return_value=True) >>> matches() True >>> matches(4) True >>> matches(4, "abcd") True As we can see above, the mock just blindly returns the set value, irrespective of the parameters. Even the type or number of parameters is not considered. We can use the same procedure to set the return value of a method in a mock object as follows: >>> rule = mock.MagicMock() >>> rule.matches = mock.Mock(return_value=True) >>> rule.matches() True >>>  There is another way to set the return value, which is very convenient when dealing with methods in mock objects. Each mock object has a return_value attribute. We simply set this attribute to the return value and every call to the mock will return that value, as shown in the following: >>> from unittest import mock >>> rule = mock.MagicMock() >>> rule.matches.return_value = True >>> rule.matches() True >>>  In the example above, the moment we access rule.matches, Python automatically creates a mock matches object and puts it in the rule object. This allows us to directly set the return value in one statement without having to create a mock for the matches method. Now that we've seen how to set the return value, we can go ahead and change our test to use a mocked rule object, as shown in the following:    def test_action_is_executed_when_rule_matches(self):        exchange = {"GOOG": Stock("GOOG")}        rule = mock.MagicMock(spec=PriceRule)        rule.matches.return_value = True        rule.depends_on.return_value = {"GOOG"}        action = mock.MagicMock()        alert = Alert("sample alert", rule, action)        alert.connect(exchange)        exchange["GOOG"].update(datetime(2014, 2, 10), 11)        action.execute.assert_called_with("sample alert") There are two calls that the Alert makes to the rule: one to the depends_on method and the other to the matches method. We set the return value for both of them and the test passes. In case no return value is explicitly set for a call, the default return value is to return a new mock object. The mock object is different for each method that is called, but is consistent for a particular method. This means if the same method is called multiple times, the same mock object will be returned each time. Mocking side effects Finally, we come to the Stock class. This is the final dependency of the Alert class. We're currently creating Stock objects in our test, but we could replace it with a mock object just like we did for the Action and PriceRule classes. The Stock class is again slightly different in behavior from the other two mock objects. The update method doesn't just return a value—it's primary behavior in this test is to trigger the updated event. Only if this event is triggered will the rule check occur. 
In order to do this, we must tell our mock stock class to fire the event when the update event is called. Mock objects have a side_effect attribute to enable us to do just this. There are many reasons we might want to set a side effect. Some of them are as follows: We may want to call another method, like in the case of the Stock class, which needs to fire the event when the update method is called. To raise an exception: this is particularly useful when testing error situations. Some errors such as a network timeout might be very difficult to simulate, and it is better to test using a mock that simply raises the appropriate exception. To return multiple values: these may be different values each time the mock is called, or specific values, depending on the parameters passed. Setting the side effect is just like setting the return value. The only difference is that the side effect is a lambda function. When the mock is executed, the parameters are passed to the lambda function and the lambda is executed. The following is how we would use this with a mocked out Stock class:    def test_action_is_executed_when_rule_matches(self):        goog = mock.MagicMock(spec=Stock)        goog.updated = Event()        goog.update.side_effect = lambda date, value:                goog.updated.fire(self)        exchange = {"GOOG": goog}      rule = mock.MagicMock(spec=PriceRule)        rule.matches.return_value = True        rule.depends_on.return_value = {"GOOG"}        action = mock.MagicMock()        alert = Alert("sample alert", rule, action)        alert.connect(exchange)         exchange["GOOG"].update(datetime(2014, 2, 10), 11)        action.execute.assert_called_with("sample alert") So what is going on in that test? First, we create a mock of the Stock class instead of using the real one. Next, we add in the updated event. We need to do this because the Stock class creates the attribute at runtime in the __init__ scope. Because the attribute is set dynamically, MagicMock does not pick up the attribute from the spec parameter. We are setting an actual Event object here. We could set it as a mock as well, but it is probably overkill to do that. Finally, we set the side effect for the update method in the mock stock object. The lambda takes the two parameters that the method does. In this particular example, we just want to fire the event, so the parameters aren't used in the lambda. In other cases, we might want to perform different actions based on the values of the parameters. Setting the side_effect attribute allows us to do that. Just like with the return_value attribute, the side_effect attribute can also be set in the constructor. Run the test and it should pass. The side_effect attribute can also be set to an exception or a list. If it is set to an exception, then the given exception will be raised when the mock is called, as shown in the following: >>> m = mock.Mock() >>> m.side_effect = Exception() >>> m() Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:Python34libunittestmock.py", line 885, in __call__    return _mock_self._mock_call(*args, **kwargs) File "C:Python34libunittestmock.py", line 941, in _mock_call    raise effect Exception If it is set to a list, then the mock will return the next element of the list each time it is called. 
This is a good way to mock a function that has to return different values each time it is called, as shown in the following: >>> m = mock.Mock() >>> m.side_effect = [1, 2, 3] >>> m() 1 >>> m() 2 >>> m() 3 >>> m() Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:Python34libunittestmock.py", line 885, in __call__    return _mock_self._mock_call(*args, **kwargs) File "C:Python34libunittestmock.py", line 944, in _mock_call    result = next(effect) StopIteration As we have seen, the mocking framework's method of handling side effects using the side_effect attribute is very simple, yet quite powerful. How much mocking is too much? In the previous few sections, we've seen the same test written with different levels of mocking. We started off with a test that didn't use any mocks at all, and subsequently mocked out each of the dependencies one by one. Which one of these solutions is the best? As with many things, this is a point of personal preference. A purist would probably choose to mock out all dependencies. My personal preference is to use real objects when they are small and self-contained. I would not have mocked out the Stock class. This is because mocks generally require some configuration with return values or side effects, and this configuration can clutter the test and make it less readable. For small, self-contained classes, it is simpler to just use the real object. At the other end of the spectrum, classes that might interact with external systems, or that take a lot of memory, or are slow are good candidates for mocking out. Additionally, objects that require a lot of dependencies on other object to initialize are candidates for mocking. With mocks, you just create an object, pass it in, and assert on parts that you are interested in checking. You don't have to create an entirely valid object. Even here there are alternatives to mocking. For example, when dealing with a database, it is common to mock out the database calls and hardcode a return value into the mock. This is because the database might be on another server, and accessing it makes the tests slow and unreliable. However, instead of mocks, another option could be to use a fast in-memory database for the tests. This allows us to use a live database instead of a mocked out database. Which approach is better depends on the situation. Mocks versus stubs versus fakes versus spies We've been talking about mocks so far, but we've been a little loose on the terminology. Technically, everything we've talked about falls under the category of a test double. A test double is some sort of fake object that we use to stand in for a real object in a test case. Mocks are a specific kind of test double that record information about calls that have been made to it, so that we can assert on them later. Stubs are just an empty do-nothing kind of object or method. They are used when we don't care about some functionality in the test. For example, imagine we have a method that performs a calculation and then sends an e-mail. If we are testing the calculation logic, we might just replace the e-mail sending method with an empty do-nothing method in the test case so that no e-mails are sent out while the test is running. Fakes are a replacement of one object or system with a simpler one that facilitates easier testing. Using an in-memory database instead of the real one, or the way we created a dummy TestAction earlier in this article would be examples of fakes. Finally, spies are objects that are like middlemen. 
Like mocks, they record the calls so that we can assert on them later, but after recording, they pass execution on to the original code. Spies are different from the other three in the sense that they do not replace any functionality. After recording the call, the real code is still executed. Spies sit in the middle and do not cause any change in execution pattern. Summary In this article, you looked at how to use mocks to test interactions between objects. You saw how to hand-write your own mocks, followed by using the mocking framework provided in the Python standard library. Resources for Article: Further resources on this subject: Analyzing a Complex Dataset [article] Solving problems – closest good restaurant [article] Importing Dynamic Data [article]

Third Party Libraries

Packt
21 Apr 2015
21 min read
In this article by Nathan Rozentals, author of the book Mastering TypeScript, the author believes that our TypeScript development environment would not amount to much if we were not able to reuse the myriad of existing JavaScript libraries, frameworks and general goodness. However, in order to use a particular third party library with TypeScript, we will first need a matching definition file. Soon after TypeScript was released, Boris Yankov set up a github repository to house TypeScript definition files for third party JavaScript libraries. This repository, named DefinitelyTyped (https://github.com/borisyankov/DefinitelyTyped) quickly became very popular, and is currently the place to go for high-quality definition files. DefinitelyTyped currently has over 700 definition files, built up over time from hundreds of contributors from all over the world. If we were to measure the success of TypeScript within the JavaScript community, then the DefinitelyTyped repository would be a good indication of how well TypeScript has been adopted. Before you go ahead and try to write your own definition files, check the DefinitelyTyped repository to see if there is one already available. In this article, we will have a closer look at using these definition files, and cover the following topics: Choosing a JavaScript Framework Using TypeScript with Backbone Using TypeScript with Angular (For more resources related to this topic, see here.) Using third party libraries In this section of the article, we will begin to explore some of the more popular third party JavaScript libraries, their declaration files, and how to write compatible TypeScript for each of these frameworks. We will compare Backbone, and Angular which are all frameworks for building rich client-side JavaScript applications. During our discussion, we will see that some frameworks are highly compliant with the TypeScript language and its features, some are partially compliant, and some have very low compliance. Choosing a JavaScript framework Choosing a JavaScript framework or library to develop Single Page Applications is a difficult and sometimes daunting task. It seems that there is a new framework appearing every other month, promising more and more functionality for less and less code. To help developers compare these frameworks, and make an informed choice, Addy Osmani wrote an excellent article, named Journey Through the JavaScript MVC Jungle. (http://www.smashingmagazine.com/2012/07/27/journey-through-the-javascript-mvc-jungle/). In essence, his advice is simple – it's a personal choice – so try some frameworks out, and see what best fits your needs, your programming mindset, and your existing skill set. The TodoMVC project (http://todomvc.com), which Addy started, does an excellent job of implementing the same application in a number of MV* JavaScript frameworks. This really is a reference site for digging into a fully working application, and comparing for yourself the coding techniques and styles of different frameworks. Again, depending on the JavaScript library that you are using within TypeScript, you may need to write your TypeScript code in a specific way. Bear this in mind when choosing a framework - if it is difficult to use with TypeScript, then you may be better off looking at another framework with better integration. If it is easy and natural to work with the framework in TypeScript, then your productivity and overall development experience will be much better. 
We will look at some of the popular JavaScript libraries, along with their declaration files, and see how to write compatible TypeScript. The key thing to remember is that TypeScript generates JavaScript - so if you are battling to use a third party library, then crack open the generated JavaScript and see what the JavaScript code looks like that TypeScript is emitting. If the generated JavaScript matches the JavaScript code samples in the library's documentation, then you are on the right track. If not, then you may need to modify your TypeScript until the compiled JavaScript starts matching up with the samples. When trying to write TypeScript code for a third party JavaScript framework – particularly if you are working off the JavaScript documentation – your initial foray may just be one of trial and error. Along the way, you may find that you need to write your TypeScript in a specific way in order to match this particular third party library. The rest of this article shows how three different libraries require different ways of writing TypeScript. Backbone Backbone is a popular JavaScript library that gives structure to web applications by providing models, collections and views, amongst other things. Backbone has been around since 2010, and has gained a very large following, with a wealth of commercial websites using the framework. According to Infoworld.com, Backbone has over 1,600 Backbone related projects on GitHub that rate over 3 stars - meaning that it has a vast ecosystem of extensions and related libraries. Let's take a quick look at Backbone written in TypeScript. To follow along with the code in your own project, you will need to install the following NuGet packages: backbone.js ( currently at v1.1.2), and backbone.TypeScript.DefinitelyTyped (currently at version 1.2.3). Using inheritance with Backbone From the Backbone documentation, we find an example of creating a Backbone.Model in JavaScript as follows: var Note = Backbone.Model.extend(    {        initialize: function() {            alert("Note Model JavaScript initialize");        },        author: function () { },        coordinates: function () { },        allowedToEdit: function(account) {            return true;        }    } ); This code shows a typical usage of Backbone in JavaScript. We start by creating a variable named Note that extends (or derives from) Backbone.Model. This can be seen with the Backbone.Model.extend syntax. The Backbone extend function uses JavaScript object notation to define an object within the outer curly braces { … }. In the preceding code, this object has four functions: initialize, author, coordinates and allowedToEdit. According to the Backbone documentation, the initialize function will be called once a new instance of this class is created. The initialize function simply creates an alert to indicate that the function was called. The author and coordinates functions are blank at this stage, with only the allowedToEdit function actually doing something: return true. If we were to simply copy and paste the above JavaScript into a TypeScript file, we would generate the following compile error: Build: 'Backbone.Model.extend' is inaccessible. When working with a third party library, and a definition file from DefinitelyTyped, our first port of call should be to see if the definition file may be in error. After all, the JavaScript documentation says that we should be able to use the extend method as shown, so why is this definition file causing an error? 
If we open up the backbone.d.ts file, and then search to find the definition of the class Model, we will find the cause of the compilation error: class Model extends ModelBase {      /**    * Do not use, prefer TypeScript's extend functionality.    **/    private static extend(        properties: any, classProperties?: any): any; This declaration file snippet shows some of the definition of the Backbone Model class. Here, we can see that the extend function is defined as private static, and as such, it will not be available outside the Model class itself. This, however, seems contradictory to the JavaScript sample that we saw in the documentation. In the preceding comment on the extend function definition, we find the key to using Backbone in TypeScript: prefer TypeScript's extend functionality. This comment indicates that the declaration file for Backbone is built around TypeScript's extends keyword – thereby allowing us to use natural TypeScript inheritance syntax to create Backbone objects. The TypeScript equivalent to this code, therefore, must use the extends TypeScript keyword to derive a class from the base class Backbone.Model, as follows: class Note extends Backbone.Model {    initialize() {      alert("Note model Typescript initialize");    }    author() { }    coordinates() { }    allowedToEdit(account) {        return true;    } } We are now creating a class definition named Note that extends the Backbone.Model base class. This class then has the functions initialize, author, coordinates and allowedToEdit, similar to the previous JavaScript version. Our Backbone sample will now compile and run correctly. With either of these versions, we can create an instance of the Note object by including the following script within an HTML page: <script type="text/javascript">    $(document).ready( function () {        var note = new Note();    }); </script> This JavaScript sample simply waits for the jQuery document.ready event to be fired, and then creates an instance of the Note class. As documented earlier, the initialize function will be called when an instance of the class is constructed, so we would see an alert box appear when we run this in a browser. All of Backbone's core objects are designed with inheritance in mind. This means that creating new Backbone collections, views and routers will use the same extends syntax in TypeScript. Backbone, therefore, is a very good fit for TypeScript, because we can use natural TypeScript syntax for inheritance to create new Backbone objects. Using interfaces As Backbone allows us to use TypeScript inheritance to create objects, we can just as easily use TypeScript interfaces with any of our Backbone objects as well. Extracting an interface for the Note class above would be as follows: interface INoteInterface {    initialize();    author();    coordinates();    allowedToEdit(account: string); } We can now update our Note class definition to implement this interface as follows: class Note extends Backbone.Model implements INoteInterface {    // existing code } Our class definition now implements the INoteInterface TypeScript interface. This simple change protects our code from being modified inadvertently, and also opens up the ability to work with core Backbone objects in standard object-oriented design patterns. We could, if we needed to, apply the Factory Pattern to return a particular type of Backbone Model – or any other Backbone object for that matter. 
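As a quick illustration of that last point, the following is a minimal sketch (ReadOnlyNote and NoteFactory are invented names for this example and are not part of Backbone or the book's sample code) of a simple factory that returns different Backbone models through the INoteInterface defined above:

class ReadOnlyNote extends Backbone.Model implements INoteInterface {
   initialize() { }
   author() { }
   coordinates() { }
   allowedToEdit(account: string) {
       return false;
   }
}

class NoteFactory {
   static create(readOnly: boolean): INoteInterface {
       // Both branches satisfy INoteInterface, so callers depend only on the interface
       return readOnly ? new ReadOnlyNote() : new Note();
   }
}

Because both classes implement the same interface, calling code can work against INoteInterface without caring which concrete Backbone model it receives.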
Using generic syntax The declaration file for Backbone has also added generic syntax to some class definitions. This brings with it further strong typing benefits when writing TypeScript code for Backbone. Backbone collections (surprise, surprise) house a collection of Backbone models, allowing us to define collections in TypeScript as follows: class NoteCollection extends Backbone.Collection<Note> {    model = Note;    //model: Note; // generates compile error    //model: { new (): Note }; // ok } Here, we have a NoteCollection that derives from, or extends a Backbone.Collection, but also uses generic syntax to constrain the collection to handle only objects of type Note. This means that any of the standard collection functions such as at() or pluck() will be strongly typed to return Note models, further enhancing our type safety and Intellisense. Note the syntax used to assign a type to the internal model property of the collection class on the second line. We cannot use the standard TypeScript syntax model: Note, as this causes a compile time error. We need to assign the model property to a the class definition, as seen with the model=Note syntax, or we can use the { new(): Note } syntax as seen on the last line. Using ECMAScript 5 Backbone also allows us to use ECMAScript 5 capabilities to define getters and setters for Backbone.Model classes, as follows: interface ISimpleModel {    Name: string;    Id: number; } class SimpleModel extends Backbone.Model implements ISimpleModel {    get Name() {        return this.get('Name');    }    set Name(value: string) {        this.set('Name', value);    }    get Id() {        return this.get('Id');    }    set Id(value: number) {        this.set('Id', value);    } } In this snippet, we have defined an interface with two properties, named ISimpleModel. We then define a SimpleModel class that derives from Backbone.Model, and also implements the ISimpleModel interface. We then have ES 5 getters and setters for our Name and Id properties. Backbone uses class attributes to store model values, so our getters and setters simply call the underlying get and set methods of Backbone.Model. Backbone TypeScript compatibility Backbone allows us to use all of TypeScript's language features within our code. We can use classes, interfaces, inheritance, generics and even ECMAScript 5 properties. All of our classes also derive from base Backbone objects. This makes Backbone a highly compatible library for building web applications with TypeScript. Angular AngularJs (or just Angular) is also a very popular JavaScript framework, and is maintained by Google. Angular takes a completely different approach to building JavaScript SPA's, introducing an HTML syntax that the running Angular application understands. This provides the application with two-way data binding capabilities, which automatically synchronizes models, views and the HTML page. Angular also provides a mechanism for Dependency Injection (DI), and uses services to provide data to your views and models. 
The example provided in the tutorial shows the following JavaScript: var phonecatApp = angular.module('phonecatApp', []); phonecatApp.controller('PhoneListCtrl', function ($scope) { $scope.phones = [    {'name': 'Nexus S',      'snippet': 'Fast just got faster with Nexus S.'},    {'name': 'Motorola XOOM™ with Wi-Fi',      'snippet': 'The Next, Next Generation tablet.'},    {'name': 'MOTOROLA XOOM™',      'snippet': 'The Next, Next Generation tablet.'} ]; }); This code snippet is typical of Angular JavaScript syntax. We start by creating a variable named phonecatApp, and register this as an Angular module by calling the module function on the angular global instance. The first argument to the module function is a global name for the Angular module, and the empty array is a place-holder for other modules that will be injected via Angular's Dependency Injection routines. We then call the controller function on the newly created phonecatApp variable with two arguments. The first argument is the global name of the controller, and the second argument is a function that accepts a specially named Angular variable named $scope. Within this function, the code sets the phones object of the $scope variable to be an array of JSON objects, each with a name and snippet property. If we continue reading through the tutorial, we find a unit test that shows how the PhoneListCtrl controller is used: describe('PhoneListCtrl', function(){    it('should create "phones" model with 3 phones', function() {      var scope = {},          ctrl = new PhoneListCtrl(scope);        expect(scope.phones.length).toBe(3); });   }); The first two lines of this code snippet use a global function called describe, and within this function another function called it. These two functions are part of a unit testing framework named Jasmine. We declare a variable named scope to be an empty JavaScript object, and then a variable named ctrl that uses the new keyword to create an instance of our PhoneListCtrl class. The new PhoneListCtrl(scope) syntax shows that Angular is using the definition of the controller just like we would use a normal class in TypeScript. Building the same object in TypeScript would allow us to use TypeScript classes, as follows: var phonecatApp = angular.module('phonecatApp', []);   class PhoneListCtrl {    constructor($scope) {        $scope.phones = [            { 'name': 'Nexus S',              'snippet': 'Fast just got faster' },            { 'name': 'Motorola',              'snippet': 'Next generation tablet' },            { 'name': 'Motorola Xoom',              'snippet': 'Next, next generation tablet' }        ];    } }; Our first line is the same as in our previous JavaScript sample. We then, however, use the TypeScript class syntax to create a class named PhoneListCtrl. By creating a TypeScript class, we can now use this class as shown in our Jasmine test code: ctrl = new PhoneListCtrl(scope). 
The constructor function of our PhoneListCtrl class now acts as the anonymous function seen in the original JavaScript sample: phonecatApp.controller('PhoneListCtrl', function ($scope) {    // this function is replaced by the constructor } Angular classes and $scope Let's expand our PhoneListCtrl class a little further, and have a look at what it would look like when completed: class PhoneListCtrl {    myScope: IScope;    constructor($scope, $http: ng.IHttpService, Phone) {        this.myScope = $scope;        this.myScope.phones = Phone.query();        $scope.orderProp = 'age';          _.bindAll(this, 'GetPhonesSuccess');    }    GetPhonesSuccess(data: any) {       this.myScope.phones = data;    } }; The first thing to note in this class, is that we are defining a variable named myScope, and storing the $scope argument that is passed in via the constructor, into this internal variable. This is again because of JavaScript's lexical scoping rules. Note the call to _.bindAll at the end of the constructor. This Underscore utility function will ensure that whenever the GetPhonesSuccess function is called, it will use the variable this in the context of the class instance, and not in the context of the calling code. The GetPhonesSuccess function uses the this.myScope variable within its implementation. This is why we needed to store the initial $scope argument in an internal variable. Another thing we notice from this code, is that the myScope variable is typed to an interface named IScope, which will need to be defined as follows: interface IScope {    phones: IPhone[]; } interface IPhone {    age: number;    id: string;    imageUrl: string;    name: string;    snippet: string; }; This IScope interface just contains an array of objects of type IPhone (pardon the unfortunate name of this interface – it can hold Android phones as well). What this means is that we don't have a standard interface or TypeScript type to use when dealing with $scope objects. By its nature, the $scope argument will change its type depending on when and where the Angular runtime calls it, hence our need to define an IScope interface, and strongly type the myScope variable to this interface. Another interesting thing to note on the constructor function of the PhoneListCtrl class is the type of the $http argument. It is set to be of type ng.IHttpService. This IHttpService interface is found in the declaration file for Angular. In order to use TypeScript with Angular variables such as $scope or $http, we need to find the matching interface within our declaration file, before we can use any of the Angular functions available on these variables. The last point to note in this constructor code is the final argument, named Phone. It does not have a TypeScript type assigned to it, and so automatically becomes of type any. Let's take a quick look at the implementation of this Phone service, which is as follows: var phonecatServices =     angular.module('phonecatServices', ['ngResource']);   phonecatServices.factory('Phone',    [        '$resource', ($resource) => {            return $resource('phones/:phoneId.json', {}, {                query: {                    method: 'GET',                    params: {                        phoneId: 'phones'                    },                    isArray: true                }            });        }    ] ); The first line of this code snippet again creates a global variable named phonecatServices, using the angular.module global function. 
We then call the factory function available on the phonecatServices variable, in order to define our Phone resource. This factory function uses a string named 'Phone' to define the Phone resource, and then uses Angular's dependency injection syntax to inject a $resource object. Looking through this code, we can see that we cannot easily create standard TypeScript classes for Angular to use here. Nor can we use standard TypeScript interfaces or inheritance on this Angular service. Angular TypeScript compatibility When writing Angular code with TypeScript, we are able to use classes in certain instances, but must rely on the underlying Angular functions such as module and factory to define our objects in other cases. Also, when using standard Angular services, such as $http or $resource, we will need to specify the matching declaration file interface in order to use these services. We can therefore describe the Angular library as having medium compatibility with TypeScript. Inheritance – Angular versus Backbone Inheritance is a very powerful feature of object-oriented programming, and is also a fundamental concept when using JavaScript frameworks. Using a Backbone controller or an Angular controller within each framework relies on certain characteristics, or functions being available. Each framework implements inheritance in a different way. As JavaScript does not have the concept of inheritance, each framework needs to find a way to implement it, so that the framework can allow us to extend base classes and their functionality. In Backbone, this inheritance implementation is via the extend function of each Backbone object. The TypeScript extends keyword follows a similar implementation to Backbone, allowing the framework and language to dovetail each other. Angular, on the other hand, uses its own implementation of inheritance, and defines functions on the angular global namespace to create classes (that is angular.module). We can also sometimes use the instance of an application (that is <appName>.controller) to create modules or controllers. We have found, though, that Angular uses controllers in a very similar way to TypeScript classes, and we can therefore simply create standard TypeScript classes that will work within an Angular application. So far, we have only skimmed the surface of both the Angular TypeScript syntax and the Backbone TypeScript syntax. The point of this exercise was to try and understand how TypeScript can be used within each of these two third party frameworks. Be sure to visit http://todomvc.com, and have a look at the full source-code for the Todo application written in TypeScript for both Angular and Backbone. They can be found on the Compile-to-JS tab in the example section. These running code samples, combined with the documentation on each of these sites, will prove to be an invaluable resource when trying to write TypeScript syntax with an external third party library such as Angular or Backbone. Angular 2.0 The Microsoft TypeScript team and the Google Angular team have just completed a months long partnership, and have announced that the upcoming release of Angular, named Angular 2.0, will be built using TypeScript. Originally, Angular 2.0 was going to use a new language named AtScript for Angular development. During the collaboration work between the Microsoft and Google teams, however, the features of AtScript that were needed for Angular 2.0 development have now been implemented within TypeScript. 
This means that the Angular 2.0 library will be classed as highly compatible with TypeScript, once the Angular 2.0 library and the 1.5 edition of the TypeScript compiler are available. Summary In this article, we looked at two third party libraries, and discussed how to integrate these libraries with TypeScript. We explored Backbone, which can be categorized as a highly compliant third party library, and Angular, which is a partially compliant library. Resources for Article: Further resources on this subject: Optimizing JavaScript for iOS Hybrid Apps [article] Introduction to TypeScript [article] Getting Ready with CoffeeScript [article]

Our First API in Go

Packt
14 Apr 2015
15 min read
This article is penned by Nathan Kozyra, the author of the book, Mastering Go Web Services. This quickly introduces—or reintroduces—some core concepts related to Go setup and usage as well as the http package. (For more resources related to this topic, see here.) If you spend any time developing applications on the Web (or off it, for that matter), it won't be long before you find yourself facing the prospect of interacting with a web service or an API. Whether it's a library that you need or another application's sandbox with which you have to interact, the world of development relies in no small part on the cooperation among dissonant applications, languages, and formats. That, after all, is why we have APIs to begin with—to allow standardized communication between any two given platforms. If you spend a long amount of time working on the Web, you'll encounter bad APIs. By bad we mean APIs that are not all-inclusive, do not adhere to best practices and standards, are confusing semantically, or lack consistency. You'll encounter APIs that haphazardly use OAuth or simple HTTP authentication in some places and the opposite in others, or more commonly, APIs that ignore the stated purposes of HTTP verbs. Google's Go language is particularly well suited to servers. With its built-in HTTP serving, a simple method for XML and JSON encoding of data, high availability, and concurrency, it is the ideal platform for your API. We will cover the following topics in this article: Understanding requirements and dependencies Introducing the HTTP package Understanding requirements and dependencies Before we get too deep into the weeds in this article, it would be a good idea for us to examine the things that you will need to have installed. Installing Go It should go without saying that we will need to have the Go language installed. However, there are a few associated items that you will also need to install in order to do everything we do in this book. Go is available for Mac OS X, Windows, and most common Linux variants. You can download the binaries at http://golang.org/doc/install. On Linux, you can generally grab Go through your distribution's package manager. For example, you can grab it on Ubuntu with a simple apt-get install golang command. Something similar exists for most distributions. In addition to the core language, we'll also work a bit with the Google App Engine, and the best way to test with the App Engine is to install the Software Development Kit (SDK). This will allow us to test our applications locally prior to deploying them and simulate a lot of the functionality that is provided only on the App Engine. The App Engine SDK can be downloaded from https://developers.google.com/appengine/downloads. While we're obviously most interested in the Go SDK, you should also grab the Python SDK as there are some minor dependencies that may not be available solely in the Go SDK. Installing and using MySQL We'll be using quite a few different databases and datastores to manage our test and real data, and MySQL will be one of the primary ones. We will use MySQL as a storage system for our users; their messages and their relationships will be stored in our larger application (we will discuss more about this in a bit). MySQL can be downloaded from http://dev.mysql.com/downloads/. 
You can also grab it easily from a package manager on Linux/OS X as follows: Ubuntu: sudo apt-get install mysql-server mysql-client OS X with Homebrew: brew install mysql Redis Redis is the first of the two NoSQL datastores that we'll be using for a couple of different demonstrations, including caching data from our databases as well as the API output. If you're unfamiliar with NoSQL, we'll do some pretty simple introductions to results gathering using both Redis and Couchbase in our examples. If you know MySQL, Redis will at least feel similar, and you won't need the full knowledge base to be able to use the application in the fashion in which we'll use it for our purposes. Redis can be downloaded from http://redis.io/download. Redis can be downloaded on Linux/OS X using the following: Ubuntu: sudo apt-get install redis-server OS X with Homebrew: brew install redis Couchbase As mentioned earlier, Couchbase will be our second NoSQL solution that we'll use in various products, primarily to set short-lived or ephemeral key store lookups to avoid bottlenecks and as an experiment with in-memory caching. Unlike Redis, Couchbase uses simple REST commands to set and receive data, and everything exists in the JSON format. Couchbase can be downloaded from http://www.couchbase.com/download. For Ubuntu (deb), use the following command to download Couchbase: dpkg -i couchbase-server version.deb For OS X with Homebrew use the following command to download Couchbase: brew install https://github.com/couchbase/homebrew/raw/    stable/Library/Formula/libcouchbase.rb Nginx Although Go comes with everything you need to run a highly concurrent, performant web server, we're going to experiment with wrapping a reverse proxy around our results. We'll do this primarily as a response to the real-world issues regarding availability and speed. Nginx is not available natively for Windows. For Ubuntu, use the following command to download Nginx: apt-get install nginx For OS X with Homebrew, use the following command to download Nginx: brew install nginx Apache JMeter We'll utilize JMeter for benchmarking and tuning our API for performance. You have a bit of a choice here, as there are several stress-testing applications for simulating traffic. The two we'll touch on are JMeter and Apache's built-in Apache Benchmark (AB) platform. The latter is a stalwart in benchmarking but is a bit limited in what you can throw at your API, so JMeter is preferred. One of the things that we'll need to consider when building an API is its ability to stand up to heavy traffic (and introduce some mitigating actions when it cannot), so we'll need to know what our limits are. Apache JMeter can be downloaded from http://jmeter.apache.org/download_jmeter.cgi. Using predefined datasets While it's not entirely necessary to have our dummy dataset, you can save a lot of time as we build our social network by bringing it in because it is full of users, posts, and images. By using this dataset, you can skip creating this data to test certain aspects of the API and API creation. Our dummy dataset can be downloaded at https://github.com/nkozyra/masteringwebservices. Choosing an IDE A choice of Integrated Development Environment (IDE) is one of the most personal choices a developer can make, and it's rare to find a developer who is not steadfastly passionate about their favorite. Nothing in this article will require one IDE over another; indeed, most of Go's strength in terms of compiling, formatting, and testing lies at the command-line level. 
That said, we'd like to at least explore some of the more popular choices for editors and IDEs that exist for Go. Eclipse As one of the most popular and expansive IDEs available for any language, Eclipse is an obvious first mention. Most languages get their support in the form of an Eclipse plugin and Go is no exception. There are some downsides to this monolithic piece of software; it is occasionally buggy on some languages, notoriously slow for some autocompletion functions, and is a bit heavier than most of the other available options. However, the pluses are myriad. Eclipse is very mature and has a gigantic community from which you can seek support when issues arise. Also, it's free to use. Eclipse can be downloaded from http://eclipse.org/ Get the Goclipse plugin at http://goclipse.github.io/ Sublime Text Sublime Text is our particular favorite, but it comes with a large caveat—it is the only one listed here that is not free. This one feels more like a complete code/text editor than a heavy IDE, but it includes code completion options and the ability to integrate the Go compilers (or other languages' compilers) directly into the interface. Although Sublime Text's license costs $70, many developers find its elegance and speed to be well worth it. You can try out the software indefinitely to see if it's right for you; it operates as nagware unless and until you purchase a license. Sublime Text can be downloaded from http://www.sublimetext.com/2. LiteIDE LiteIDE is a much younger IDE than the others mentioned here, but it is noteworthy because it has a focus on the Go language. It's cross-platform and does a lot of Go's command-line magic in the background, making it truly integrated. LiteIDE also handles code autocompletion, go fmt, build, run, and test directly in the IDE and a robust package browser. It's free and totally worth a shot if you want something lean and targeted directly for the Go language. LiteIDE can be downloaded from https://code.google.com/p/golangide/. IntelliJ IDEA Right up there with Eclipse is the JetBrains family of IDE, which has spanned approximately the same number of languages as Eclipse. Ultimately, both are primarily built with Java in mind, which means that sometimes other language support can feel secondary. The Go integration here, however, seems fairly robust and complete, so it's worth a shot if you have a license. If you do not have a license, you can try the Community Edition, which is free. You can download IntelliJ IDEA at http://www.jetbrains.com/idea/download/ The Go language support plugin is available at http://plugins.jetbrains.com/plugin/?idea&id=5047 Some client-side tools Although the vast majority of what we'll be covering will focus on Go and API services, we will be doing some visualization of client-side interactions with our API. In doing so, we'll primarily focus on straight HTML and JavaScript, but for our more interactive points, we'll also rope in jQuery and AngularJS. Most of what we do for client-side demonstrations will be available at this book's GitHub repository at https://github.com/nkozyra/goweb under client. Both jQuery and AngularJS can be loaded dynamically from Google's CDN, which will prevent you from having to download and store them locally. The examples hosted on GitHub call these dynamically. 
To load AngularJS dynamically, use the following code:

<script src="//ajax.googleapis.com/ajax/libs/angularjs/1.2.18/angular.min.js"></script>

To load jQuery dynamically, use the following code:

<script src="//ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>

Looking at our application

In the book, we'll be building myriad small applications to demonstrate points, functions, libraries, and other techniques. However, we'll also focus on a larger project that mimics a social network wherein we create and return users, statuses, and so on, via the API. For that you'll need to have a copy of it.

Setting up our database

As mentioned earlier, we'll be designing a social network that operates almost entirely at the API level (at least at first) as our master project in the book. Time and space wouldn't allow us to cover this here in the article. When we think of the major social networks (from the past and in the present), there are a few omnipresent concepts endemic among them, which are as follows:

The ability to create a user and maintain a user profile
The ability to share messages or statuses and have conversations based on them
The ability to express pleasure or displeasure on the said statuses/messages to dictate the worthiness of any given message

There are a few other features that we'll be building here, but let's start with the basics. Let's create our database in MySQL as follows:

create database social_network;

This will be the basis of our social network product in the book. For now, we'll just need a users table to store our individual users and their most basic information. We'll amend this to include more features as we go along:

CREATE TABLE users (
  user_id INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
  user_nickname VARCHAR(32) NOT NULL,
  user_first VARCHAR(32) NOT NULL,
  user_last VARCHAR(32) NOT NULL,
  user_email VARCHAR(128) NOT NULL,
  PRIMARY KEY (user_id),
  UNIQUE INDEX user_nickname (user_nickname)
)

We won't need to do too much in this article, so this should suffice. We'll have a user's most basic information—name, nickname, and e-mail—and not much else.

Introducing the HTTP package

The vast majority of our API work will be handled through REST, so you should become pretty familiar with Go's http package. In addition to serving via HTTP, the http package comprises a number of other very useful utilities that we'll look at in detail. These include cookie jars, setting up clients, reverse proxies, and more. The primary entity we're interested in right now, though, is the http.Server struct, which provides the very basis of all of our server's actions and parameters. Within the server, we can set our TCP address, HTTP multiplexing for routing specific requests, timeouts, and header information. Go also provides some shortcuts for invoking a server without directly initializing the struct. Rather than filling in the struct's properties yourself, as in the following code:

server := http.Server{
  Addr:           ":8080",
  Handler:        urlHandler,
  ReadTimeout:    1000 * time.Microsecond,
  WriteTimeout:   1000 * time.Microsecond,
  MaxHeaderBytes: 0,
  TLSConfig:      nil,
}

you can, if you are happy with the default properties, simply execute the following code:

http.ListenAndServe(":8080", nil)

This will invoke a server struct for you and set only the Addr and Handler properties within. There will be times, of course, when we'll want more granular control over our server, but for the time being, this will do just fine. Let's take this concept and output some JSON data via HTTP for the first time.
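Before building that, here is a minimal, self-contained sketch of what the more granular setup mentioned above might look like. Note that this is an illustration rather than code from the book: the catch-all route, the ten-second timeouts, and the one-megabyte header limit are arbitrary values chosen purely for the example.

package main

import (
  "fmt"
  "net/http"
  "time"
)

func main() {
  // A ServeMux routes incoming request paths to handler functions.
  mux := http.NewServeMux()
  mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
    fmt.Fprint(w, "Hello from a fully configured server")
  })

  // Filling in the struct ourselves gives us control over timeouts,
  // header limits, and the handler instead of relying on the defaults.
  server := &http.Server{
    Addr:           ":8080",
    Handler:        mux,
    ReadTimeout:    10 * time.Second,
    WriteTimeout:   10 * time.Second,
    MaxHeaderBytes: 1 << 20, // 1 MB
  }

  // Calling ListenAndServe on our own server value, rather than through
  // the package-level shortcut, is what makes these settings take effect.
  if err := server.ListenAndServe(); err != nil {
    fmt.Println(err)
  }
}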
Quick hitter – saying Hello, World via API

As mentioned earlier in this article, we'll go off course and do some work that we'll preface with quick hitter to denote that it's unrelated to our larger project. In this case, we just want to rev up our http package and deliver some JSON to the browser. Unsurprisingly, we'll be merely outputting the uninspiring Hello, world message to, well, the world. Let's set this up with our required package and imports:

package main

import (
  "net/http"
  "encoding/json"
  "fmt"
)

This is the bare minimum that we need to output a simple string in JSON via HTTP. Marshalling JSON data can be a bit more complex than what we'll look at here, so if the struct for our message doesn't immediately make sense, don't worry. This is our response struct, which contains all of the data that we wish to send to the client after grabbing it from our API:

type API struct {
  Message string `json:"message"`
}

There is not a lot here yet, obviously. All we're setting is a single message string in the obviously named Message variable. Finally, we need to set up our main function (as follows) to respond to a route and deliver a marshaled JSON response:

func main() {
  http.HandleFunc("/api", func(w http.ResponseWriter, r *http.Request) {
    message := API{"Hello, world!"}
    output, err := json.Marshal(message)
    if err != nil {
      fmt.Println("Something went wrong!")
    }
    fmt.Fprintf(w, string(output))
  })
  http.ListenAndServe(":8080", nil)
}

Upon entering main(), we set a route handling function to respond to requests at /api that initializes an API struct with Hello, world! We then marshal this to a JSON byte array, output, cast that byte array to a string, and send it to our io.Writer (in this case, an http.ResponseWriter value). The last step is a kind of quick-and-dirty approach for sending our byte array through a function that expects a string, but there's not much that could go wrong in doing so. Go handles typecasting pretty simply by applying the type as a function that flanks the target variable. In other words, we can cast an int64 value to an integer by simply surrounding it with the int(OurInt64) function. There are some exceptions to this—some types cannot be directly cast to others, and some require a package such as strconv to manage the typecasting—but that's the general idea. If we head over to our browser and call localhost:8080/api (as shown in the following screenshot), you should get exactly what we expect, assuming everything went correctly.

Summary

We've touched on the very basics of developing a simple web service interface in Go. Admittedly, this particular version is extremely limited and vulnerable to attack, but it shows the basic mechanisms that we can employ to produce usable, formalized output that can be ingested by other services. At this point, you should have the basic tools at your disposal that are necessary to start refining this process and our application as a whole.

Resources for Article: Further resources on this subject: Adding Authentication [article] C10K – A Non-blocking Web Server in Go [article] Clusters, Parallel Computing, and Raspberry Pi – A Brief Background [article]

Creating a Responsive Project

Packt
08 Apr 2015
14 min read
In today's ultra connected world, a good portion of your students probably own multiple devices. Of course, they may want to take your eLearning course on all their devices. They might want to start the course on their desktop computer at work, continue it on their phone while commuting back home, and finish it at night on their tablet. In other situations, students might only have a mobile phone available to take the course, and sometimes the topic to teach only makes sense on a mobile device. To address these needs, you want to deliver your course on multiple screens. As of Captivate 6, you can publish your courses in HTML5, which makes them available on mobile devices that do not support the Flash technology. Now, Captivate 8 takes it one huge step further by introducing Responsive Projects. A Responsive Project is a project that you can optimize for the desktop, the tablet, and the mobile phone. It is like providing three different versions of the course in a single project. In this article, by Damien Bruyndonckx, author of the book Mastering Adobe Captivate 8, you will be introduced to the key concepts and techniques used to create a responsive project in Captivate 8. While reading, keep the following two things in mind. First, everything you have learned so far can be applied to a responsive project. Second, creating a responsive project requires more experience than what a book can offer. I hope that this article will give you a solid understanding of the core concepts in order to jump start your own discovery of Captivate 8 Responsive Projects. (For more resources related to this topic, see here.) About Responsive Projects A Responsive Project is meant to be used on multiple devices, including tablets and smartphones that do not support the Flash technology. Therefore, it can be published only in HTML5. This means that all the restrictions of a traditional HTML5 project also apply to a Responsive Project. For example, you will not be able to add Text Animations or Rollover Objects in a Responsive Project because these features are not supported in HTML5. Responsive design is not limited to eLearning projects made in Captivate. It is actually used by web designers and developers around the world to create websites that have the ability to automatically adapt themselves to the screen they are viewed on. To do so, they need to detect the screen width that is available to their content and adapt accordingly. Responsive Design by Ethan Marcotte If you want to know more about responsive design, I strongly recommend a book by Ethan Marcotte in the A Book Apart collection. This is the founding book of responsive design. If you have some knowledge of HTML and CSS, this is a must have resource in order to fully understand what responsive design is all about. More information on this book can be found at http://www.abookapart.com/products/responsive-web-design. Viewport size versus screen size At the heart of the responsive design approach is the width of the screen used by the student to consume the content. To be more exact, it is the width of the viewport that is detected—not the width of the screen. The viewport is the area that is actually available to the content. On a desktop or laptop computer, the difference between the screen width and the viewport width is very easy to understand. Let's do a simple experiment to grasp that concept hands-on: Open your default web browser and make sure it is in fullscreen mode. Browse to http://www.viewportsizes.com/mine. 
The main information provided by this page is the size of your viewport. Because your web browser is currently in fullscreen mode, the viewport size should be close (but not quite the same) to the resolution of your screen. Use your mouse to resize your browser window and see how the viewport size evolves. As shown in the following screenshot, the size of the viewport changes as you resize your browser window but the actual screen you use is always the same: This viewport concept is also valid on a mobile device, even though it may be a bit subtler to grasp. The following screenshot shows the http://www.viewportsizes.com/mine web page as viewed in the Safari mobile browser on an iPad mini held in landscape (left) and in portrait (right). As you can see, the viewport size changes but, once again, the actual screen used is always the same. Don't hesitate to perform these experiments on your own mobile devices and compare your results to mine. Another thing that might affect the viewport size on a mobile device is the browser used. The following screenshot shows the viewport size of the same iPad mini held in portrait mode in Safari mobile (left) and in Chrome mobile (right). Note that the viewport size is slightly different in Chrome than in Safari. This is due to the interface elements of the browser (such as the address bar and the tabs) that use a variable portion of the screen real estate in each browser. Understanding breakpoints Before setting up your own Responsive Project there is one more concept to explore. To discover this second concept, you will also perform a simple experiment with your desktop or laptop computer: Open the web browser of your desktop or laptop computer and maximize it to fullscreen size. Browse to http://courses.dbr-training.eu/8/goingmobile. This is the online version of the Responsive Project that you will build in this article. When viewed on a desktop or laptop computer in fullscreen mode, you should see a version of the course optimized for larger screens. Use your mouse to slowly scale your browser window down. Note how the size and the position of the elements are automatically recalculated as you resize the browser window. At some point, you should see that the height of the slide changes and that another layout is applied. The point at which the layout changes is situated at a width of exactly 768 px. In other words, if the width of the browser (actually the viewport) is above 768 px, one layout is applied, but if the width of the viewport falls under 768 px, another layout is applied. You just discovered a breakpoint. The layout that is applied after the breakpoint (in other words when the viewport width is lower than 768 px) is optimized for a tablet device held in portrait mode. Note that even though you are using a desktop or laptop computer, it is the tablet-optimized layout that is applied when the viewport width is at or under 768 px. Keep scaling the browser window down and see how the position and the size of the elements of the slide are recalculated in real time as you resize the browser window. This simple experiment should better explain what a breakpoint is and how these breakpoints work. Before moving on to the next section, let's take some time to summarize the important concepts uncovered in this section: The aim of responsive design is to provide an optimized viewing experience across a wide range of devices and form factors. 
To achieve this goal, responsive design uses fluid sizing and positioning techniques, responsive images, and breakpoints. Responsive design is not limited to eLearning courses made in Captivate, but is widely used in web and app design by thousands of designers around the world. A Captivate 8 Responsive Project can only be published in HTML5. The capabilities and restrictions of a standard HTML5 project also apply to a Responsive Project. A breakpoint defines the exact viewport width at which the layout breaks and another layout is applied. The breakpoints, and therefore the optimized layouts, are based on the width of the viewport and not on the detection of an actual device. This explains why the tablet-optimized layout is applied to the downsized browser window on a desktop computer. The viewport width and the screen width are two different things. In the next section, you will start the creation of your very first Responsive Project. To learn more about these concepts, there is a video course on Responsive eLearning with Captivate 8 available on Adobe KnowHow. The course itself is for a fee, but there is a free sample of 15 minutes that walks you through these concepts using another approach. I suggest you take some time to watch this 15-minute sample at https://www.adobeknowhow.com/courselanding/create-responsive-elearning-adobe-captivate-8. Setting up a Responsive Project It is now time to open Captivate and set up your first Responsive Project using the following steps: Open Captivate or close every open file. Switch to the New tab of the Welcome screen. Double-click on the Responsive Project thumbnail. Alternatively, you can also use the File | New Project | Responsive Project menu item. This action creates a new Responsive Project. Note that the choice to create a Responsive Project or a regular Captivate project must be done up front when creating the project. As of Captivate 8, it is not yet possible to take an existing non-responsive project and make it responsive after the fact. The workspace of Captivate should be very similar to what you are used to, with the exception of an extra ruler that spans across the top of the screen. This ruler contains three predefined breakpoints. As shown in the following screenshot, the first breakpoint is called the Primary breakpoint and is situated at 1024 pixels. Also, note that the breakpoint ruler is green when the Primary breakpoint is selected. You will now discover the other two breakpoints using the following steps. In the breakpoint ruler, click on the icon of a tablet to select the second breakpoint. The stage and all the elements it contains are resized. In the breakpoint ruler at the top of the stage, the second breakpoint is now selected. It is called the Tablet breakpoint and is situated at 768 pixels. Note the blue color associated with the Tablet breakpoint. In the breakpoint ruler, click on the icon of a smartphone to select the third and last breakpoint. Once again, the stage and the elements it contains are resized. The third breakpoint is called the Mobile breakpoint and is situated at 360 pixels. The orange color is associated with this third breakpoint. Adjusting the breakpoints In some situations, the default location of these three breakpoints works just fine But, in other situations, some adjustments are needed. 
In this project, you want to target the regular screen of a desktop or laptop computer in the Primary view, an iPad mini held in portrait in the Tablet view, and an iPhone 4 held in portrait in the Mobile view. You will now adjust the breakpoints to fit these particular specifications by using the following steps: Click on the Primary breakpoint in the breakpoints ruler to select it. Use your mouse to move the breakpoint all the way to the left. Captivate should stop at a width of 1280 pixels. It is not possible to have a stage wider than 1280 pixels in a Responsive Project. For this project, the default width of 1024 pixels is perfect, so you will now move this breakpoint back to its original location. Move the Primary breakpoint to the right until it is placed at 1024 pixels. Return to your web browser and browse to http://www.viewportsizes.com. Once on the website, type iPad in the Filter field at the top of the page. The portrait width of an iPad mini is 768 pixels. In Captivate, the Tablet breakpoint is placed at 768 pixels by default, which is perfectly fine for the needs of this project. Still on the http://www.viewportsizes.com website, type iPhone in the Filter field at the top of the page. The portrait width of an iPhone 4 is 320 pixels. In Captivate, the Mobile breakpoint is placed at 360 pixels by default. You will now move it to 320 pixels so that it matches the portrait width of an iPhone 4. Return to Captivate and select the Mobile breakpoint. Move the Mobile breakpoint to the right until it is placed at exactly 320 pixels. Note that the minimum width of the stage in the Mobile breakpoint is 320 pixels. In other words, the stage cannot be narrower than 320 pixels in a Responsive Project. The viewport size of your device Before moving on to the next section, take some time to inspect the http://viewportsizes.com site a little further. For example, type the name of the devices you own and compare their characteristics to the breakpoints of the current project. Will the project fit on your devices? How do you need to change the breakpoints so the project perfectly fits your devices? The breakpoints are now in place. But these breakpoints only take care of the width of the stage. In the next section, you will adjust the height of the stage in each breakpoint. Adjusting the slide height Captivate slides have a fixed height. This is the primary difference between a Captivate project and a regular responsive website whose page height is infinite. In this section, you will adjust the height of the stage in all three breakpoints. The steps are as follows: Still in Captivate, click on the desktop icon situated on the left side of the breakpoint switcher to return to the Primary view. On the far right of the breakpoint ruler, select the View Device Height checkbox. As shown in the following screenshot, a yellow border now surrounds the stage in the Primary view, and the slide height is displayed in the top left corner of the stage: For the Primary view, a slide height of 627 pixels is perfect. It matches the viewport size of an iPad held in landscape and provides a big enough area on a desktop or laptop computer. Click on the Tablet breakpoint to select it. Return to http://www.viewportsizes.com/ and type iPad in the filter field at the top of the page. According to the site, the height of an iPad is 1024 pixels. Use your mouse to drag the yellow rectangle situated at the bottom of the stage down until the stage height is around 950 pixels. 
It may be needed to reduce the zoom magnification to perform this action in good conditions. After this operation, the stage should look like the following screenshot (the zoom magnification has been reduced to 50 percent in the screenshot): With a height of 950 pixels, the Captivate slide can fit on an iPad screen and still account for the screen real estate consumed by the interface elements of the browser such as the address bar and the tabs. Still in the Tablet view, make sure the slide is the selected object and open the Properties panel. Note that, at the end of the Properties panel, the Slide Height property is currently unavailable. Click on the chain icon (Unlink from Device height) next to the Slide Height property. By default, the slide height is linked to the device height. By clicking on the chain icon you have broken the link between the slide height and the device (or viewport) height. This allows you to modify the height of the Captivate slide without modifying the height of the device. Use the Properties panel to change the Slide Height to 1024 pixels. On the stage, note that the slide is now a little bit higher than the yellow rectangle. This means that this particular slide will generate a vertical scrollbar on the tablet device held in portrait. Scrolling is something you want to avoid as much as possible, so you will now enable the link between the device height and the Slide Height. In the Properties panel, click on the chain icon next to the Slide Height property to enable the link. The slide height is automatically readjusted to the device height of 950 pixels. Use the breakpoint ruler to select the Mobile breakpoint. By default, the device height in the Mobile breakpoint is set to 415 pixels. According to the http://www.viewportsizes.com/ website, the screen of an iPhone 4 has a height of 480 pixels. A slide height of 415 pixels is perfect to accommodate the slide itself plus the interface elements of the mobile browser. Summary In this article, you learned the key concepts and techniques used to create a responsive project in Captivate 8. Resources for Article: Further resources on this subject: Publishing the project for mobile [article] Getting Started with Adobe Premiere Pro CS6 Hotshot [article] Creating Motion Through the Timeline [article]

Optimizing JavaScript for iOS Hybrid Apps

Packt
01 Apr 2015
17 min read
In this article by Chad R. Adams, author of the book, Mastering JavaScript High Performance, we are going to take a look at the process of optimizing JavaScript for iOS web apps (also known as hybrid apps). We will take a look at some common ways of debugging and optimizing JavaScript and page performance, both in a device's web browser and a standalone app's web view. Also, we'll take a look at the Apple Web Inspector and see how to use it for iOS development. Finally, we will also gain a bit of understanding about building a hybrid app and learn the tools that help to better build JavaScript-focused apps for iOS. Moreover, we'll learn about a class that might help us further in this. We are going to learn about the following topics in the article: Getting ready for iOS development iOS hybrid development (For more resources related to this topic, see here.) Getting ready for iOS development Before starting this article with Xcode examples and using iOS Simulator, I will be displaying some native code and will use tools that haven't been covered in this course. Mobile app developments, regardless of platform, are books within themselves. When covering the build of the iOS project, I will be briefly going over the process of setting up a project and writing non-JavaScript code to get our JavaScript files into a hybrid iOS WebView for development. This is essential due to the way iOS secures its HTML5-based apps. Apps on iOS that use HTML5 can be debugged, either from a server or from an app directly, as long as the app's project is built and deployed in its debug setting on a host system (meaning the developers machine). Readers of this article are not expected to know how to build a native app from the beginning to the end. And that's completely acceptable, as you can copy-and-paste, and follow along as I go. But I will show you the code to get us to the point of testing JavaScript code, and the code used will be the smallest and the fastest possible to render your content. All of these code samples will be hosted as an Xcode project solution of some type on Packt Publishing's website, but they will also be shown here if you want to follow along, without relying on code samples. Now with that said, lets get started… iOS hybrid development Xcode is the IDE provided by Apple to develop apps for both iOS devices and desktop devices for Macintosh systems. As a JavaScript editor, it has pretty basic functions, but Xcode should be mainly used in addition to a project's toolset for JavaScript developers. It provides basic code hinting for JavaScript, HTML, and CSS, but not more than that. To install Xcode, we will need to start the installation process from the Mac App Store. Apple, in recent years, has moved its IDE to the Mac App Store for faster updates to developers and subsequently app updates for iOS and Mac applications. Installation is easy; simply log in with your Apple ID in the Mac App Store and download Xcode; you can search for it at the top or, if you look in the right rail among popular free downloads, you can find a link to the Xcode Mac App Store page. Once you reach this, click Install as shown in the following screenshot: It's important to know that, for the sake of simplicity in this article, we will not deploy an app to a device; so if you are curious about it, you will need to be actively enrolled in Apple's Developer Program. The cost is 99 dollars a year, or 299 dollars for an enterprise license that allows deployment of an app outside the control of the iOS App Store. 
If you're curious to learn more about deploying to a device, the code in this article will run on the device assuming that your certificates are set up on your end. For more information on this, check out Apple's iOS Developer Center documentation online at https://developer.apple.com/library/ios/documentation/IDEs/Conceptual/AppDistributionGuide/Introduction/Introduction.html#//apple_ref/doc/uid/TP40012582. Once it's installed, we can open up Xcode and look at the iOS Simulator; we can do this by clicking XCode, followed by Open Developer Tool, and then clicking on iOS Simulator. Upon first opening iOS Simulator, we will see what appears to be a simulation of an iOS device, shown in the next screenshot. Note that this is a simulation, not a real iOS device (even if it feels pretty close). A neat trick for JavaScript developers working with local HTML files outside an app is that they can quickly drag-and-drop an HTML file. Due to this, the simulator will open the mobile version of Safari, the built-in browser for iPhone and iPads, and render the page as it would do on an iOS device; this is pretty helpful when testing pages before deploying them to a web server. Setting up a simple iOS hybrid app JavaScript performance on a built-in hybrid application can be much slower than the same page run on the mobile version of Safari. To test this, we are going to build a very simple web browser using Apple's new programming language Swift. Swift is an iOS-ready language that JavaScript developers should feel at home with. Swift itself follows a syntax similar to JavaScript but, unlike JavaScript, variables and objects can be given types allowing for stronger, more accurate coding. In that regard, Swift follows syntax similar to what can be seen in the ECMAScript 6 and TypeScript styles of coding practice. If you are checking these newer languages out, I encourage you to check out Swift as well. Now let's create a simple web view, also known as a UIWebView, which is the class used to create a web view in an iOS app. First, let's create a new iPhone project; we are using an iPhone to keep our app simple. Open Xcode and select the Create new XCode project project; then, as shown in the following screenshot, select the Single View Application option and click the Next button. On the next view of the wizard, set the product name as JS_Performance, the language to Swift, and the device to iPhone; the organization name should autofill with your name based on your account name in the OS. The organization identifier is a reverse domain name unique identifier for our app; this can be whatever you deem appropriate. For instructional purposes, here's my setup: Once your project names are set, click the Next button and save to a folder of your choice with Git repository left unchecked. When that's done, select Main.storyboard under your Project Navigator, which is found in the left panel. We should be in the storyboard view now. Let's open the Object Library, which can be found in the lower-right panel in the subtab with an icon of a square inside a circle. Search for Web View in the Object Library in the bottom-right search bar, and then drag that to the square view that represents our iOS view. We need to consider two more things before we link up an HTML page using Swift; we need to set constraints as native iOS objects will be stretched to fit various iOS device windows. 
To fill the space, you can add the constraints by selecting the UIWebView object and pressing Command + Option + Shift + = on your Mac keyboard. Now you should see a blue border appear briefly around your UIWebView. Lastly, we need to connect our UIWebView to our Swift code; for this, we need to open the Assistant Editor by pressing Command + Option + Return on the keyboard. We should see ViewController.swift open up in a side panel next to our Storyboard. To link this as a code variable, right-click (or option-click the UIWebView object) and, with the button held down, drag the UIWebView to line number 12 in the ViewController.swift code in our Assistant Editor. This is shown in the following diagram: Once that's done, a popup will appear. Now leave everything the same as it comes up, but set the name to webview; this will be the variable referencing our UIWebView. With that done, save your Main.storyboard file and navigate to your ViewController.swift file. Now take a look at the Swift code shown in the following screenshot, and copy it into the project; the important part is on line 19, which contains the filename and type loaded into the web view; which in this case, this is index.html. Now obviously, we don't have an index.html file, so let's create one. Go to File and then select New followed by the New File option. Next, under iOS select Empty Application and click Next to complete the wizard. Save the file as index.html and click Create. Now open the index.html file, and type the following code into the HTML page: <br />Hello <strong>iOS</strong> Now click Run (the play button in the main iOS task bar), and we should see our HTML page running inside our own app, as shown here: That's nice work! We built an iOS app with Swift (even if it's a simple app). Let's create a structured HTML page; we will override our Hello iOS text with the HTML shown in the following screenshot: Here, we use the standard console.time function and print a message to our UIWebView page when finished; if we hit Run in Xcode, we will see the Loop Completed message on load. But how do we get our performance information? How can we get our console.timeEnd function code on line 14 on our HTML page? Using Safari web inspector for JavaScript performance Apple does provide a Web Inspector for UIWebViews, and it's the same inspector for desktop Safari. It's easy to use, but has an issue: the inspector only works on iOS Simulators and devices that have started from an Xcode project. This limitation is due to security concerns for hybrid apps that may contain sensitive JavaScript code that could be exploited if visible. Let's check our project's embedded HTML page console. First, open desktop Safari on your Mac and enable developer mode. Launch the Preferences option. Under the Advanced tab, ensure that the Show develop menu in menu bar option is checked, as shown in the following screenshot: Next, let's rerun our Xcode project, start up iOS Simulator and then rerun our page. Once our app is running with the Loop Completed result showing, open desktop Safari and click Develop, then iOS Simulator, followed by index.html. If you look closely, you will see iOS simulator's UIWebView highlighted in blue when you place the mouse over index.html; a visible page is seen as shown in the following screenshot: Once we release the mouse on index.html, we Safari's Web Inspector window appears featuring our hybrid iOS app's DOM and console information. 
The Safari's Web Inspector is pretty similar to Chrome's Developer tools in terms of feature sets; the panels used in the Developer tools also exist as icons in Web Inspector. Now let's select the Console panel in Web Inspector. Here, we can see our full console window including our Timer console.time function test included in the for loop. As we can see in the following screenshot, the loop took 0.081 milliseconds to process inside iOS. Comparing UIWebView with Mobile Safari What if we wanted to take our code and move it to Mobile Safari to test? This is easy enough; as mentioned earlier in the article, we can drag-and-drop the index.html file into our iOS Simulator, and then the OS will open the mobile version of Safari and load the page for us. With that ready, we will need to reconnect Safari Web Inspector to the iOS Simulator and reload the page. Once that's done, we can see that our console.time function is a bit faster; this time it's roughly 0.07 milliseconds, which is a full .01 milliseconds faster than UIWebView, as shown here: For a small app, this is minimal in terms of a difference in performance. But, as an application gets larger, the delay in these JavaScript processes gets longer and longer. We can also debug the app using the debugging inspector in the Safari's Web Inspector tool. Click Debugger in the top menu panel in Safari's Web Inspector. We can add a break point to our embedded script by clicking a line number and then refreshing the page with Command + R. In the following screenshot, we can see the break occurring on page load, and we can see our scope variable displayed for reference in the right panel: We can also check page load times using the timeline inspector. Click Timelines at the top of the Web Inspector and now we will see a timeline similar to the Resources tab found in Chrome's Developer tools. Let's refresh our page with Command + R on our keyboard; the timeline then processes the page. Notice that after a few seconds, the timeline in the Web Inspector stops when the page fully loads, and all JavaScript processes stop. This is a nice feature when you're working with the Safari Web Inspector as opposed to Chrome's Developer tools. Common ways to improve hybrid performance With hybrid apps, we can use all the techniques for improving performance using a build system such as Grunt.js or Gulp.js with NPM, using JSLint to better optimize our code, writing code in an IDE to create better structure for our apps, and helping to check for any excess code or unused variables in our code. We can use best performance practices such as using strings to apply an HTML page (like the innerHTML property) rather than creating objects for them and applying them to the page that way, and so on. Sadly, the fact that hybrid apps do not perform as well as native apps still holds true. Now, don't let that dismay you as hybrid apps do have a lot of good features! 
Some of these are as follows: They are (typically) faster to build than using native code They are easier to customize They allow for rapid prototyping concepts for apps They are easier to hand off to other JavaScript developers rather than finding a native developer They are portable; they can be reused for another platform (with some modification) for Android devices, Windows Modern apps, Windows Phone apps, Chrome OS, and even Firefox OS They can interact with native code using helper libraries such as Cordova At some point, however, application performance will be limited to the hardware of the device, and it's recommended you move to native code. But, how do we know when to move? Well, this can be done using Color Blended Layers. The Color Blended Layers option applies an overlay that highlights slow-performing areas on the device display, for example, green for good performance and red for slow performance; the darker the color is, the more impactful will be the performance result. Rerun your app using Xcode and, in the Mac OS toolbar for iOS Simulator, select Debug and then Color Blended Layers. Once we do that, we can see that our iOS Simulator shows a green overlay; this shows us how much memory iOS is using to process our rendered view, both native and non-native code, as shown here: Currently, we can see a mostly green overlay with the exception of the status bar elements, which take up more render memory as they overlay the web view and have to be redrawn over that object repeatedly. Let's make a copy of our project and call it JS_Performance_CBL, and let's update our index.html code with this code sample, as shown in the following screenshot: Here, we have a simple page with an empty div; we also have a button with an onclick function called start. Our start function will update the height continuously using the setInterval function, increasing the height every millisecond. Our empty div also has a background gradient assigned to it with an inline style tag. CSS background gradients are typically a huge performance drain on mobile devices as they can potentially re-render themselves over and over as the DOM updates itself. Some other issues include listener events; some earlier or lower-end devices do not have enough RAM to apply an event listener to a page. Typically, it's a good practice to apply onclick attributes to HTML either inline or through JavaScript. Going back to the gradient example, let's run this in iOS Simulator and enable Color Blended Layers after clicking our HTML button to trigger the JavaScript animation. As expected, our div element that we've expanded now has a red overlay indicating that this is a confirmed performance issue, which is unavoidable. To correct this, we would need to remove the CSS gradient background, and it would show as green again. However, if we had to include a gradient in accordance with a design spec, a native version would be required. When faced with UI issues such as these, it's important to understand tools beyond normal developer tools and Web Inspectors, and take advantage of the mobile platform tools that provide better analysis of our code. Now, before we wrap this article, let's take note of something specific for iOS web views. 
The WKWebView framework At the time of writing, Apple has announced the WebKit framework, a first-party iOS library intended to replace UIWebView with more advanced and better performing web views; this was done with the intent of replacing apps that rely on HTML5 and JavaScript with better performing apps as a whole. The WebKit framework, also known in developer circles as WKWebView, is a newer web view that can be added to a project. WKWebView is also the base class name for this framework. This framework includes many features that native iOS developers can take advantage of. These include listening for function calls that can trigger native Objective-C or Swift code. For JavaScript developers like us, it includes a faster JavaScript runtime called Nitro, which has been included with Mobile Safari since iOS6. Hybrid apps have always run worse that native code. But with the Nitro JavaScript runtime, HTML5 has equal footing with native apps in terms of performance, assuming that our view doesn't consume too much render memory as shown in our color blended layers example. WKWebView does have limitations though; it can only be used for iOS8 or higher and it doesn't have built-in Storyboard or XIB support like UIWebView. So, using this framework may be an issue if you're new to iOS development. Storyboards are simply XML files coded in a specific way for iOS user interfaces to be rendered, while XIB files are the precursors to Storyboard. XIB files allow for only one view whereas Storyboards allow multiple views and can link between them too. If you are working on an iOS app, I encourage you to reach out to your iOS developer lead and encourage the use of WKWebView in your projects. For more information, check out Apple's documentation of WKWebView at their developer site at https://developer.apple.com/library/IOs/documentation/WebKit/Reference/WKWebView_Ref/index.html. Summary In this article, we learned the basics of creating a hybrid-application for iOS using HTML5 and JavaScript; we learned about connecting the Safari Web Inspector to our HTML page while running an application in iOS Simulator. We also looked at Color Blended Layers for iOS Simulator, and saw how to test for performance from our JavaScript code when it's applied to device-rendering performance issues. Now we are down to the wire. As for all JavaScript web apps before they go live to a production site, we need to smoke-test our JavaScript and web app code and see if we need to perform any final improvements before final deployment. Resources for Article: Further resources on this subject: GUI Components in Qt 5 [article] The architecture of JavaScriptMVC [article] JavaScript Promises – Why Should I Care? [article]

REST – What You Didn't Know

Packt
24 Mar 2015
15 min read
Nowadays, topics such as cloud computing and mobile device service feeds, and other data sources being powered by cutting-edge, scalable, stateless, and modern technologies such as RESTful web services, leave the impression that REST has been invented recently. Well, to be honest, it is definitely not! In fact, REST was defined at the end of the 20th century. This article by Valentin Bojinov, author of the book RESTful Web API Design with Node.js, will walk you through REST's history and will teach you how REST couples with the HTTP protocol. You will look at the five key principles that need to be considered while turning an HTTP application into a RESTful-service-enabled application. You will also look at the differences between RESTful and SOAP-based services. Finally, you will learn how to utilize already existing infrastructure for your benefit. In this article, we will cover the following topics: A brief history of REST REST with HTTP RESTful versus SOAP-based services Taking advantage of existing infrastructure (For more resources related to this topic, see here.) A brief history of REST Let's look at a time when the madness around REST made everybody talk restlessly about it! This happened back in 1999, when a request for comments was submitted to the Internet Engineering Task Force (IETF: http://www.ietf.org/) via RFC 2616: "Hypertext Transfer Protocol - HTTP/1.1." One of its authors, Roy Fielding, later defined a set of principles built around the HTTP and URI standards. This gave birth to REST as we know it today. Let's look at the key principles around the HTTP and URI standards, sticking to which will make your HTTP application a RESTful-service-enabled application: Everything is a resource Each resource is identifiable by a unique identifier (URI) Use the standard HTTP methods Resources can have multiple representation Communicate statelessly Principle 1 – everything is a resource To understand this principle, one must conceive the idea of representing data by a specific format and not by a physical file. Each piece of data available on the Internet has a format that could be described by a content type. For example, JPEG Images; MPEG videos; html, xml, and text documents; and binary data are all resources with the following content types: image/jpeg, video/mpeg, text/html, text/xml, and application/octet-stream. Principle 2 – each resource is identifiable by a unique identifier Since the Internet contains so many different resources, they all should be accessible via URIs and should be identified uniquely. Furthermore, the URIs can be in a human-readable format (frankly I do believe they should be), despite the fact that their consumers are more likely to be software programmers rather than ordinary humans. The URI keeps the data self-descriptive and eases further development on it. In addition, using human-readable URIs helps you to reduce the risk of logical errors in your programs to a minimum. Here are a few sample examples of such URIs: http://www.mydatastore.com/images/vacation/2014/summer http://www.mydatastore.com/videos/vacation/2014/winter http://www.mydatastore.com/data/documents/balance?format=xml http://www.mydatastore.com/data/archives/2014 These human-readable URIs expose different types of resources in a straightforward manner. 
In the example, it is quite clear that the resource types are as follows:

Images
Videos
XML documents
Some kinds of binary archive documents

Principle 3 – use the standard HTTP methods

The native HTTP protocol (RFC 2616) defines eight actions, also known as verbs:

GET
POST
PUT
DELETE
HEAD
OPTIONS
TRACE
CONNECT

The first four of them feel natural in the context of resources, especially when defining actions for resource data manipulation. Let's make a parallel with relational SQL databases, where the native language for data manipulation is CRUD (short for Create, Read, Update, and Delete), originating from the different types of SQL statements: INSERT, SELECT, UPDATE, and DELETE respectively. In the same manner, if you apply the REST principles correctly, the HTTP verbs should be used as shown here:

HTTP verb | Action | Response status code
GET | Request an existing resource | "200 OK" if the resource exists, "404 Not Found" if it does not exist, and "500 Internal Server Error" for other errors
PUT | Create or update a resource | "201 Created" if a new resource is created, "200 OK" if updated, and "500 Internal Server Error" for other errors
POST | Update an existing resource | "200 OK" if the resource has been updated successfully, "404 Not Found" if the resource to be updated does not exist, and "500 Internal Server Error" for other errors
DELETE | Delete a resource | "200 OK" if the resource has been deleted successfully, "404 Not Found" if the resource to be deleted does not exist, and "500 Internal Server Error" for other errors

There is an exception in the usage of the verbs, however. I just mentioned that PUT is used to create a resource. For instance, when a resource has to be created under a specific URI, then PUT is the appropriate request:

PUT /data/documents/balance/22082014 HTTP/1.1
Content-Type: text/xml
Host: www.mydatastore.com

<?xml version="1.0" encoding="utf-8"?>
<balance date="22082014">
  <Item>Sample item</Item>
  <price currency="EUR">100</price>
</balance>

HTTP/1.1 201 Created
Content-Type: text/xml
Location: /data/documents/balance/22082014

However, in your application you may want to leave it up to the server REST application to decide where to place the newly created resource, and thus create it under an appropriate but still unknown or non-existing location. For instance, in our example, we might want the server to create the date part of the URI based on the current date. In such cases, it is perfectly fine to use the POST verb against the main resource URI and let the server respond with the location of the newly created resource:

POST /data/documents/balance HTTP/1.1
Content-Type: text/xml
Host: www.mydatastore.com

<?xml version="1.0" encoding="utf-8"?>
<balance date="22082014">
  <Item>Sample item</Item>
  <price currency="EUR">100</price>
</balance>

HTTP/1.1 201 Created
Content-Type: text/xml
Location: /data/documents/balance

Principle 4 – resources can have multiple representations

A key feature of a resource is that it may be represented in a different form than the one in which it is stored. Thus, it can be requested or posted in different representations. As long as the specified format is supported, the REST-enabled endpoint should use it.
In the preceding example, we posted an XML representation of a balance, but if the server supported the JSON format, the following request would have been valid as well:

POST /data/documents/balance HTTP/1.1
Content-Type: application/json
Host: www.mydatastore.com

{"balance": {"date": "22082014", "Item": "Sample item", "price": {"-currency": "EUR", "#text": "100"}}}

HTTP/1.1 201 Created
Content-Type: application/json
Location: /data/documents/balance

Principle 5 – communicate statelessly

Resource manipulation operations through HTTP requests should always be considered atomic. All modifications of a resource should be carried out within an HTTP request in isolation. After the request execution, the resource is left in a final state, which implicitly means that partial resource updates are not supported. You should always send the complete state of the resource. Back to the balance example, updating the price field of a given balance would mean posting a complete JSON document that contains all of the balance data, including the updated price field. Posting only the updated price is not stateless, as it implies that the application is aware that the resource has a price field, that is, it knows its state.

Another reason for your RESTful application to be stateless is the fact that, once deployed in a production environment, it is likely that incoming requests are served by a load balancer, ensuring scalability and high availability. Once exposed via a load balancer, the idea of keeping your application state at the server side gets compromised. This doesn't mean that you are not allowed to keep the state of your application. It just means that you should keep it in a RESTful way. For example, keep a part of the state within the URI. The statelessness of your RESTful API isolates the caller against changes at the server side. Thus, the caller is not expected to communicate with the same server in consecutive requests. This allows easy application of changes within the server infrastructure, such as adding or removing nodes. Remember that it is your responsibility to keep your RESTful APIs stateless, as the consumers of the API would expect them to be.

Now that you know that REST is around 15 years old, a sensible question would be, "why has it become so popular just recently?" My answer to the question is that we as humans usually reject simple, straightforward approaches, and most of the time, we prefer spending more time on turning complex solutions into even more complex and sophisticated ones. Take classical SOAP web services, for example. Their various WS-* specifications are so many, and sometimes so loosely defined, that in order to make solutions from different vendors interoperable, the WS-* specifications had to be unified by another specification, WS-BasicProfile, which defines extra interoperability rules. In addition, SOAP-based web services transported over HTTP provide different means of transporting binary data, described in yet other sets of specifications such as SOAP with Attachment References (SwaRef) and Message Transmission Optimisation Mechanism (MTOM), mainly because the initial idea of the web service was to execute business logic and return its response remotely, not to transport large amounts of data. Well, I personally think that when it comes to data transfer, things should not be that complex.
Now that you know that REST is around 15 years old, a sensible question would be, "why has it become so popular just recently?" My answer is that we as humans usually reject simple, straightforward approaches, and most of the time, we prefer spending more time on turning complex solutions into even more complex and sophisticated ones. Take classical SOAP web services, for example. Their various WS-* specifications are so numerous, and sometimes so loosely defined, that a separate specification, WS-BasicProfile, had to define extra interoperability rules so that solutions from different vendors could work together. In addition, SOAP-based web services transported over HTTP need their own means of transporting binary data, described in yet more specifications such as SOAP with Attachment References (SwaRef) and Message Transmission Optimisation Mechanism (MTOM), mainly because the initial idea of the web service was to execute business logic and return its response remotely, not to transport large amounts of data. Well, I personally think that when it comes to data transfer, things should not be that complex.

This is where REST comes into place by introducing the concept of resources and standard means to manipulate them.

The REST goals

Now that we've covered the main REST principles, let's dive deeper into what can be achieved when they are followed:

Separation of the representation and the resource
Visibility
Reliability
Scalability
Performance

Separation of the representation and the resource

A resource is just a set of information, and as defined by principle 4, it can have multiple representations. However, the state of the resource is atomic. It is up to the caller to specify the desired representation via the headers of the HTTP request, and then it is up to the server application to handle the representation accordingly and return the appropriate HTTP status code:

HTTP 200 OK in the case of success
HTTP 400 Bad Request if an unsupported content type is requested, or for any other invalid request
HTTP 500 Internal Server Error when something unexpected happens during the request processing

For instance, let's assume that at the server side, we have balance resources stored in an XML file. We can have an API that allows a consumer to request the resource in various formats, such as application/json, application/zip, application/octet-stream, and so on. It would be up to the API itself to load the requested resource, transform it into the requested type (for example, JSON or XML), and either use zip to compress it or directly flush it to the HTTP response output. It is the Accept HTTP header that specifies the expected representation of the response data. So, if we want to request our balance data inserted in the previous section in XML format, the following request should be executed:

GET /data/balance/22082014 HTTP/1.1
Host: my-computer-hostname
Accept: text/xml

HTTP/1.1 200 OK
Content-Type: text/xml
Content-Length: 140

<?xml version="1.0" encoding="utf-8"?>
<balance date="22082014">
<Item>Sample item</Item>
<price currency="EUR">100</price>
</balance>

To request the same balance in JSON format, the Accept header needs to be set to application/json:

GET /data/balance/22082014 HTTP/1.1
Host: my-computer-hostname
Accept: application/json

HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 120

{"balance": {"date": "22082014", "Item": "Sample item", "price": {"-currency": "EUR", "#text": "100"}}}
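On the server side, this kind of negotiation usually boils down to inspecting the Accept header and picking a serializer. The following is a minimal Express sketch of the idea, again as an illustration rather than this article's actual code; the loadBalance() helper is assumed to exist and to return the stored balance both as an XML string and as a plain object:

var express = require('express');
var app = express();

app.get('/data/balance/:date', function (req, res) {
    var balance = loadBalance(req.params.date);   // hypothetical data-access helper
    if (!balance) {
        return res.status(404).send('Not Found');
    }

    // res.format() dispatches on the Accept header of the incoming request
    res.format({
        'text/xml': function () {
            res.type('text/xml').send(balance.asXml);
        },
        'application/json': function () {
            res.json(balance.asObject);
        },
        default: function () {
            // the requested representation is not supported
            res.status(406).send('Not Acceptable');
        }
    });
});

app.listen(3000);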
Visibility

REST is designed to be visible and simple. Visibility of the service means that every aspect of it should be self-descriptive and should follow the natural HTTP language according to principles 3, 4, and 5. Visibility in the context of the outer world would mean that monitoring applications would be interested only in the HTTP communication between the REST service and the caller. Since the requests and responses are stateless and atomic, nothing more is needed to follow the behavior of the application and to understand whether anything has gone wrong. Remember that caching reduces the visibility of your RESTful applications and should be avoided.

Reliability

Before talking about reliability, we need to define which HTTP methods are safe and which are idempotent in the REST context. So let's first define what safe and idempotent methods are:

An HTTP method is considered to be safe provided that, when requested, it does not modify or cause any side effects on the state of the resource.
An HTTP method is considered to be idempotent if its response is always the same, no matter how many times it is requested.

The following table shows you which HTTP methods are safe and which are idempotent:

HTTP Method    Safe    Idempotent
GET            Yes     Yes
POST           No      No
PUT            No      Yes
DELETE         No      Yes

Scalability and performance

So far, I have often stressed the importance of having a stateless implementation and stateless behavior for a RESTful web application. The World Wide Web is an enormous universe, containing a huge amount of data and a few times more users eager to get that data. Its evolution has brought about the requirement that applications should scale easily as their load increases. Scaling applications that have a state is hardly possible, especially when zero or close-to-zero downtime is needed. That's why being stateless is crucial for any application that needs to scale. In the best-case scenario, scaling your application would require you to put in another piece of hardware behind the load balancer. There would be no need for the different nodes to sync with each other, as they should not care about the state at all. Scalability is all about serving all your clients in an acceptable amount of time. Its main idea is to keep your application running and to prevent Denial of Service (DoS) caused by a huge amount of incoming requests.

Scalability should not be confused with the performance of an application. Performance is measured by the time needed for a single request to be processed, not by the total number of requests that the application can handle. The asynchronous non-blocking architecture and event-driven design of Node.js make it a logical choice for implementing a well-scalable application that performs well.

Working with WADL

If you are familiar with SOAP web services, you may have heard of the Web Services Description Language (WSDL). It is an XML description of the interface of the service. It is mandatory for a SOAP web service to be described by such a WSDL definition. Similar to SOAP web services, RESTful services also offer a description language, named WADL. WADL stands for Web Application Description Language. Unlike WSDL for SOAP web services, a WADL description of a RESTful service is optional, that is, consuming the service has nothing to do with its description. Here is a sample part of a WADL file that describes the GET operation of our balance service:

<application>
  <grammars>
    <include href="balance.xsd"/>
    <include href="error.xsd"/>
  </grammars>
  <resources base="http://localhost:8080/data/balance/">
    <resource path="{date}">
      <method name="GET">
        <request>
          <param name="date" type="xsd:string" style="template"/>
        </request>
        <response status="200">
          <representation mediaType="application/xml" element="service:balance"/>
          <representation mediaType="application/json" />
        </response>
        <response status="404">
          <representation mediaType="application/xml" element="service:balance"/>
        </response>
      </method>
    </resource>
  </resources>
</application>

This extract of a WADL file shows how an application exposing resources is described. Basically, each resource must be a part of an application. The resources element provides the base URI where the resources are located with its base attribute, and each resource describes its supported HTTP methods in method elements.
Additionally, an optional doc element can be used within both the resource and application elements to provide additional documentation about the service and its operations. Though WADL is optional, it significantly reduces the effort of discovering RESTful services.

Taking advantage of the existing infrastructure

The best part of developing and distributing RESTful applications is that the infrastructure needed is already out there waiting restlessly for you. As RESTful applications use the existing web space heavily, you need to do nothing more than follow the REST principles when developing. In addition, there are plenty of libraries available out there for any platform, and I do mean any given platform. This eases the development of RESTful applications, so you just need to choose your preferred platform and start developing.

Summary

In this article, you learned about the history of REST, and we made a slight comparison between RESTful services and classical SOAP web services. We looked at the five key principles that would turn our web application into a REST-enabled application, and finally took a look at how RESTful services are described and how we can simplify the discovery of the services we develop. Now that you know the REST basics, we are ready to dive into the Node.js way of implementing RESTful services.

Resources for Article:

Further resources on this subject:

Creating a RESTful API [Article]
So, what is Node.js? [Article]
CreateJS – Performing Animation and Transforming Function [Article]
Model-View-ViewModel

Packt
02 Mar 2015
24 min read
In this article, by Einar Ingebrigtsen, author of the book SignalR Blueprints, we will focus on a different programming model for client development: Model-View-ViewModel (MVVM). It will reiterate what you have already learned about SignalR, but you will also start to see a recurring theme in how you should architect decoupled software that adheres to the SOLID principles. It will also show the benefit of thinking in Single Page Application (SPA) terms, and how SignalR really fits well with this idea.

(For more resources related to this topic, see here.)

The goal – an imagined dashboard

A counterpart to any application is often monitoring its health: is it running, and are there any failures? Getting this information in real time when a failure occurs is important, and getting some statistics out of it is interesting as well. From a SignalR perspective, we will still use the hub abstraction to do pretty much what we have been doing, but the goal is to give ideas of how and what we can use SignalR for. Another goal is to dive into the architectural patterns, making it ready for larger applications. MVVM allows better separation and is very applicable to client development in general.

A question that you might ask yourself is why KnockoutJS instead of something like AngularJS? It boils down to personal preference to a certain degree. AngularJS is described as MVW, where W stands for Whatever. I find AngularJS less focused on the same things I focus on, and I also find it very verbose to get it up and running. I'm not in any way an expert in AngularJS, but I have used it on a project and I found myself writing a lot to make it work the way I wanted it to in terms of MVVM. However, I don't think it's fair to compare the two. KnockoutJS is very focused on what it's trying to solve, which is just a little piece of the puzzle, while AngularJS is a full client end-to-end framework. On this note, let's just jump straight to it.

Decoupling it all

MVVM is a pattern for client development that became very popular in the XAML stack enabled by Microsoft, and it is based on Martin Fowler's Presentation Model. Its principle is that you have a ViewModel that holds the state and exposes behavior that can be utilized from a view. The view observes any changes of the state the ViewModel exposes, making the ViewModel totally unaware that there is a view. The ViewModel is decoupled and can be put in isolation, which makes it perfect for automated testing. Part of the state that the ViewModel typically holds is the model, which is something it usually gets from the server, and a SignalR hub is the perfect transport for getting it. It boils down to recognizing the different concerns that make up the frontend and separating it all. This gives us the following diagram:

Back to basics

This time we will go back in time, going down what might be considered a more purist path: use the browser elements (HTML, JavaScript, and CSS) and don't rely on any server-side rendering. Clients today are powerful and very capable, and offloading the composition of what the user sees onto the client frees up server resources. You can also rely on the infrastructure of the Web for caching, with static HTML files not rendered by the server. In fact, you could actually put these resources on a content delivery network, making the files available as close as possible to the end user. This would result in better load times for the user.
You might have other reasons to perform server-side rendering and not just serve plain HTML. Leveraging existing infrastructure or third-party tools could be those reasons. It boils down to what's right for you. But this particular sample will focus on things that the client can do. Anyways, let's get started. Open Visual Studio and create a new project by navigating to FILE | New | Project. The following dialog box will show up:

From the left-hand side menu, select Web and then ASP.NET Web Application. Enter Chapter4 in the Name textbox and select your location. Select the Empty template from the template selector and make sure you deselect the Host in the cloud option. Then, click on OK, as shown in the following screenshot:

Setting up the packages

First, we want Twitter bootstrap. To get this, follow these steps:

Add a NuGet package reference. Right-click on References in Solution Explorer, select Manage NuGet Packages, and type Bootstrap in the search dialog box. Select it and then click on Install.

We want a slightly different look, so we'll download one of the many bootstrap themes out there. Add a NuGet package reference called metro-bootstrap.

As jQuery is still a part of this, let's add a NuGet package reference to it as well.

For the MVVM part, we will use something called KnockoutJS; add it through NuGet as well.

Add a NuGet package reference, as in the previous steps, but this time, type SignalR in the search dialog box. Find the package called Microsoft ASP.NET SignalR.

Making any SignalR hubs available for the client

Add a file called Startup.cs to the root of the project. Add a Configuration method that will expose any SignalR hubs, as follows:

public void Configuration(IAppBuilder app)
{
    app.MapSignalR();
}

At the top of the Startup.cs file, above the namespace declaration, but right below the using statements, add the following code:

[assembly: OwinStartupAttribute(typeof(Chapter4.Startup))]

Knocking it out of the park

KnockoutJS is a framework that implements a lot of the principles found in MVVM and makes it easier to apply. We're going to use the following two features of KnockoutJS, and it's therefore important to understand what they are and what significance they have:

Observables: In order for a view to be able to know when a state change occurs in a ViewModel, KnockoutJS has something called an observable for single objects or values, and an observable array for arrays.

BindingHandlers: In the view, the counterparts that are able to recognize the observables and know how to deal with their content are known as BindingHandlers. We create binding expressions in the view that instruct the view to get its content from the properties found in the binding context. The default binding context will be the ViewModel, but there are more advanced scenarios where this changes. In fact, there is a BindingHandler called with that enables you to specify the context at any given time.
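To make these two concepts a bit more tangible before we use them in the dashboard, here is a tiny standalone snippet. It is not part of the sample itself; the person object and its properties are made up purely for illustration:

// Illustration only: a single observable and an observable array
var person = {
    name: ko.observable('Jane'),           // a single observable value
    visits: ko.observableArray([1, 2, 3])  // an observable array
};

// Subscribers are notified whenever the value changes
person.name.subscribe(function (newValue) {
    console.log('Name changed to ' + newValue);
});

person.name('John');         // write by calling the observable with a value
console.log(person.name());  // read by calling it with no arguments, giving 'John'

person.visits.push(4);                // array helpers exist directly on the observable
console.log(person.visits().length);  // unwrap with () to get the plain array, giving 4

In the view, a binding expression such as data-bind="text: name" would then keep an element's text in sync with the observable; this is exactly the mechanism our own BindingHandler will plug into later.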
Our single page

Whether one should strive towards having an SPA is widely discussed on the Web these days. My opinion on the subject, in the interest of the user, is that we should really try to push things in this direction. Not having to post back, cause a full reload of the page and all its resources, and then get back into the correct state gives the user a better experience. Some of the arguments for performing post-backs every now and then go in the direction of fixing potential memory leaks happening in the browser. Although the technique is sound and the result is right, it really just camouflages a problem one has in the system. However, as with everything, it really depends on the situation.

At the core of an SPA is a single page (pun intended), which is usually the index.html file sitting at the root of the project. Add the new index.html file and edit it as follows:

Add a new HTML file (index.html) at the root of the project by right-clicking on the Chapter4 project in Solution Explorer. Navigate to Add | New Item | Web from the left-hand side menu, and then select HTML Page and name it index.html. Finally, click on Add.

Let's put in the things we've added dependencies to, starting with the style sheets. In the index.html file, you'll find the <head> tag; add the following code snippet under the <title></title> tag:

<link href="Content/bootstrap.min.css" rel="stylesheet" />
<link href="Content/metro-bootstrap.min.css" rel="stylesheet" />

Next, add the following code snippet right beneath the preceding code:

<script type="text/javascript" src="Scripts/jquery-1.9.0.min.js"></script>
<script type="text/javascript" src="Scripts/jquery.signalR-2.1.1.js"></script>
<script type="text/javascript" src="signalr/hubs"></script>
<script type="text/javascript" src="Scripts/knockout-3.2.0.js"></script>

Another thing we will need is something that helps us visualize things; Google has a free, open source charting library that we will use. We will take a dependency on the JavaScript APIs from Google. To do this, add the following script tag after the others:

<script type="text/javascript" src="https://www.google.com/jsapi"></script>

Now, we can start filling in the view part. Inside the <body> tag, we start by putting in a header, as shown here:

<div class="navbar navbar-default navbar-static-top bsnavbar">
    <div class="container">
        <div class="navbar-header">
            <h1>My Dashboard</h1>
        </div>
    </div>
</div>

The server side of things

In this little dashboard, we will look at web requests, both successful and failed. We will do some minor things to be able to do this in a very naive way, without having to flesh out a full mechanism to deal with error situations. Let's start by enabling all requests, even static resources such as HTML files, to run through all HTTP modules. A word of warning: there are performance implications of putting all requests through the managed pipeline, so normally, you wouldn't necessarily want to do this on a production system, but for this sample, it will be fine to show the concepts. Open Web.config in the project and add the following code snippet within the <configuration> tag:

<system.webServer>
  <modules runAllManagedModulesForAllRequests="true" />
</system.webServer>

The hub

In this sample, we will only have one hub, the one that will be responsible for dealing with reporting requests and failed requests. Let's add a new class called RequestStatisticsHub. Right-click on the project in Solution Explorer, select Class from Add, name it RequestStatisticsHub.cs, and then click on Add. The new class should inherit from Hub. Add the following using statement at the top:

using Microsoft.AspNet.SignalR;

We're going to keep track of the count of requests and failed requests over time, with a resolution of no more than every 30 seconds, in memory on the server.
Obviously, if one wants to scale across multiple servers, this is way too naive and one should choose an out-of-process shared key-value store that goes across servers. However, for our purpose, this will be fine. Let's add a using statement at the top, as shown here:

using System.Collections.Generic;

At the top of the class, add the two dictionaries that we will use to hold this information:

static Dictionary<string, int> _requestsLog = new Dictionary<string, int>();
static Dictionary<string, int> _failedRequestsLog = new Dictionary<string, int>();

In our client, we want to access these logs at startup. So let's add two methods to do so:

public Dictionary<string, int> GetRequests()
{
    return _requestsLog;
}

public Dictionary<string, int> GetFailedRequests()
{
    return _failedRequestsLog;
}

Remember the resolution of only keeping track of the number of requests per 30 seconds at a time. There is no default mechanism in the .NET Framework to do this, so we need to add a few helper methods to deal with rounding of time. Let's add a class called DateTimeRounding at the root of the project. Mark the class as a public static class and put the following extension methods in the class:

public static DateTime RoundUp(this DateTime dt, TimeSpan d)
{
    var delta = (d.Ticks - (dt.Ticks % d.Ticks)) % d.Ticks;
    return new DateTime(dt.Ticks + delta);
}

public static DateTime RoundDown(this DateTime dt, TimeSpan d)
{
    var delta = dt.Ticks % d.Ticks;
    return new DateTime(dt.Ticks - delta);
}

public static DateTime RoundToNearest(this DateTime dt, TimeSpan d)
{
    var delta = dt.Ticks % d.Ticks;
    bool roundUp = delta > d.Ticks / 2;

    return roundUp ? dt.RoundUp(d) : dt.RoundDown(d);
}

Let's go back to the RequestStatisticsHub class and add some more functionality now so that we can deal with rounding of time:

static void Register(Dictionary<string, int> log, Action<dynamic, string, int> hubCallback)
{
    var now = DateTime.Now.RoundToNearest(TimeSpan.FromSeconds(30));
    var key = now.ToString("HH:mm");

    if (log.ContainsKey(key))
        log[key] = log[key] + 1;
    else
        log[key] = 1;

    var hub = GlobalHost.ConnectionManager.GetHubContext<RequestStatisticsHub>();
    hubCallback(hub.Clients.All, key, log[key]);
}

public static void Request()
{
    Register(_requestsLog, (hub, key, value) => hub.requestCountChanged(key, value));
}

public static void FailedRequest()
{
    Register(_failedRequestsLog, (hub, key, value) => hub.failedRequestCountChanged(key, value));
}

This enables us to have a place to call in order to report requests, and these get published back to any clients connected to this particular hub. Note the usage of GlobalHost and its ConnectionManager property. When we want to get a hub instance and we are not in the hub context of a method being called from a client, we use ConnectionManager to get it. It gives us a proxy for the hub and enables us to call methods on any connected client.

Naively dealing with requests

With all this in place, we will be able to easily and naively deal with what we consider correct and failed requests. Let's add a Global.asax file by right-clicking on the project in Solution Explorer and selecting New Item from Add. Navigate to Web and find Global Application Class, then click on Add.
In the new file, we want to replace the Application_AuthenticateRequest method with the following code snippet (you will also need a using System.IO; statement at the top of the file for File.Exists to resolve):

protected void Application_AuthenticateRequest(object sender, EventArgs e)
{
    var path = HttpContext.Current.Request.Path;
    if (path == "/") path = "index.html";

    if (path.ToLowerInvariant().IndexOf(".html") < 0) return;

    var physicalPath = HttpContext.Current.Request.MapPath(path);
    if (File.Exists(physicalPath))
    {
        RequestStatisticsHub.Request();
    }
    else
    {
        RequestStatisticsHub.FailedRequest();
    }
}

Basically, with this, we are only measuring requests that have .html in their path, and if the path is just "/", we assume it is index.html. Any file that does not exist is accordingly considered an error, typically a 404, and we register it as a failed request.

Bringing it all back to the client

With the server taken care of, we can start consuming all this in the client. We will now be heading down the path of creating a ViewModel and hooking everything up.

ViewModel

Let's start by adding a JavaScript file sitting next to our index.html file at the root level of the project, and call it index.js. This file will represent our ViewModel. Also, it will be responsible for setting up KnockoutJS so that the ViewModel is in fact activated and applied to the page. As we only have this one page for this sample, this will be fine. Let's start by hooking up the jQuery document ready function:

$(function() {
});

Inside the function created here, we will enter our viewModel definition, which will start off being an empty one:

var viewModel = function() {
};

KnockoutJS has a function to apply a viewModel to the document, meaning that the document or body will be associated with the viewModel instance given. Right under the definition of viewModel, add the following line:

ko.applyBindings(new viewModel());

Compiling this and running it should at the very least not give you any errors, but nothing more than a header saying My Dashboard. So, we need to liven this up a bit. Inside the viewModel function definition, add the following code snippet:

var self = this;
this.requests = ko.observableArray();
this.failedRequests = ko.observableArray();

We store a reference to this in a variable called self. This will help us with scoping issues later on. The arrays we added are now KnockoutJS observable arrays, which allow the view or any BindingHandler to observe the changes coming in. Both ko.observableArray() and ko.observable() return a new function. So, if you want to access any value held in them, you must unwrap it by calling the function, something that might seem counterintuitive at first, as you might consider your variable to be just another property. However, for the observableArray(), KnockoutJS adds most of the functions found in the array type in JavaScript, and they can be used directly on the function without unwrapping. If you look at a variable that is an observableArray in the console of the browser, you'll see that it looks as if it actually is just any array. This is not really true though; to get to the values, you will have to unwrap it by adding () after accessing the variable. However, all the functions you're used to having on an array are here. Let's add a function that will know how to handle an entry into the viewModel function.
An entry coming in is either an existing one or a new one; the key of the entry is the giveaway to decide:

function handleEntry(log, key, value) {
    var result = log().some(function (entry) {
        if (entry[0] == key) {
            entry[1](value);
            return true;
        }
        return false;
    });

    if (result !== true) {
        log.push([key, ko.observable(value)]);
    }
};

The some() call returns true as soon as an existing entry with the same key is found and its observable value is updated; if no entry was found, a new one is pushed. Let's set up the hub and add the following code to the viewModel function:

var hub = $.connection.requestStatisticsHub;
var initializedCount = 0;

hub.client.requestCountChanged = function (key, value) {
    if (initializedCount < 2) return;
    handleEntry(self.requests, key, value);
}

hub.client.failedRequestCountChanged = function (key, value) {
    if (initializedCount < 2) return;
    handleEntry(self.failedRequests, key, value);
}

You might notice the initializedCount variable. Its purpose is to avoid dealing with incoming notifications until the ViewModel is completely initialized, which comes next. Add the following code snippet to the viewModel function:

$.connection.hub.start().done(function () {
    hub.server.getRequests().done(function (requests) {
        for (var property in requests) {
            handleEntry(self.requests, property, requests[property]);
        }

        initializedCount++;
    });
    hub.server.getFailedRequests().done(function (requests) {
        for (var property in requests) {
            handleEntry(self.failedRequests, property, requests[property]);
        }

        initializedCount++;
    });
});

We should now have enough logic in our viewModel function to actually be able to get any requests already sitting there and also respond to new ones coming in.

BindingHandler

The key element of KnockoutJS is its BindingHandler mechanism. In KnockoutJS, everything starts with a data-bind="" attribute on an element in the HTML view. Inside the attribute, one puts binding expressions, and the BindingHandlers are the key to this. Every expression starts with the name of the handler. For instance, if you have an <input> tag and you want to get the value from the input into a property on the ViewModel, you would use the BindingHandler value. There are a few BindingHandlers out of the box to deal with the common scenarios (text, value, foreach, and more). All of the BindingHandlers are very well documented on the KnockoutJS site. For this sample, we will actually create our own BindingHandler. KnockoutJS is highly extensible and allows you to do just this, amongst other extensibility points. Let's add a JavaScript file called googleCharts.js at the root of the project. Inside it, add the following code:

google.load('visualization', '1.0', { 'packages': ['corechart'] });

This will tell the Google API to enable the charting package. The next thing we want to do is to define the BindingHandler. Any handler has the option of setting up an init function and an update function. The init function should only occur once, when it's first initialized. Actually, it's when the binding context is set. If the parent binding context of the element changes, it will be called again. The update function will be called whenever there is a change in one or more of the observables that the binding expression refers to. For our sample, we will use the init function only and actually respond to changes manually, because we have a more involved scenario than what the default mechanism would provide us with.
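Before we write the real handler, this is what the smallest possible custom handler with both functions could look like. It is a throwaway illustration only; the logValue name and its behavior are made up and are not used anywhere in the dashboard:

// A bare-bones custom BindingHandler, only to show the shape of the two functions
ko.bindingHandlers.logValue = {
    init: function (element, valueAccessor) {
        // called once, when the binding is first applied to the element
        console.log('initial value:', ko.unwrap(valueAccessor()));
    },
    update: function (element, valueAccessor) {
        // called when the binding is applied, and again every time an observable
        // used in the expression changes
        element.textContent = ko.unwrap(valueAccessor());
    }
};

// In markup it would be used as: <div data-bind="logValue: someObservable"></div>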
The update function that you can add to a BindingHandler has the exact same signature as the init function; the difference is when it gets called. Let's add the following code underneath the load call:

ko.bindingHandlers.lineChart = {
    init: function (element, valueAccessor, allValueAccessors, viewModel, bindingContext) {
    }
};

This is the core structure of a BindingHandler. As you can see, we've named the BindingHandler lineChart. This is the name we will use in our view later on. The signatures of init and update are the same. The first parameter represents the element that holds the binding expression, whereas the second valueAccessor parameter holds a function that enables us to access the value, which is a result of the expression. KnockoutJS deals with the expression internally, parses any expression, and figures out how to expand any values, and so on. Add the following code into the init function:

var optionsInput = valueAccessor();

var options = {
    title: optionsInput.title,
    width: optionsInput.width || 300,
    height: optionsInput.height || 300,
    backgroundColor: 'transparent',
    animation: {
        duration: 1000,
        easing: 'out'
    }
};

var dataHash = {};

var chart = new google.visualization.LineChart(element);
var data = new google.visualization.DataTable();
data.addColumn('string', 'x');
data.addColumn('number', 'y');

function addRow(row, rowIndex) {
    var value = row[1];
    if (ko.isObservable(value)) {
        value.subscribe(function (newValue) {
            data.setValue(rowIndex, 1, newValue);
            chart.draw(data, options);
        });
    }

    var actualValue = ko.unwrap(value);
    data.addRow([row[0], actualValue]);

    dataHash[row[0]] = actualValue;
};

optionsInput.data().forEach(addRow);

optionsInput.data.subscribe(function (newValue) {
    newValue.forEach(function (row, rowIndex) {
        if (!dataHash.hasOwnProperty(row[0])) {
            addRow(row, rowIndex);
        }
    });

    chart.draw(data, options);
});

chart.draw(data, options);

As you can see, observables have a function called subscribe(), which is the same for both an observable array and a regular observable. The code adds a subscription to the array itself; if there is any change to the array, we will find the change and add any new row to the chart. In addition, when we create a new row, we subscribe to any change in its value so that we can update the chart. In the ViewModel, the values were converted into observable values to accommodate this.

View

Go back to the index.html file; we need the UI for the two charts we're going to have. Plus, we need to get both the new BindingHandler and the ViewModel loaded. Add the following script references after the last script reference already present, as shown here:

<script type="text/javascript" src="googleCharts.js"></script>
<script type="text/javascript" src="index.js"></script>

Inside the <body> tag, below the header, we want to add a bootstrap container and a row to hold two metro-styled tiles and utilize our new BindingHandler.
Also, we want a footer sitting at the bottom, as shown in the following code:

<div class="container">
    <div class="row">
        <div class="col-sm-6 col-md-4">
            <div class="thumbnail tile tile-green-sea tile-large">
                <div data-bind="lineChart: { title: 'Web Requests', width: 300, height: 300, data: requests }"></div>
            </div>
        </div>

        <div class="col-sm-6 col-md-4">
            <div class="thumbnail tile tile-pomegranate tile-large">
                <div data-bind="lineChart: { title: 'Failed Web Requests', width: 300, height: 300, data: failedRequests }"></div>
            </div>
        </div>
    </div>

    <hr />
    <footer class="bs-footer" role="contentinfo">
        <div class="container">
            The Dashboard
        </div>
    </footer>
</div>

Note that data: requests and data: failedRequests are part of the binding expressions. These will be handled and resolved by KnockoutJS internally and pointed to the observable arrays on the ViewModel. The other properties are options that go into the BindingHandler and are forwarded to the Google Charting APIs.

Trying it all out

Running the preceding code (Ctrl + F5) should yield the following result:

If you open a second browser and go to the same URL, you will see the change in the chart in real time. Waiting for approximately 30 seconds and refreshing the browser should add a second point automatically and also animate the chart accordingly. Typing a URL with a file that does not exist should have the same effect on the failed requests chart.

Summary

In this article, we had a brief encounter with MVVM as a pattern with the sole purpose of establishing good practices for your client code. We added this to a single page application setting, sprinkling SignalR on top to communicate from the server to any connected client.

Resources for Article:

Further resources on this subject:

Using R for Statistics Research and Graphics? [article]
Aspects Data Manipulation in R [article]
Learning Data Analytics R and Hadoop [article]
Applications of WebRTC

Packt
27 Feb 2015
20 min read
This article is by Andrii Sergiienko, the author of the book WebRTC Cookbook. WebRTC is a relatively new and revolutionary technology that opens new horizons in the area of interactive applications and services. Most of the popular web browsers support it natively (such as Chrome and Firefox) or via extensions (such as Safari). Mobile platforms such as Android and iOS allow you to develop native WebRTC applications. In this article, we will cover the following recipes:

Creating a multiuser conference using WebRTCO
Taking a screenshot using WebRTC
Compiling and running a demo for Android

(For more resources related to this topic, see here.)

Creating a multiuser conference using WebRTCO

In this recipe, we will create a simple application that supports a multiuser videoconference. We will do it using WebRTCO—an open source JavaScript framework for developing WebRTC applications.

Getting ready

For this recipe, you should have a web server installed and configured. The application we will create can work while running on the local filesystem, but it is more convenient to use it via the web server. To create the application, we will use the signaling server located on the framework's homepage. The framework is open source, so you can download the signaling server from GitHub and install it locally on your machine. GitHub's page for the project can be found at https://github.com/Oslikas/WebRTCO.

How to do it…

The following recipe is built on the framework's infrastructure. We will use the framework's signaling server. What we need to do is include the framework's code and do some initialization, as follows:

Create an HTML file and add common HTML heads:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="utf-8">

Add some style definitions to make the web page look nicer:

    <style type="text/css">
        video {
            width: 384px;
            height: 288px;
            border: 1px solid black;
            text-align: center;
        }
        .container {
            width: 780px;
            margin: 0 auto;
        }
    </style>

Include the framework in your project:

<script type="text/javascript" src="https://cdn.oslikas.com/js/WebRTCO-1.0.0-beta-min.js" charset="utf-8"></script>
</head>

Define the onLoad function—it will be called after the web page is loaded. In this function, we will do some preliminary initialization work:

<body onload="onLoad();">

Define HTML containers where the local video will be placed:

<div class="container">
    <video id="localVideo"></video>
</div>

Define a place where the remote video will be added. Note that we don't create HTML video objects here; we just define a separate div.
Further, video objects will be created and added to the page by the framework automatically:

<div class="container" id="remoteVideos"></div>
<div class="container">

Create the controls for the chat area:

<div id="chat_area" style="width:100%; height:250px; overflow: auto; margin:0 auto 0 auto; border:1px solid rgb(200,200,200); background: rgb(250,250,250);"></div>
</div>
<div class="container" id="div_chat_input">
<input type="text" class="search-query" placeholder="chat here" name="msgline" id="chat_input">
<input type="submit" class="btn" id="chat_submit_btn" onclick="sendChatTxt();"/>
</div>

Initialize a few variables:

<script type="text/javascript">
    var videoCount = 0;
    var webrtco = null;
    var parent = document.getElementById('remoteVideos');
    var chatArea = document.getElementById("chat_area");
    var chatColorLocal = "#468847";
    var chatColorRemote = "#3a87ad";

Define a function that will be called by the framework when a new remote peer is connected. This function creates a new video object and puts it on the page:

    function getRemoteVideo(remPid) {
        var video = document.createElement('video');
        var id = 'remoteVideo_' + remPid;
        video.setAttribute('id', id);
        parent.appendChild(video);
        return video;
    }

Create the onLoad function. It initializes some variables and resizes the controls on the web page. Note that this is not mandatory, and we do it just to make the demo page look nicer:

    function onLoad() {
        var divChatInput = document.getElementById("div_chat_input");
        var divChatInputWidth = divChatInput.offsetWidth;
        var chatSubmitButton = document.getElementById("chat_submit_btn");
        var chatSubmitButtonWidth = chatSubmitButton.offsetWidth;
        var chatInput = document.getElementById("chat_input");
        var chatInputWidth = divChatInputWidth - chatSubmitButtonWidth - 40;
        chatInput.setAttribute("style", "width:" + chatInputWidth + "px");
        chatInput.style.width = chatInputWidth + 'px';
        var lv = document.getElementById("localVideo");

Create a new WebRTCO object and start the application. After this point, the framework will start the signaling connection, get access to the user's media, and will be ready for incoming connections from remote peers:

        webrtco = new WebRTCO('wss://www.webrtcexample.com/signalling', lv, OnRoomReceived, onChatMsgReceived, getRemoteVideo, OnBye);
    };

Here, the first parameter of the function is the URL of the signaling server. In this example, we used the signaling server provided by the framework. However, you can install your own signaling server and use an appropriate URL. The second parameter is the local video object ID. Then, we supply functions to process the received room, incoming chat messages, and remote video streams. The last parameter is the function that will be called when one of the remote peers has been disconnected.

The following function will be called when a remote peer has closed the connection. It will remove video objects that have become outdated:

    function OnBye(pid) {
        var video = document.getElementById("remoteVideo_" + pid);
        if (null !== video) video.remove();
    };

We also need a function that will create a URL to share with other peers so that they can connect to the virtual room.
The following piece of code represents such a function:

function OnRoomReceived(room) {
    addChatTxt("Now, if somebody wants to join you, they should use this link: <a href=\"" + window.location.href + "?room=" + room + "\">" + window.location.href + "?room=" + room + "</a>", chatColorRemote);
};

The following function prints some text in the chat area. We will also use it to print the URL to share with remote peers:

    function addChatTxt(msg, msgColor) {
        var txt = "<font color=" + msgColor + ">" + getTime() + msg + "</font><br/>";
        chatArea.innerHTML = chatArea.innerHTML + txt;
        chatArea.scrollTop = chatArea.scrollHeight;
    };

The next function is a callback that is called by the framework when a peer has sent us a message. This function will print the message in the chat area:

    function onChatMsgReceived(msg) {
        addChatTxt(msg, chatColorRemote);
    };

To send messages to remote peers, we will create another function, which is represented in the following code:

    function sendChatTxt() {
        var msgline = document.getElementById("chat_input");
        var msg = msgline.value;
        addChatTxt(msg, chatColorLocal);
        msgline.value = '';
        webrtco.API_sendPutChatMsg(msg);
    };

We also want to print the time while printing messages; so we have a special function that formats time data appropriately:

    function getTime() {
        var d = new Date();
        var c_h = d.getHours();
        var c_m = d.getMinutes();
        var c_s = d.getSeconds();

        if (c_h < 10) { c_h = "0" + c_h; }
        if (c_m < 10) { c_m = "0" + c_m; }
        if (c_s < 10) { c_s = "0" + c_s; }
        return c_h + ":" + c_m + ":" + c_s + ": ";
    };

We have some helper code to make our life easier. We will use it while removing obsolete video objects after remote peers are disconnected:

    Element.prototype.remove = function() {
        this.parentElement.removeChild(this);
    }
    NodeList.prototype.remove = HTMLCollection.prototype.remove = function() {
        for(var i = 0, len = this.length; i < len; i++) {
            if(this[i] && this[i].parentElement) {
                this[i].parentElement.removeChild(this[i]);
            }
        }
    }
</script>
</body>
</html>

Now, save the file and put it on the web server, where it can be accessed from a web browser.

How it works…

Open a web browser and navigate to the place where the file is located on the web server. You will see an image from the web camera and a chat area beneath it. At this stage, the application has created the WebRTCO object and initiated the signaling connection. If everything is good, you will see a URL in the chat area. Open this URL in a new browser window or on another machine—the framework will create a new video object for every new peer and will add it to the web page. The number of peers is not limited by the application. In the following screenshot, I have used three peers: two web browser windows on the same machine and a notebook as the third peer:

Taking a screenshot using WebRTC

Sometimes, it can be useful to be able to take screenshots from a video during videoconferencing. In this recipe, we will implement such a feature.

Getting ready

No specific preparation is necessary for this recipe. You can take any basic WebRTC videoconferencing application. We will add some code to the HTML and JavaScript parts of the application.

How to do it…

Follow these steps:

First of all, add image and canvas objects to the web page of the application.
We will use these objects to take screenshots and display them on the page:

<img id="localScreenshot" src="">
<canvas style="display:none;" id="localCanvas"></canvas>

Next, you have to add a button to the web page. After clicking on this button, the appropriate function will be called to take the screenshot from the local video stream:

<button onclick="btn_screenshot()" id="btn_screenshot">Make a screenshot</button>

Finally, we need to implement the screenshot-taking function:

function btn_screenshot() {
    var v = document.getElementById("localVideo");
    var s = document.getElementById("localScreenshot");
    var c = document.getElementById("localCanvas");
    var ctx = c.getContext("2d");

Draw an image on the canvas object—the image will be taken from the video object:

    ctx.drawImage(v, 0, 0);

Now, take the content of the canvas, convert it to a data URL, and insert the value into the src attribute of the image object. As a result, the image object will show us the taken screenshot:

    s.src = c.toDataURL('image/png');
}

That is it. Save the file and open the application in a web browser. Now, when you click on the Make a screenshot button, you will see the screenshot in the appropriate image object on the web page. You can save the screenshot to the disk using right-click and the pop-up menu.

How it works…

We use the canvas object to take a frame of the video object. Then, we convert the canvas' data to a data URL and assign this value to the src attribute of the image object. After that, the image object refers to the video frame that is stored in the canvas.

Compiling and running a demo for Android

Here, you will learn how to build a native demo WebRTC application for Android. Unfortunately, the supplied demo application from Google doesn't contain any IDE-specific project files, so you will have to deal with console scripts and commands throughout the building process.

Getting ready

We will need to check whether we have all the necessary libraries and packages installed on the work machine. For this recipe, I used a Linux box—Ubuntu 14.04.1 x64. So all the commands that might be specific to the OS will be relevant to Ubuntu. Nevertheless, using Linux is not mandatory and you can use Windows or Mac OS X. If you're using Linux, it should be 64-bit based. Otherwise, you most likely won't be able to compile the Android code.

Preparing the system

First of all, you need to install the necessary system packages:

sudo apt-get install git git-svn subversion g++ pkg-config gtk+-2.0 libnss3-dev libudev-dev ant gcc-multilib lib32z1 lib32stdc++6

Installing Oracle JDK

By default, Ubuntu is supplied with OpenJDK, but it is highly recommended that you install an Oracle JDK instead. Otherwise, you can face issues while building WebRTC applications for Android. Another thing that you should keep in mind is that you should probably use Oracle JDK version 1.6—other versions (in particular, 1.7 and 1.8) might not be compatible with the WebRTC code base. This will probably be fixed in the future, but in my case, only Oracle JDK 1.6 was able to build the demo successfully. Download the Oracle JDK from its home page at http://www.oracle.com/technetwork/java/javase/downloads/index.html. In case there is no download link for such an old JDK, you can try another URL: http://www.oracle.com/technetwork/java/javasebusiness/downloads/java-archive-downloads-javase6-419409.html. Oracle will probably ask you to sign in or register first. You will be able to download anything from their archive.
Install the downloaded JDK:

sudo mkdir -p /usr/lib/jvm
cd /usr/lib/jvm && sudo /bin/sh ~/jdk-6u45-linux-x64.bin --noregister

Here, I assume that you downloaded the JDK package into the home directory. Register the JDK in the system:

sudo update-alternatives --install /usr/bin/javac javac /usr/lib/jvm/jdk1.6.0_45/bin/javac 50000
sudo update-alternatives --install /usr/bin/java java /usr/lib/jvm/jdk1.6.0_45/bin/java 50000
sudo update-alternatives --config javac
sudo update-alternatives --config java
cd /usr/lib
sudo ln -s /usr/lib/jvm/jdk1.6.0_45 java-6-sun
export JAVA_HOME=/usr/lib/jvm/jdk1.6.0_45/

Test the Java version:

java -version

You should see something like Java HotSpot on the screen—it means that the correct JVM is installed.

Getting the WebRTC source code

Perform the following steps to get the WebRTC source code:

Download and prepare Google Developer Tools:

mkdir -p ~/dev && cd ~/dev
git clone https://chromium.googlesource.com/chromium/tools/depot_tools.git
export PATH=`pwd`/depot_tools:"$PATH"

Download the WebRTC source code:

gclient config http://webrtc.googlecode.com/svn/trunk
echo "target_os = ['android', 'unix']" >> .gclient
gclient sync

The last command can take a couple of minutes (actually, it depends on your Internet connection speed), as you will be downloading several gigabytes of source code.

Installing Android Developer Tools

To develop Android applications, you should have Android Developer Tools (ADT) installed. This SDK contains Android-specific libraries and tools that are necessary to build and develop native software for Android. Perform the following steps to install ADT:

Download ADT from its home page http://developer.android.com/sdk/index.html#download.

Unpack ADT to a folder:

cd ~/dev
unzip ~/adt-bundle-linux-x86_64-20140702.zip

Set up the ANDROID_HOME environment variable:

export ANDROID_HOME=`pwd`/adt-bundle-linux-x86_64-20140702/sdk

How to do it…

After you've prepared the environment and installed the necessary system components and packages, you can continue to build the demo application:

Prepare Android-specific build dependencies:

cd ~/dev/trunk
source ./build/android/envsetup.sh

Configure the build scripts:

export GYP_DEFINES="$GYP_DEFINES build_with_libjingle=1 build_with_chromium=0 libjingle_java=1 OS=android"
gclient runhooks

Build the WebRTC code with the demo application:

ninja -C out/Debug -j 5 AppRTCDemo

After the last command, you can find the compiled Android packet with the demo application at ~/dev/trunk/out/Debug/AppRTCDemo-debug.apk.

Running on the Android simulator

Follow these steps to run an application on the Android simulator:

Run Android SDK manager and install the necessary Android components:

$ANDROID_HOME/tools/android sdk

Choose at least Android 4.x—lower versions don't have WebRTC support. In the following screenshot, I've chosen Android SDK 4.4 and 4.2:

Create an Android virtual device:

cd $ANDROID_HOME/tools
./android avd &

The last command executes the Android SDK tool to create and maintain virtual devices. Create a new virtual device using this tool.
You can see an example in the following screenshot:

Start the emulator using the virtual device you just created:

./emulator -avd emu1 &

This can take a couple of seconds (or even minutes), after which you should see a typical Android device home screen, like in the following screenshot:

Check whether the virtual device is simulated and running:

cd $ANDROID_HOME/platform-tools
./adb devices

You should see something like the following:

List of devices attached
emulator-5554   device

This means that the virtual device you just created is OK and running, so we can use it to test our demo application. Install the demo application on the virtual device:

./adb install ~/dev/trunk/out/Debug/AppRTCDemo-debug.apk

You should see something like the following:

636 KB/s (2507985 bytes in 3.848s)
pkg: /data/local/tmp/AppRTCDemo-debug.apk
Success

This means that the application is transferred to the virtual device and is ready to be started. Switch to the simulator window; you should see the demo application's icon. Execute it as you would on a real Android device. In the following screenshot, you can see the installed demo application AppRTC:

While trying to launch the application, you might see an error message with a Java runtime exception referring to GLSurfaceView. In this case, you probably need to switch to the Use Host GPU option while creating the virtual device with the Android Virtual Device (AVD) tool.

Fixing a bug with GLSurfaceView

Sometimes, if you're using an Android simulator with a virtual device on the ARM architecture, you can be faced with an issue when the application says No config chosen, throws an exception, and exits. This is a known defect in the Android WebRTC code and its status can be tracked at https://code.google.com/p/android/issues/detail?id=43209. The following steps can help you fix this bug in the original demo application:

Go to the ~/dev/trunk/talk/examples/android/src/org/appspot/apprtc folder and edit the AppRTCDemoActivity.java file. Look for the following line of code:

vsv = new AppRTCGLView(this, displaySize);

Right after this line, add the following line of code:

vsv.setEGLConfigChooser(8,8,8,8,16,16);

You will need to recompile the application:

cd ~/dev/trunk
ninja -C out/Debug AppRTCDemo

Now you can deploy your application and the issue will not appear anymore.

Running on a physical Android device

For deploying applications on an Android device, you don't need to have any developer certificates (like in the case of iOS devices). So if you have a physical Android device, it would probably be easier to debug and run the demo application on the device rather than on the simulator.

Connect the Android device to the machine using a USB cable.

On the Android device, switch the USB debug mode on.

Check whether your machine sees your device:

cd $ANDROID_HOME/platform-tools
./adb devices

If the device is connected and the machine sees it, you should see the device's name in the result print of the preceding command:

List of devices attached
QO4721C35410   device

Deploy the application onto the device:

cd $ANDROID_HOME/platform-tools
./adb -d install ~/dev/trunk/out/Debug/AppRTCDemo-debug.apk

You will get the following output:

3016 KB/s (2508031 bytes in 0.812s)
pkg: /data/local/tmp/AppRTCDemo-debug.apk
Success

After that, you should see the AppRTC demo application's icon on the device:

After you have started the application, you should see a prompt to enter a room number.
At this stage, go to http://apprtc.webrtc.org in your web browser on another machine; you will see an image from your camera. Copy the room number from the URL string and enter it in the demo application on the Android device. Your Android device and the other machine will try to establish a peer-to-peer connection, which might take some time. In the following screenshot, you can see the image on the desktop after the connection with the Android smartphone has been established:

Here, the big image represents what is transmitted from the front camera of the Android smartphone; the small image depicts the image from the notebook's web camera. So both devices have established a direct connection and transmit audio and video to each other. The following screenshot represents what was seen on the Android device:

There's more…

The original demo doesn't contain any ready-to-use IDE project files, so you have to deal with console commands and scripts throughout the development process. You can make your life a bit easier if you use some third-party tools that simplify the building process. Such tools can be found at http://tech.pristine.io/build-android-apprtc.

Summary

In this article, we have learned to create a multiuser conference using WebRTCO, take a screenshot using WebRTC, and compile and run a demo for Android.

Resources for Article:

Further resources on this subject:

WebRTC with SIP and IMS [article]
Using the WebRTC Data API [article]
Applying WebRTC for Education and E-Learning [article]