How-To Tutorials - Web Development

What is Naïve Bayes classifier?

Packt
22 Feb 2016
9 min read
The name Naïve Bayes comes from the basic assumption in the model that the probability of a particular feature \(X_i\) is independent of any other feature \(X_j\), given the class label \(C_K\). This implies the following:

\[ P(X_1, X_2, \dots, X_n \mid C_K) = \prod_{i=1}^{n} P(X_i \mid C_K) \]

Using this assumption and the Bayes rule, one can show that the probability of class \(C_K\), given features \(\{X_1, X_2, \dots, X_n\}\), is given by:

\[ P(C_K \mid X_1, X_2, \dots, X_n) = \frac{P(C_K) \prod_{i=1}^{n} P(X_i \mid C_K)}{P(X_1, X_2, \dots, X_n)} \]

Here, \(P(X_1, X_2, \dots, X_n)\) is the normalization term obtained by summing the numerator over all values of K. It is also called the Bayesian evidence or partition function Z. The classifier selects as the target class the label that maximizes the posterior class probability \(P(C_K \mid \{X_1, X_2, \dots, X_n\})\):

\[ \hat{C} = \underset{K}{\arg\max}\; P(C_K \mid X_1, X_2, \dots, X_n) \]

The Naïve Bayes classifier is a baseline classifier for document classification. One reason for this is that the underlying assumption, that each feature (word or m-gram) is independent of the others given the class label, typically holds well for text. Another reason is that the Naïve Bayes classifier scales well when there is a large number of documents.

There are two common implementations of Naïve Bayes. In Bernoulli Naïve Bayes, features are binary variables that encode whether a feature (m-gram) is present or absent in a document. In multinomial Naïve Bayes, the features are frequencies of m-grams in a document. To avoid issues when a frequency is zero, Laplace smoothing is applied to the feature counts by adding 1 to each count.

Let's look at multinomial Naïve Bayes in some detail. Let \(n_{Ki}\) be the number of times the feature \(X_i\) occurred in the class \(C_K\) in the training data. Then, the likelihood function of observing a feature vector \(X = \{X_1, X_2, \dots, X_n\}\), given a class label \(C_K\), is given by:

\[ P(X \mid C_K) \propto \prod_{i=1}^{n} \theta_{Ki}^{X_i} \]

Here, \(\theta_{Ki}\) is the probability of observing the feature \(X_i\) in the class \(C_K\). Using the Bayes rule, the posterior probability of observing the class \(C_K\), given a feature vector X, is given by:

\[ P(C_K \mid X) = \frac{P(C_K) \prod_{i=1}^{n} \theta_{Ki}^{X_i}}{Z} \]

Taking the logarithm on both sides and ignoring the constant term Z, we get the following:

\[ \log P(C_K \mid X) \propto \log P(C_K) + \sum_{i=1}^{n} X_i \log \theta_{Ki} \]

So, by taking the logarithm of the posterior distribution, we have converted the problem into a linear model, with the \(\log \theta_{Ki}\) values as the coefficients to be determined from the data. This can be solved easily. Generally, instead of raw term frequencies, one uses TF-IDF (term frequency multiplied by inverse document frequency), with the document length normalized, to improve the performance of the model.

The R package e1071 (Misc Functions of the Department of Statistics, TU Wien) contains an R implementation of Naïve Bayes. For this article, we will use the SMS spam dataset from the UCI Machine Learning repository (reference 1 in the References section of this article). The dataset consists of 425 SMS spam messages collected from the UK forum Grumbletext, where consumers can submit spam SMS messages. The dataset also contains 3,375 normal (ham) SMS messages from the NUS SMS corpus maintained by the National University of Singapore. The dataset can be downloaded from the UCI Machine Learning repository (https://archive.ics.uci.edu/ml/datasets/SMS+Spam+Collection). Let's say that we have saved this as the file SMSSpamCollection.txt in the working directory of R (in practice, you may need to open the file in Excel and re-save it as a tab-delimited file for R to read it properly).
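As an aside, the raw UCI file can often be read directly, without the Excel round trip, by turning off quote handling. The following is a minimal sketch under that assumption; the col.names labels are our own, not from the article:

> # Read the raw UCI file: tab-separated, no header row, and quoting
> # disabled so apostrophes inside SMS texts don't break parsing.
> spamdata <- read.table("SMSSpamCollection.txt",
                         sep = "\t", header = FALSE, quote = "",
                         stringsAsFactors = FALSE,
                         col.names = c("type", "text"))

The commands below follow the article's original, Excel-prepared route.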
The command to read the file into R would then be the following:

> spamdata <- read.table("SMSSpamCollection.txt", sep = "\t", stringsAsFactors = default.stringsAsFactors())

We will first separate the dependent variable y and the independent variables x, and split the dataset into training and testing sets in the ratio 80:20, using the following R commands:

> samp <- sample.int(nrow(spamdata), as.integer(nrow(spamdata) * 0.2), replace = F)
> spamTest <- spamdata[samp, ]
> spamTrain <- spamdata[-samp, ]
> ytrain <- as.factor(spamTrain[, 1])
> ytest <- as.factor(spamTest[, 1])
> xtrain <- as.vector(spamTrain[, 2])
> xtest <- as.vector(spamTest[, 2])

Since we are dealing with text documents, we need to do some standard preprocessing before we can use the data for any machine learning models. We can use the tm package in R for this purpose. In the next section, we will describe this in some detail.

Text processing using the tm package

The tm package has methods for data import, corpus handling, preprocessing, metadata management, and creation of term-document matrices. Data can be imported into the tm package either from a directory, from a vector with each component a document, or from a data frame. The fundamental data structure in tm is an abstract collection of text documents called a Corpus. It has two implementations: one where the data is stored in memory, called VCorpus (volatile corpus), and one where the data is stored on the hard disk, called PCorpus (permanent corpus). We can create a corpus of our SMS spam dataset by using the following R commands; prior to this, you need to install the tm and SnowballC packages by using the install.packages("packagename") command in R:

> library(tm)
> library(SnowballC)
> xtrain <- VCorpus(VectorSource(xtrain))

First, we need to do some basic text processing, such as removing extra white space, changing all words to lowercase, removing stop words, and stemming the words. This can be achieved by using the following functions in the tm package:

> # remove extra white space
> xtrain <- tm_map(xtrain, stripWhitespace)
> # remove punctuation
> xtrain <- tm_map(xtrain, removePunctuation)
> # remove numbers
> xtrain <- tm_map(xtrain, removeNumbers)
> # change to lower case
> xtrain <- tm_map(xtrain, content_transformer(tolower))
> # remove stop words
> xtrain <- tm_map(xtrain, removeWords, stopwords("english"))
> # stem the document
> xtrain <- tm_map(xtrain, stemDocument)

Finally, the data is transformed into a form that can be consumed by machine learning models. This is the so-called document-term matrix form, where each document (an SMS in this case) is a row, the terms appearing across all documents are the columns, and the entry in each cell denotes how many times each word occurs in a given document:

> # create the document-term matrix
> xtrain <- as.data.frame.matrix(DocumentTermMatrix(xtrain))

The same set of processing steps is applied to the xtest dataset as well. The reason we converted y to factors and xtrain to a data frame is to match the input format for the Naïve Bayes classifier in the e1071 package.
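The xtest processing isn't listed in the article. One practical detail worth noting is that the test document-term matrix must be built against the training vocabulary, otherwise its columns won't line up with those of xtrain. A minimal sketch of one way to do this with tm's dictionary option follows; the dtmTrain variable is our own, and should be captured before xtrain is converted to a data frame:

> # keep the training DTM so its vocabulary can be reused
> dtmTrain <- DocumentTermMatrix(xtrain)
> xtrain <- as.data.frame.matrix(dtmTrain)
> # run the identical preprocessing pipeline on the test corpus
> xtest <- VCorpus(VectorSource(xtest))
> xtest <- tm_map(xtest, stripWhitespace)
> xtest <- tm_map(xtest, removePunctuation)
> xtest <- tm_map(xtest, removeNumbers)
> xtest <- tm_map(xtest, content_transformer(tolower))
> xtest <- tm_map(xtest, removeWords, stopwords("english"))
> xtest <- tm_map(xtest, stemDocument)
> # score against the training vocabulary only, so columns match
> xtest <- as.data.frame.matrix(DocumentTermMatrix(xtest,
           control = list(dictionary = Terms(dtmTrain))))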
Model training and prediction

You need to first install the e1071 package from CRAN. The naiveBayes() function can be used to train the Naïve Bayes model. The function can be called in two ways. The following is the first method:

> naiveBayes(formula, data, laplace = 0, ..., subset, na.action = na.pass)

Here, formula stands for the linear combination of independent variables used to predict the class:

> class ~ x1 + x2 + ...

Also, data stands for either a data frame or a contingency table consisting of categorical and numerical variables. If we have the class labels as a vector y and the dependent variables as a data frame x, then we can use the second method of calling the function, as follows:

> naiveBayes(x, y, laplace = 0, ...)

We will use the second method of calling in our example. Once we have a trained model, which is an R object of class naiveBayes, we can predict the classes of new instances as follows:

> predict(object, newdata, type = c("class", "raw"), threshold = 0.001, eps = 0, ...)

So, we can train the Naïve Bayes model on our training dataset and score on the test dataset by using the following commands:

> # train the Naive Bayes model
> nbmodel <- naiveBayes(xtrain, ytrain, laplace = 3)
> # predict using the trained model
> ypred.nb <- predict(nbmodel, xtest, type = "class", threshold = 0.075)
> # convert classes to 0 and 1 for plotting the ROC curve
> fconvert <- function(x) {
    if (x == "spam") { y <- 1 } else { y <- 0 }
    y
  }
> ytest1 <- sapply(ytest, fconvert, simplify = "array")
> ypred1 <- sapply(ypred.nb, fconvert, simplify = "array")
> library(pROC)
> roc(ytest1, ypred1, plot = T)

The ROC curve for this model and dataset (not reproduced here) is generated using the pROC package from CRAN:

> # confusion matrix
> confmat <- table(ytest, ypred.nb)
> confmat
       ypred.nb
ytest   ham  spam
  ham   143   139
  spam    9    35

From the ROC curve and the confusion matrix, one can choose the best threshold for the classifier, and the precision and recall metrics. Note that the example shown here is for illustration purposes only; the model needs to be tuned further to improve its accuracy.

We can also print some of the most frequent words (model features) occurring in the two classes, along with their posterior probabilities generated by the model. This gives a more intuitive feel for the model. The following R code does this job:

> tab <- nbmodel$tables
> fham <- function(x) {
    y <- x[1, 1]
    y
  }
> hamvec <- sapply(tab, fham, simplify = "array")
> hamvec <- sort(hamvec, decreasing = T)
> fspam <- function(x) {
    y <- x[2, 1]
    y
  }
> spamvec <- sapply(tab, fspam, simplify = "array")
> spamvec <- sort(spamvec, decreasing = T)
> prb <- cbind(spamvec, hamvec)
> print.table(prb)

The output table is as follows:

word     Prob(word|spam)   Prob(word|ham)
call     0.6994            0.4084
free     0.4294            0.3996
now      0.3865            0.3120
repli    0.2761            0.3094
text     0.2638            0.2840
spam     0.2270            0.2726
txt      0.2270            0.2594
get      0.2209            0.2182
stop     0.2086            0.2025

The table shows, for example, that given that a document is spam, the probability of the word call appearing in it is 0.6994, whereas the probability of the same word appearing in a normal document is only 0.4084.

Summary

In this article, we learned a basic and popular method for classification, Naïve Bayes, implemented using the Bayesian approach. For further information on Bayesian models, you can refer to:

https://www.packtpub.com/big-data-and-business-intelligence/data-analysis-r
https://www.packtpub.com/big-data-and-business-intelligence/building-probabilistic-graphical-models-python

Resources for Article:

Further resources on this subject:
Introducing Bayesian Inference [article]
Practical Applications of Deep Learning [article]
Machine learning in practice [article]

Component Composition

Packt
22 Feb 2016
38 min read
In this article, we understand how large-scale JavaScript applications amount to a series of communicating components. Composition is a big topic, and one that's relevant to scalable JavaScript code. When we start thinking about the composition of our components, we start to notice certain flaws in our design; limitations that prevent us from scaling in response to influencers. (For more resources related to this topic, see here.)

The composition of a component isn't random—there's a handful of prevalent patterns for JavaScript components. We'll begin the article with a look at some of these generic component types that encapsulate common patterns found in every web application. Understanding that components implement patterns is crucial for extending these generic components in a way that scales. It's one thing to get our component composition right from a purely technical standpoint; it's another to easily map these components to features. The same challenge holds true for components we've already implemented. The way we compose our code needs to provide a level of transparency, so that it's feasible to decompose our components and understand what they're doing, both at runtime and at design time.

Finally, we'll take a look at the idea of decoupling business logic from our components. This is nothing new; the idea of separation of concerns has been around for a long time. The challenge with JavaScript applications is that they touch so many things—it's difficult to clearly separate business logic from other implementation concerns. The way in which we organize our source code (relative to the components that use it) can have a dramatic effect on our ability to scale.

Generic component types

It's exceedingly unlikely that anyone, in this day and age, would set out to build a large-scale JavaScript application without the help of libraries, a framework, or both. Let's refer to these collectively as tools, since we're more interested in using the tools that help us scale, and not necessarily in which tools are better than others. At the end of the day, it's up to the development team to decide which tool is best for the application we're building, personal preferences aside.

Guiding factors in choosing the tools we use are the types of components they provide, and what these are capable of. For example, a larger web framework may have all the generic components we need. On the other hand, a functional programming utility library might provide a lot of the low-level functionality we need. How these things are composed into a cohesive feature that scales is for us to figure out. The idea is to find tools that expose generic implementations of the components we need. Often, we'll extend these components, building specific functionality that's unique to our application. This section walks through the most typical components we'd want in a large-scale JavaScript application.

Modules

Modules exist, in one form or another, in almost every programming language. Except in JavaScript. That's almost untrue, though—ECMAScript 6, in its final draft status at the time of this writing, introduces the notion of modules. However, there are tools out there today that allow us to modularize our code without relying on the script tag. Large-scale JavaScript code is still a relatively new thing. Things like the script tag weren't meant to address issues like modular code and dependency management. RequireJS is probably the most popular module loader and dependency resolver.
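To make this concrete, here's a minimal sketch of the AMD-style module definition that RequireJS consumes; the module names and the log() helper are our own illustration, not code from the book:

// util.js - an AMD module exporting a log() helper.
define('util', [], function () {
  return function log(message) {
    if (typeof console !== 'undefined') {
      console.log(message);
    }
  };
});

// main.js - declares a dependency on util; RequireJS loads and
// resolves the dependency before running this callback.
require(['util'], function (log) {
  log('Initializing...');
});

The ES6 listing later in this section expresses these same two modules using the upcoming module syntax.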
The fact that we need a library just to load modules into our front-end application speaks to the complexities involved. For example, module dependencies aren't a trivial matter when there's network latency and race conditions to consider.

Another option is to use a transpiler like Browserify. This approach is gaining traction because it lets us declare our modules using the CommonJS format. This format is used by NodeJS, and the upcoming ECMAScript module specification is a lot closer to CommonJS than to AMD. The advantage is that the code we write today has better compatibility with back-end JavaScript code, and with the future.

Some frameworks, like Angular or Marionette, have their own ideas of what modules are—albeit more abstract ideas. These modules are more about organizing our code than they are about tactfully delivering code from the server to the browser. These types of modules might even map better to other features of the framework. For example, if there's a centralized application instance that's used to manage our modules, the framework might provide a means to manage modules from the application. Take a look at the following diagram:

A global application component using modules as its building blocks. Modules can be small, containing only one feature, or large, containing several features.

This lets us perform higher-level tasks at the module level (things like disabling modules or configuring them with arguments). Essentially, modules speak for features. They're a packaging mechanism that allows us to encapsulate things about a given feature that the rest of the application doesn't care about. Modules help us scale our application by adding high-level operations to our features, by treating our features as the building blocks. Without modules, we'd have no meaningful way to do this.

The composition of modules looks different depending on the mechanism used to declare the module. A module could be straightforward, providing a namespace from which objects can be exported. Or, if we're using a specific framework's module flavor, there could be much more to it—like automatic event life cycles, or methods for performing boilerplate setup tasks. However we slice it, modules in the context of scalable JavaScript are a means to create larger building blocks, and a means to handle complex dependencies:

// main.js
// Imports a log() function from the util.js module.
import log from 'util.js';

log('Initializing...');

// util.js
// Exports a basic console.log() wrapper function.
'use strict';

export default function log(message) {
  if (console) {
    console.log(message);
  }
}

While it's easier to build large-scale applications with module-sized building blocks, it's also easier to tear a module out of an application and work with it in isolation. If our application is monolithic, or our modules are too plentiful and fine-grained, it's very difficult for us to excise problem spots from our code, or to test work in progress. Our component may function perfectly well on its own. It could have negative side effects somewhere else in the system, however. If we can remove pieces of the puzzle, one at a time and without too much effort, we can scale the troubleshooting process.

Routers

Any large-scale JavaScript application has a significant number of possible URIs. The URI is the address of the page that the user is looking at. They can navigate to this resource by clicking on links, or they may be taken to a new URI automatically by our code, perhaps in response to some user action.
The web has always relied on URIs, long before the advent of large-scale JavaScript applications. URIs point to resources, and resources can be just about anything. The larger the application, the more resources, and the more potential URIs. Router components are tools we use in the front end to listen for these URI change events and respond to them accordingly. There's less reliance on the back-end web servers parsing the URI and returning the new content. Most web sites still do this, but there are several disadvantages to this approach when it comes to building applications:

The browser triggers events when the URI changes, and the router component responds to these changes. The URI changes can be triggered from the history API, or from location.hash.

The main problem is that we want the UI to be portable—we want to be able to deploy it against any back end, and things should work. Since we're not assembling markup for the URI in the back end, it doesn't make sense to parse the URI in the back end either.

We declaratively specify all the URI patterns in our router components. We generally refer to these as routes. Think of a route as a blueprint, and a URI as an instance of that blueprint. This means that when the router receives a URI, it can correlate it to a route. That, in essence, is the responsibility of router components. This is easy with smaller applications, but when we're talking about scale, further deliberation on router design is in order.

As a starting point, we have to consider the URI mechanism we want to use. The two choices are basically listening to hash change events, or utilizing the history API. Using hash-bang URIs is probably the simplest approach. The history API, available in every modern browser, on the other hand, lets us format URIs without the hash-bang—they look like real URIs. The router component in the framework we're using may support only one or the other, thus simplifying the decision. Some support both URI approaches, in which case we need to decide which one works best for our application.

The next thing to consider about routing in our architecture is how to react to route changes. There are generally two approaches to this. The first is to declaratively bind a route to a callback function. This is ideal when the router doesn't have a lot of routes. The second approach is to trigger events when routes are activated. This means that there's nothing directly bound to the router. Instead, some other component listens for such an event. This approach is beneficial when there are lots of routes, because the router has no knowledge of the components, just the routes. Here's an example that shows a router component listening to route events:

// router.js
import Events from 'events.js';

// A router is a type of event broker: it can trigger routes,
// and listen to route changes.
export default class Router extends Events {

  // If a route configuration object is passed, then we iterate
  // over it, calling listen() on each route name. This is
  // translating from route specs to event listeners.
  constructor(routes) {
    super();
    if (routes != null) {
      for (let key of Object.keys(routes)) {
        this.listen(key, routes[key]);
      }
    }
  }

  // This is called when the caller is ready to start responding
  // to route events. We listen to the "onhashchange" window event.
  // We manually call our handler here to process the current route.
  start() {
    window.addEventListener('hashchange',
      this.onHashChange.bind(this));
    this.onHashChange();
  }

  // When there's a route change, we translate this into a
  // triggered event. Remember, this router is also an event
  // broker. The event name is the current URI.
  onHashChange() {
    this.trigger(location.hash, location.hash);
  }
}

// Creates a router instance, and uses two different approaches
// to listening to routes.
//
// The first is by passing configuration to the Router. The key
// is the actual route, and the value is the callback function.
//
// The second uses the listen() method of the router, where the
// event name is the actual route, and the callback function is
// called when the route is activated.
//
// Nothing is triggered until the start() method is called, which
// gives us an opportunity to set everything up. For example, the
// callback functions that respond to routes might require
// something to be configured before they can run.
import Router from 'router.js';

function logRoute(route) {
  console.log(`${route} activated`);
}

var router = new Router({
  '#route1': logRoute
});

router.listen('#route2', logRoute);
router.start();

Some of the code required to run these examples is omitted from the listings. For example, the events.js module is included in the code bundle that comes with this book; it's just not that relevant to the example. Also, in the interest of space, the code examples avoid using specific frameworks and libraries. In practice, we're not going to write our own router or events API—our frameworks do that already. We're instead using vanilla ES6 JavaScript to illustrate points pertinent to scaling our applications.

Another architectural consideration we'll want to make when it comes to routing is whether we want a global, monolithic router, a router per module, or some other arrangement. The downside to having a monolithic router is that it becomes difficult to scale when it grows sufficiently large, as we keep adding features and routes. The advantage is that the routes are all declared in one place. Monolithic routers can still trigger events that all our components can listen to.

The per-module approach to routing involves multiple router instances. For example, if our application has five components, each would have its own router. The advantage here is that the module is completely self-contained. Anyone working with this module doesn't need to look elsewhere to figure out which routes it responds to. Using this approach, we can also have a tighter coupling between the route definitions and the functions that respond to them, which could mean simpler code. The downside to this approach is that we lose the consolidated aspect of having all our routes declared in a central place. Take a look at the following diagram:

The router to the left is global—all modules use the same instance to respond to URI events. The modules to the right have their own routers. These instances contain configuration specific to the module, not the entire application.

Depending on the capabilities of the framework we're using, the router components may or may not support multiple router instances. It may only be possible to have one callback function per route. There may be subtle nuances to the router events we're not yet aware of.
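As a sketch of the per-module approach, we can reuse the Router class from the listing above; the module routes here are our own invention:

// Each module constructs a router holding only its own routes.
import Router from 'router.js';

var ordersRouter = new Router({
  '#orders': () => console.log('orders list'),
  '#orders/new': () => console.log('new order form')
});

var usersRouter = new Router({
  '#users': () => console.log('users list')
});

// Both instances listen for hash changes independently; each
// one only has listeners for the routes it declared.
ordersRouter.start();
usersRouter.start();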
Models/Collections

The API our application interacts with exposes entities. Once these entities have been transferred to the browser, we will store a model of those entities. Collections are a bunch of related entities, usually of the same type. The tools we're using may or may not provide generic model and/or collection components, or they may have something similar but named differently. The goal of modeling API data is a rough approximation of the API entity. This could be as simple as storing models as plain JavaScript objects and collections as arrays.

The challenge with simply storing our API entities as plain objects in arrays is that some other component is then responsible for talking to the API, triggering events when the data changes, and performing data transformations. We want other components to be able to transform collections and models where needed, in order to fulfill their duties. But we don't want repetitive code, and it's best if we're able to encapsulate the common things like transformations, API calls, and event life cycles. Take a look at the next diagram:

Models encapsulate interaction with APIs, parsing data, and triggering events when data changes. This leads to simpler code outside of the models.

Hiding the details of how the API data is loaded into the browser, or how we issue commands, helps us scale our application as we grow. As we add more entities to the API, the complexity of our code grows too. We can throttle this complexity by constraining the API interactions to our model and collection components.

Another scalability issue we'll face with our models and collections is where they fit in the big picture. That is, our application is really just one big component, composed of smaller components. Our models and collections map well to our API, but not necessarily to features. API entities are more generic than specific features, and are often used by several features. This leaves us with an open question—where do our models and collections fit into components?

Here's an example that shows specific views extending generic views. The same model can be passed to both:

// A super simple model class.
class Model {
  constructor(first, last, age) {
    this.first = first;
    this.last = last;
    this.age = age;
  }
}

// The base view, with a name() method that generates some output.
class BaseView {
  name() {
    return `${this.model.first} ${this.model.last}`;
  }
}

// Extends BaseView with a constructor that accepts a model and
// stores a reference to it.
class GenericModelView extends BaseView {
  constructor(model) {
    super();
    this.model = model;
  }
}

// Extends BaseView with specific constructor arguments, creating
// the model itself.
class SpecificModelView extends BaseView {
  constructor(first, last, age) {
    super();
    this.model = new Model(...arguments);
  }
}

var properties = ['Terri', 'Hodges', 41];

// Make sure the data is the same in both views. The name() method
// should return the same result...
console.log('generic view',
  new GenericModelView(new Model(...properties)).name());
console.log('specific view',
  new SpecificModelView(...properties).name());

On one hand, components can be completely generic with regard to the models and collections they use. On the other hand, some components are specific in their requirements—they can directly instantiate their collections. Configuring generic components with specific models and collections at runtime only benefits us when the component truly is generic and is used in several places. Otherwise, we might as well encapsulate the models within the components that use them. Choosing the right approach helps us scale, because not all our components will be entirely generic or entirely specific.
Controllers/Views

Depending on the framework we're using, and the design patterns our team is following, controllers and views can represent different things. There are simply too many MV* pattern and style variations to provide a meaningful distinction in terms of scale. The minute differences have trade-offs relative to similar but different MV* approaches. For our purpose of discussing large-scale JavaScript code, we'll treat them as the same type of component. If we decide to separate the two concepts in our implementation, the ideas in this section will be relevant to both types.

Let's stick with the term views for now, knowing that we're covering both views and controllers, conceptually. These components interact with several other component types, including routers, models or collections, and templates, which are discussed in the next section. When something happens, the user needs to be notified about it. The view's job is to update the DOM. This could be as simple as changing an attribute on a DOM element, or as involved as rendering a new template:

A view component updating the DOM in response to router and model events.

A view can update the DOM in response to several types of events. A route could have changed. A model could have been updated. Or something more direct, like a method call on the view component. Updating the DOM is not as straightforward as one might think. There's performance to think about—what happens when our view is flooded with events? There's latency to think about—how long will this JavaScript call stack run before stopping and actually allowing the DOM to render?

Another responsibility of our views is responding to DOM events. These are usually triggered by the user interacting with our UI. The interaction may start and end with our view. For example, depending on the state of something like user input or one of our models, we might update the DOM with a message. Or we might do nothing, if the event handler is debounced, for instance.

A debounced function groups multiple calls into one. For example, calling foo() 20 times in 10 milliseconds may result in the implementation of foo() being called only once. For a more detailed explanation, look at: http://drupalmotion.com/article/debounce-and-throttle-visual-explanation.

Most of the time, DOM events get translated into something else, either a method call or another event. For example, we might call a method on a model, or transform a collection. The end result, most of the time, is that we provide feedback by updating the DOM. This can be done either directly or indirectly. In the case of direct DOM updates, it's simple to scale. In the case of indirect updates, or updates through side effects, scaling becomes more of a challenge. This is because as the application acquires more moving parts, it becomes more difficult to form a mental map of cause and effect.

Here's an example that shows a view listening to DOM events and model events:

import Events from 'events.js';

// A basic model. It extends "Events" so it can trigger events
// when its state changes.
class Model extends Events {
  constructor(enabled) {
    super();
    this.enabled = !!enabled;
  }

  // Setters and getters for the "enabled" property. Setting it
  // also triggers an event, so other components can listen to
  // the "enabled" event.
  set enabled(enabled) {
    this._enabled = enabled;
    this.trigger('enabled', enabled);
  }

  get enabled() {
    return this._enabled;
  }
}

// A view component that takes a DOM element and a model as
// arguments.
class View {
  constructor(element, model) {
    // When the model triggers the "enabled" event, we adjust
    // the DOM.
    model.listen('enabled', (enabled) => {
      element.setAttribute('disabled', !enabled);
    });

    // Set the state of the model when the element is clicked.
    // This will trigger the listener above.
    element.addEventListener('click', () => {
      model.enabled = false;
    });
  }
}

new View(document.getElementById('set'), new Model());

On the plus side of all this complexity, we actually get some reusable code. The view is agnostic as to how the model or router it's listening to is updated. All it cares about is specific events on specific components. This is actually helpful to us because it reduces the amount of special-case handling we need to implement.

The DOM structure that's generated at runtime, as a result of rendering all our views, needs to be taken into consideration as well. For example, if we look at some of the top-level DOM nodes, they have nested structure within them. It's these top-level nodes that form the skeleton of our layout. Perhaps this is rendered by the main application view, and each of our views has a child relationship to it. Or perhaps the hierarchy extends further down than that. The tools we're using most likely have mechanisms for dealing with these parent-child relationships. However, bear in mind that vast view hierarchies are difficult to scale.

Templates

Template engines used to reside mostly in the back-end framework. That's less true today, thanks in large part to the sophisticated template rendering libraries available in the front end. With large-scale JavaScript applications, we rarely talk to back-end services about UI-specific things. We don't say, "Here's a URL, render the HTML for me." The trend is to give our JavaScript components a certain level of autonomy—letting them render their own markup.

Having component markup coupled with the components that render it is a good thing. It means that we can easily discern where the markup in the DOM is being generated. We can then diagnose issues and tweak the design of a large-scale application. Templates help establish a separation of concerns within each of our components. The markup that's rendered in the browser mostly comes from the template. This keeps markup-specific code out of our JavaScript. Front-end template engines aren't just tools for string replacement; they often have other tools to help reduce the amount of boilerplate JavaScript code we have to write. For example, we can embed things like conditionals and for-each loops in our markup, where they're suited.
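As a rough sketch of what that looks like, here's the same idea expressed with plain ES6 template literals (our own illustration, not any particular template engine), embedding a conditional and a loop in the markup:

// Renders markup for a user object. The ternary acts as the
// conditional, and map() acts as the for-each loop.
function renderUser(user) {
  return `
    <h1>${user.name}</h1>
    ${user.isAdmin ? '<span class="badge">admin</span>' : ''}
    <ul>
      ${user.roles.map(role => `<li>${role}</li>`).join('')}
    </ul>`;
}

console.log(renderUser({
  name: 'Terri Hodges',
  isAdmin: true,
  roles: ['editor', 'reviewer']
}));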
Application-specific components

The component types we've discussed so far are very useful for implementing scalable JavaScript code, but they're also very generic. Inevitably, during implementation we're going to hit a roadblock—the component composition patterns we've been following will not work for certain features. This is when it's time to step back and think about possibly adding a new type of component to our architecture.

For example, consider the idea of widgets. These are generic components that are mainly focused on presentation and user interactions. Let's say that many of our views are using the exact same DOM elements, and the exact same event handlers. There's no point in repeating them in every view throughout our application. Might it be better if we were to factor them into a common component? A view might be overkill; perhaps we need a new type of widget component?

Sometimes we'll create components for the sole purpose of composition. For example, we might have a component that glues together router, view, model/collection, and template components to form a cohesive unit. Modules partially solve this problem, but they aren't always enough. Sometimes we're missing that added bit of orchestration that our components need in order to communicate.

Extending generic components

We often discover, late in the development process, that the components we rely on are lacking something we need. If the base component we're using is designed well, then we can extend it, plugging in the new properties or functionality we need. In this section, we'll walk through some scenarios where we might need to extend the common generic components used throughout our application. If we're going to scale our code, we need to leverage these base components where we can. We'll probably want to start extending our own base components at some point too. Some tools are better than others at facilitating the extension mechanism through which we implement this specialized behavior.

Identifying common data and functionality

Before we look at extending the specific component types, it's worthwhile to consider the properties and functionality that are common across all component types. Some of these things will be obvious up front, while others are less pronounced. Our ability to scale depends, in part, on our ability to identify commonality across our components. If we have a global application instance, quite common in large JavaScript applications, global values and functionality can live there. This can grow unruly down the line, though, as more common things are discovered. Another approach might be to have several global modules, as shown in the following diagram, instead of just a single application instance. Or both. But this doesn't scale from an understandability perspective:

The ideal component hierarchy doesn't extend beyond three levels. The top level is usually found in a framework our application depends on.

As a rule of thumb, we should, for any given component, avoid extending it more than three levels down. For example, a generic view component from the tools we're using could be extended by our own generic version of it. This would include properties and functionality that every view instance in our application requires. This is only a two-level hierarchy, and easy to manage. This means that if any given component needs to extend our generic view, it can do so without complicating things. Three levels should be the maximum extension hierarchy depth for any given type. This is just enough to avoid unnecessary global data; going beyond this presents scaling issues, because the hierarchy isn't easily grasped.

Extending router components

Our application may only require a single router instance. Even in this case, we may still need to override certain extension points of the generic router. In the case of multiple router instances, there are bound to be common properties and functionality that we don't want to repeat. For example, if every route in our application follows the same pattern, with only subtle differences, we can implement the tools in our base router to avoid repetitious code. In addition to declaring routes, events take place when a given route is activated.
Depending on the architecture of our application, different things need to happen. Maybe certain things always need to happen, no matter which route has been activated. This is where extending the router to provide our own functionality comes in handy. For example, suppose we have to validate permissions for a given route. It wouldn't make much sense for us to handle this through individual components, as this would not scale well with complex access control rules and a lot of routes.

Extending models/collections

Our models and collections, no matter what their specific implementations look like, will share some common properties with one another—especially if they're targeting the same API, which is the common case. The specifics of a given model or collection revolve around the API endpoint, the data returned, and the possible actions taken. It's likely that we'll target the same base API path for all entities, and that all entities have a handful of shared properties. Rather than repeat ourselves in every model or collection instance, it's better to abstract the common data.

In addition to sharing properties among our models and collections, we can share common behavior. For instance, it's quite likely that a given model isn't going to have sufficient data for a given feature. Perhaps that data can be derived by transforming the model. These types of transformations can be common, and abstracted in a base model or collection. It really depends on the types of features we're implementing and how consistent they are with one another. If we're growing fast and getting lots of requests for "outside-the-box" features, then we're more likely to implement data transformations inside the views that require these one-off changes to the models or collections they're using.

Most frameworks take care of the nuances of performing XHR requests to fetch our data or perform actions. That's not the whole story, unfortunately, because our features will rarely map one-to-one with a single API entity. More likely, we will have a feature that requires several collections that are related to one another somehow, and a transformed collection. This type of operation can grow complex quickly, because we have to work with multiple XHR requests. We'll likely use promises to synchronize the fetching of these requests, and then perform the data transformation once we have all the necessary sources.

Here's an example that shows a specific model extending a generic model, to provide new fetching behavior:

// The base fetch() implementation of a model: sets some property
// values, and resolves the promise.
class BaseModel {
  fetch() {
    return new Promise((resolve, reject) => {
      this.id = 1;
      this.name = 'foo';
      resolve(this);
    });
  }
}

// Extends BaseModel with a specific implementation of fetch().
class SpecificModel extends BaseModel {

  // Overrides the base fetch() method. Returns a promise that
  // combines the original implementation and the result of
  // calling fetchSettings().
  fetch() {
    return Promise.all([
      super.fetch(),
      this.fetchSettings()
    ]);
  }

  // Returns a new Promise instance. Also sets a new model property.
  fetchSettings() {
    return new Promise((resolve, reject) => {
      this.enabled = true;
      resolve(this);
    });
  }
}

// Make sure the properties are all in place, as expected, after
// the fetch() call completes.
new SpecificModel().fetch().then((result) => {
  var [ model ] = result;
  console.assert(model.id === 1, 'id');
  console.assert(model.name === 'foo', 'name');
  console.assert(model.enabled, 'enabled');
  console.log('fetched');
});

Extending controllers/views

When we have a base model or base collection, there are often properties shared between our controllers or views. That's because the job of a controller or a view is to render model or collection data. For example, if the same view is rendering the same model properties over and over, we can probably move that bit to a base view, and extend from that. Perhaps the repetitive parts are in the templates themselves. This means that we might want to consider having a base template inside a base view, as shown in the following diagram. Views that extend this base view inherit this base template. Depending on the library or framework at our disposal, extending templates like this may not be feasible. Or the nature of our features may make this difficult to achieve. For example, there might not be a common base template, but there might be a lot of smaller views and templates that can plug into larger components:

A view that extends a base view can populate the template of the base view, as well as inherit other base view functionality.

Our views also need to respond to user interactions. They may respond directly, or forward the events up the component hierarchy. In either case, if our features are at all consistent, there will be some common DOM event handling that we'll want to abstract into a common base view. This is a huge help in scaling our application, because as we add more features, the DOM event handling code additions are minimized.

Mapping features to components

Now that we have a handle on the most common JavaScript components, and the ways we'll want to extend them for use in our application, it's time to think about how to glue those components together. A router on its own isn't very useful. Nor is a standalone model, template, or controller. Instead, we want these things to work together, to form a cohesive unit that realizes a feature in our application. To do that, we have to map our features to components. We can't do this haphazardly, either—we need to think about what's generic about our feature, and about what makes it unique. These feature properties will guide our design decisions on producing something that scales.

Generic features

Perhaps the most important aspects of component composition are consistency and reusability. While considering the scaling influences our application faces, we'll come up with a list of traits that all our components must carry: things like user management, access control, and other traits unique to our application. Along with the other architectural perspectives (explored in more depth throughout the remainder of the book), these form the core of our generic features:

A generic component, composed of other generic components from our framework.

The generic aspects of every feature in our application serve as a blueprint. They inform us in composing larger building blocks. These generic features account for the architectural factors that help us scale. And if we can encode these factors as parts of an aggregate component, we'll have an easier time scaling our application. What makes this design task challenging is that we have to look at these generic components not only from a scalable architecture perspective, but also from a feature-complete perspective.
As much as we'd like to think otherwise, 100% consistent feature functionality is an illusion, more visible to JavaScript programmers than to users. If every feature behaved the same way, and followed an identical pattern, the sky would be the limit when it comes time to scale. In reality, the pattern breaks out of necessity. It's responding to this breakage in a scalable way that matters. This is why successful JavaScript applications continuously revisit the generic aspects of their features to ensure they reflect reality.

Specific features

When it's time to implement something that doesn't fit the pattern, we're faced with a scaling challenge. We have to pivot, and consider the consequences of introducing such a feature into our architecture. When patterns are broken, our architecture needs to change. This isn't a bad thing—it's a necessity. The limiting factor in our ability to scale in response to these new features lies with the generic aspects of our existing features. This means that we can't be too rigid with our generic feature components. If we're too demanding, we're setting ourselves up for failure. Before making any brash architectural decisions stemming from offbeat features, think about the specific scaling consequences. For example, does it really matter that the new feature uses a different layout and requires a template that's different from all other feature components? The state of the JavaScript scaling art revolves around finding the handful of essential blueprints to follow for our component composition. Everything else is up for discussion on how to proceed.

Decomposing components

Component composition is an activity that creates order; larger behavior out of smaller parts. We often need to move in the opposite direction during development. Even after development, we can learn how a component works by tearing the code apart and watching it run in different contexts. Component decomposition means that we're able to take the system apart and examine individual parts in a somewhat structured approach.

Maintaining and debugging components

Over the course of application development, our components accumulate abstractions. We do this to better support a feature's requirements, while simultaneously supporting some architectural property that helps us scale. The problem is that as the abstractions accumulate, we lose transparency into the functioning of our components. This transparency is essential not only for diagnosing and fixing issues, but also for how easy the code is to learn. For example, if there's a lot of indirection, it takes longer for a programmer to trace cause to effect. Time wasted on tracing code reduces our ability to scale from a developmental point of view.

We're faced with two opposing problems. First, we need abstractions to address real-world feature requirements and architectural constraints. Second is our inability to master our own code due to a lack of transparency. Following is an example that shows a renderer component and a feature component. Renderers used by the feature are easily substitutable:

// A Renderer instance takes a renderer function as an argument.
// The render() method returns the result of calling the function.
class Renderer {
  constructor(renderer) {
    this.renderer = renderer;
  }

  render() {
    return this.renderer ? this.renderer(this) : '';
  }
}

// A feature defines an output pattern. It accepts header, content,
// and footer arguments. These are Renderer instances.
class Feature {
  constructor(header, content, footer) {
    this.header = header;
    this.content = content;
    this.footer = footer;
  }

  // Renders the sections of the view. Each section either has a
  // renderer, or it doesn't. Either way, content is returned.
  render() {
    var header = this.header ?
          `${this.header.render()}\n` : '',
        content = this.content ?
          `${this.content.render()}\n` : '',
        footer = this.footer ?
          this.footer.render() : '';

    return `${header}${content}${footer}`;
  }
}

// Constructs a new feature with renderers for three sections.
var feature = new Feature(
  new Renderer(() => { return 'Header'; }),
  new Renderer(() => { return 'Content'; }),
  new Renderer(() => { return 'Footer'; })
);

console.log(feature.render());

// Remove the header section completely, replace the footer section
// with a new renderer, and check the result.
delete feature.header;
feature.footer = new Renderer(() => { return 'Test Footer'; });

console.log(feature.render());

A tactic that can help us cope with these two opposing scaling influencers is substitutability; in particular, the ease with which one of our components, or sub-components, can be replaced with something else. This should be really easy to do. So before we go introducing layers of abstraction, we need to consider how easy it's going to be to replace a complex component with a simple one. This can help programmers learn the code, and also help with debugging. For example, if we're able to take a complex component out of the system and replace it with a dummy component, we can simplify the debugging process. If the error goes away after the component is replaced, we have found the problematic component. Otherwise, we can rule out a component and keep digging elsewhere.

Re-factoring complex components

It's of course easier said than done to implement substitutability with our components, especially in the face of deadlines. Once it becomes impractical to easily replace components with others, it's time to consider re-factoring our code, or at least the parts that make substitutability infeasible. It's a balancing act: getting the right level of encapsulation, and the right level of transparency. Substitution can also be helpful at a more granular level. For example, let's say a view method is long and complex. If there are several stages during the execution of that method where we would like to run something custom, we can't. It's better to re-factor the single method into a handful of methods, each of which can be overridden.

Pluggable business logic

Not all of our business logic needs to live inside our components, encapsulated from the outside world. Instead, it would be ideal if we could write our business logic as a set of functions. In theory, this provides us with a clear separation of concerns. The components are there to deal with the specific architectural concerns that help us scale, and the business logic can be plugged into any component. In practice, excising business logic from components isn't trivial.

Extending versus configuring

There are two approaches we can take when it comes to building our components. As a starting point, we have the tools provided by our libraries and frameworks. From there, we can keep extending these tools, getting more specific as we drill deeper and deeper into our features. Alternatively, we can provide our component instances with configuration values. These instruct the component on how to behave.
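Here's a small sketch contrasting the two approaches; the Panel component and its collapsible option are invented for illustration:

// A generic component that accepts configuration values.
class Panel {
  constructor(config = {}) {
    this.collapsible = !!config.collapsible;
  }
}

// Extending: the specifics are baked into a subclass, so the
// caller doesn't need to know about the configuration.
class CollapsiblePanel extends Panel {
  constructor() {
    super({ collapsible: true });
  }
}

// Configuring: one generic class, with the caller supplying the
// values, at the cost of a more complex call site.
console.log(new CollapsiblePanel().collapsible); // true
console.log(new Panel({ collapsible: true }).collapsible); // true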
The advantage of extending things that would otherwise need to be configured is that the caller doesn't need to worry about them. And if we can get by using this approach, all the better, because it leads to simpler code, especially the code that's using the component. On the other hand, we could have generic feature components that can be used for a specific purpose, if only they supported this or that configuration option. This approach has the advantage of simpler component hierarchies, and fewer components overall.

Sometimes it's better to keep components as generic as possible, within the realm of understandability. That way, when we need a generic component for a specific feature, we can use it without having to re-define our hierarchy. Of course, there's more complexity involved for the caller of that component, because they need to supply it with the configuration values. It's a trade-off that's up to us, the JavaScript architects of our application. Do we want to encapsulate everything, configure everything, or strike a balance between the two?

Stateless business logic

With functional programming, functions don't have side effects. In some languages, this property is enforced; in JavaScript, it isn't. However, we can still implement side-effect-free functions in JavaScript. If a function takes arguments, and always returns the same output based on those arguments, then the function can be said to be stateless. It doesn't depend on the state of a component, and it doesn't change the state of a component. It just computes a value.

If we can establish a library of business logic that's implemented this way, we can design some super-flexible components. Rather than implementing this logic directly in a component, we pass the behavior into the component. That way, different components can utilize the same stateless business logic functions. The tricky part is finding the right functions that can be implemented this way. It's not a good idea to implement these up front. Instead, as the iterations of our application development progress, we can use this strategy to re-factor code into generic stateless functions that are shared by any component capable of using them. This leads to business logic that's implemented in a focused way, and components that are small, generic, and reusable in a variety of contexts.
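A minimal sketch of the idea, with an invented discount calculation standing in for real business logic:

// A stateless business-logic function: the same inputs always
// produce the same output, and no component state is touched.
function discountedTotal(prices, discountRate) {
  const total = prices.reduce((sum, price) => sum + price, 0);
  return total * (1 - discountRate);
}

// A component that receives the behavior instead of implementing it.
class CheckoutView {
  constructor(computeTotal) {
    this.computeTotal = computeTotal;
  }

  render(prices) {
    return `Total: ${this.computeTotal(prices, 0.1).toFixed(2)}`;
  }
}

console.log(new CheckoutView(discountedTotal).render([10, 20, 30])); // Total: 54.00

Because discountedTotal() holds no state, any other component can receive and reuse the same function.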
Organizing component code

In addition to composing our components in a way that helps our application scale, we need to consider the structure of our source code modules too. When we first start off with a given project, our source code files tend to map well to what's running in the client's browser. Over time, as we accumulate more features and components, earlier decisions on how to organize our source tree can dilute this strong mapping. When tracing runtime behavior to our source code, the less mental effort involved, the better. We can scale to more stable features this way, because our efforts are focused more on the design problems of the day—the things that directly provide customer value:

The diagram shows the mapping of component parts to their implementation artifacts.

There's another dimension to code organization in the context of our architecture, and that's our ability to isolate specific code. We should treat our code just like our runtime components, which are self-sustained units that we can turn on or off. That is, we should be able to find all the source code files required for a given component, without having to hunt them down. If a component requires, say, 10 source code files—JavaScript, HTML, and CSS—then ideally these should all be found in the same directory. The exception, of course, is generic base functionality that's shared by all components. These should be as close to the surface as possible. Then it's easy to trace our component dependencies; they will all point to the top of the hierarchy. It's a challenge to scale the dependency graph when our component dependencies are all over the place.

Summary

This article introduced us to the concept of component composition. Components are the building blocks of a scalable JavaScript application. The common components we're likely to encounter include things like modules, models/collections, controllers/views, and templates. While these patterns help us achieve a level of consistency, they're not enough on their own to make our code work well under various scaling influencers. This is why we need to extend these components, providing our own generic implementations that specific features of our application can further extend and use.

Depending on the various scaling factors our application encounters, different approaches may be taken in getting generic functionality into our components. One approach is to keep extending the component hierarchy, and keep everything encapsulated and hidden away from the outside world. Another approach is to plug logic and properties into components when they're created. The cost is more complexity for the code that's using the components.

We ended the article with a look at how we might go about organizing our source code, so that its structure better reflects that of our logical component design. This helps us scale our development effort, and helps isolate one component's code from others'. It's one thing to have well-crafted components that stand by themselves. It's quite another to implement scalable component communication. For more information, refer to:

https://www.packtpub.com/web-development/javascript-and-json-essentials
https://www.packtpub.com/application-development/learning-javascript-data-structures-and-algorithms

Resources for Article:

Further resources on this subject:
Welcome to JavaScript in the full stack [Article]
Components of PrimeFaces Extensions [Article]
Unlocking the JavaScript Core [Article]

Node.js Fundamentals and Asynchronous JavaScript

Packt
19 Feb 2016
5 min read
Node.js is a JavaScript-driven technology. The language has been in development for more than 15 years, and it was first used in Netscape. Over the years, JavaScript developers have found interesting and useful design patterns, which will be of use to us in this book. All this knowledge is now available to Node.js coders. Of course, there are some differences because we are running the code in different environments, but we are still able to apply all these good practices, techniques, and paradigms. I always say that it is important to have a good foundation for your applications. No matter how big your application is, it should rely on flexible and well-tested code. (For more resources related to this topic, see here.)

Node.js fundamentals

Node.js is a single-threaded technology. This means that every request is processed in only one thread. In other languages, for example, Java, the web server instantiates a new thread for every request. However, Node.js is meant to use asynchronous processing, and there is a theory that doing this in a single thread can deliver good performance. The problem with single-threaded applications is blocking I/O operations; for example, when we need to read a file from the hard disk to respond to the client. Once a new request lands on our server, we open the file and start reading from it. The problem occurs when another request is generated while the application is still processing the first one. Let's elucidate the issue with the following example:

var http = require('http');

var getTime = function() {
  var d = new Date();
  return d.getHours() + ':' + d.getMinutes() + ':' +
         d.getSeconds() + ':' + d.getMilliseconds();
};

var respond = function(res, str) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end(str + '\n');
  console.log(str + ' ' + getTime());
};

var handleRequest = function(req, res) {
  console.log('new request: ' + req.url + ' - ' + getTime());
  if (req.url == '/immediately') {
    respond(res, 'A');
  } else {
    var now = new Date().getTime();
    while (new Date().getTime() < now + 5000) {
      // busy-wait to simulate synchronous reading of a large file
    }
    respond(res, 'B');
  }
};

http.createServer(handleRequest).listen(9000, '127.0.0.1');

The http module, which we initialize on the first line, is needed for running the web server. The getTime function returns the current time as a string, and the respond function sends a simple text to the browser of the client and reports that the incoming request is processed. The most interesting function is handleRequest, which is the entry point of our logic. To simulate the reading of a large file, we create a while cycle that runs for 5 seconds. Once we run the server, we are able to make an HTTP request to http://localhost:9000. In order to demonstrate the single-thread behavior, we will send two requests at the same time:

- One request will be sent to http://localhost:9000, where the server will perform a synchronous operation that takes 5 seconds
- The other request will be sent to http://localhost:9000/immediately, where the server should respond immediately

The following output is printed from the server after pinging both URLs. As we can see, the first request came at 16:58:30:434, and its response was sent at 16:58:35:440, that is, 5 seconds later. However, the problem is that the second request is registered only when the first one finishes. That's because the thread belonging to Node.js was busy processing the while loop. Of course, Node.js has a solution for the blocking I/O operations.
Blocking operations are transformed into asynchronous functions that accept callbacks. Once the operation finishes, Node.js fires the callback, notifying that the job is done. A huge benefit of this approach is that while the server waits for the result of the I/O, it can process another request. The entity that handles the external events and converts them into callback invocations is called the event loop. The event loop acts as a really good manager and delegates tasks to various workers. It never blocks and just waits for something to happen; for example, a notification that the file is written successfully. Now, instead of reading a file synchronously, we will transform our brief example to use asynchronous code. The modified example looks like the following code:

var handleRequest = function(req, res) {
  console.log('new request: ' + req.url + ' - ' + getTime());
  if (req.url == '/immediately') {
    respond(res, 'A');
  } else {
    setTimeout(function() {
      // reading the file
      respond(res, 'B');
    }, 5000);
  }
};

The while loop is replaced with the setTimeout invocation. The result of this change is clearly visible in the server's output: the first request still gets its response after 5 seconds, but the second one is processed immediately.

Summary

In this article, we went through the most common programming paradigms in Node.js. We learned how Node.js handles parallel requests. We understood how to write modules and make them communicative. We saw the problems of asynchronous code and their most popular solutions. For more information on Node.js, you can refer to the following URLs: https://www.packtpub.com/web-development/mastering-nodejs https://www.packtpub.com/web-development/deploying-nodejs Resources for Article: Further resources on this subject: Node.js Fundamentals [Article] AngularJS Project [Article] Working with Live Data and AngularJS [Article]

Create Collisions on Objects

Packt
19 Feb 2016
10 min read
In this article by Julien Moreau-Mathis, the author of the book Babylon.js Essentials, you will learn about creating and customizing scenes using the materials and meshes provided by the designer. (For more resources related to this topic, see here.) In this article, let's play with the gameplay itself and create interactions with the objects in the scene by creating collisions and physics simulations. Collisions are important for adding realism to your scenes if you want to walk without crossing the walls. Moreover, to make your scenes more alive, let's introduce physics simulation with Babylon.js and, finally, see how easy it is to integrate these two notions in your scenes. We are going to discuss the following topics:

- Checking collisions in a scene
- Physics simulation

Checking collisions in a scene

Starting from the concept, configuring and checking collisions in a scene can be done without mathematical notions. We all have a notion of gravity and, even if the word is not necessarily familiar, of an ellipsoid.

How do collisions work in Babylon.js?

Let's start with the following scene (a camera, light, plane, and box). The goal is to block the current camera of the scene in order not to cross the objects in the scene. In other words, we want the camera to stay above the plane and stop moving forward if it collides with the box. To perform these actions, the collisions system provided by Babylon.js uses what we call a collider. A collider is based on the bounding box of an object. A bounding box simply represents the minimum and maximum positions of a mesh's vertices and is computed automatically by Babylon.js. This means that you have to do nothing particularly tricky to configure collisions on objects. All the collisions are based on these bounding boxes of each mesh to determine whether the camera should be blocked or not. You'll also see that the physics engines use the same kind of collider to simulate physics. If the geometry of a mesh has to be changed by modifying the vertices' positions, the bounding box information can be refreshed by calling the following code:

myMesh.refreshBoundingInfo();

To draw the bounding box of an object, simply set the .showBoundingBox property of the mesh instance to true with the following code:

myMesh.showBoundingBox = true;

Configuring collisions in a scene

Let's practice with the Babylon.js collisions engine itself. You'll see that the engine is particularly well hidden, as you have to enable checks on scenes and objects only. First, configure the scene to enable collisions and wake the engine up. If the following property is false, all the next properties will be ignored by Babylon.js and the collisions engine will be in stand-by mode. This makes it easy to enable or disable collisions in a scene without modifying more properties:

scene.collisionsEnabled = true; // Enable collisions in scene

Next, configure the camera to check collisions. The collisions engine will check collisions for all the rendered cameras that have collisions enabled. Here, we have only one camera to configure:

camera.checkCollisions = true; // Check collisions for THIS camera

To finish, just set the .checkCollisions property of each mesh to true to activate collisions (the plane and box):

plane.checkCollisions = true;
box.checkCollisions = true;

Now, the collisions engine will check the collisions in the scene for the camera on both the plane and box meshes.
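Putting these three steps together, a minimal setup could look like the following sketch (it assumes the scene, camera, plane, and box objects from the scene described above have already been created):

// Wake the collisions engine up for the whole scene
scene.collisionsEnabled = true;

// Only cameras with this flag are checked by the engine
camera.checkCollisions = true;

// Make both meshes solid obstacles for the camera
plane.checkCollisions = true;
box.checkCollisions = true;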
You guessed it; if you want to enable collisions only on the plane and you want the camera to walk across the box, you'll have to set the .checkCollisions property of the box to false:

plane.checkCollisions = true;
box.checkCollisions = false;

Configuring gravity and ellipsoid

In the previous section, the camera checks collisions on the plane and box but is not subjected to a famous force named gravity. To enrich the collisions in your scene, you can apply the gravitational force, for example, to go down the stairs. First, enable gravity on the camera by setting the .applyGravity property to true:

camera.applyGravity = true; // Enable gravity on the camera

Customize the gravity direction by setting a BABYLON.Vector3 to the .gravity property of your scene:

scene.gravity = new BABYLON.Vector3(0.0, -9.81, 0.0); // To stay on earth

Of course, the gravity in space should be as follows:

scene.gravity = BABYLON.Vector3.Zero(); // No gravity in space

Don't hesitate to play with the values in order to adjust the gravity to your scene referential. The last parameter to enrich the collisions in your scene is the camera's ellipsoid. The ellipsoid represents the camera's dimensions in the scene. In other words, it adjusts the collisions according to the X, Y, and Z axes of the ellipsoid (an ellipsoid is represented by a BABYLON.Vector3). For example, say the camera must measure 1.8 m in height (Y axis) and the minimum distance to collide on the X (sides) and Z (forward) axes must be 1 m. Then, the ellipsoid must be (X=1, Y=1.8, Z=1). Simply set the .ellipsoid property of the camera as follows:

camera.ellipsoid = new BABYLON.Vector3(1, 1.8, 1);

The default value of a camera's ellipsoid is (X=0.5, Y=1.0, Z=0.5). As for the gravity, don't hesitate to adjust the X, Y, and Z values to your scene referential.

Physics simulation

Physics simulation is pretty different from the collisions system, as it does not occur on the cameras but on the objects of the scene themselves. In other words, if physics is enabled (and configured) on a box, the box will interact with the other meshes in the scene and will try to reproduce real physical movements. For example, let's take a sphere in the air. If you apply physics to the sphere, the sphere will fall until it collides with another mesh and, according to the given parameters, will tend to bounce and roll in the scene. The example files reproduce this behavior with a sphere that falls on the box in the middle of the scene.

Enabling physics in Babylon.js

In Babylon.js, physics simulations can be done using plugins only. Two plugins are available, using either the Cannon.js framework or the Oimo.js framework. These two frameworks are included in the Babylon.js GitHub repository in the dist folder.
Each scene has its own physics simulation system, which can be enabled with the following lines:

var gravity = new BABYLON.Vector3(0, -9.81, 0);
scene.enablePhysics(gravity, new BABYLON.OimoJSPlugin());
// or
scene.enablePhysics(gravity, new BABYLON.CannonJSPlugin());

The .enablePhysics(gravity, plugin) function takes the following two arguments:

- The gravitational force of the scene to apply on objects
- The plugin to use: Oimo.js is the new BABYLON.OimoJSPlugin() plugin, and Cannon.js is the new BABYLON.CannonJSPlugin() plugin

To disable physics in a scene, simply call the .disablePhysicsEngine() function using the following code:

scene.disablePhysicsEngine();

Impostors

Once physics simulation is enabled in a scene, you can configure the physics properties (or physics states) of the scene meshes. To configure the physics properties of a mesh, the BABYLON.Mesh class provides a function named .setPhysicsState(impostor, options). The parameters are as follows:

- The impostor: Each kind of mesh has its own impostor according to its form. For example, a box will tend to slip while a sphere will tend to roll. There are several types of impostors.
- The options: These options define the values used in the physics equations. They comprise the mass, friction, and restitution.

Let's take a box named box with a mass of 1 and set its physics properties:

box.setPhysicsState(BABYLON.PhysicsEngine.BoxImpostor, { mass: 1 });

That's all; the box is now configured to interact with other configured meshes by following the physics equations. Let's imagine that the box is in the air. It will fall until it collides with another configured mesh. Now, let's take a sphere named sphere with a mass of 2 and set its physics properties:

sphere.setPhysicsState(BABYLON.PhysicsEngine.SphereImpostor, { mass: 2 });

You'll notice that the sphere, which is a particular kind of mesh, has its own impostor (the sphere impostor). In contrast to the box, the physics equations of the plugin will make the sphere roll while the box will slip on other meshes. If the box and sphere collide, the heavier sphere will tend to push the box harder. The available impostors in Babylon.js are as follows:

- The box impostor: BABYLON.PhysicsEngine.BoxImpostor
- The sphere impostor: BABYLON.PhysicsEngine.SphereImpostor
- The plane impostor: BABYLON.PhysicsEngine.PlaneImpostor
- The cylinder impostor: BABYLON.PhysicsEngine.CylinderImpostor

In fact, in Babylon.js, the box, plane, and cylinder impostors are treated identically by the Cannon.js and Oimo.js plugins. Regardless of the impostor parameter, the options parameter is the same. In these options, there are three parameters, which are as follows:

- The mass: This represents the mass of the mesh in the world. The heavier the mesh is, the harder it is to stop its movement.
- The friction: This represents the force opposed to meshes in contact. In other words, it represents how slippery the mesh is. To give you a reference point, glass has a friction equal to 1. We can treat the friction as lying in the range [0, 1].
- The restitution: This represents how much the mesh will bounce on others. Imagine a ping-pong ball and its table. If the table's surface is a carpet, the restitution will be small, but if the table's surface is glass, the restitution will be maximal. A realistic interval for the restitution is [0, 1].

In the example files, these parameters are set, and if you play with them, you'll see how these three parameters are linked together in the physics equations.
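As a recap, here is a minimal sketch that enables physics and configures both meshes in one place (the box and sphere variables come from the scene above; the friction and restitution values are illustrative assumptions, not values from the example files):

// Enable the physics engine with earth-like gravity
scene.enablePhysics(new BABYLON.Vector3(0, -9.81, 0), new BABYLON.OimoJSPlugin());

// The box slips rather than rolls
box.setPhysicsState(BABYLON.PhysicsEngine.BoxImpostor, {
  mass: 1,
  friction: 0.5,
  restitution: 0.1
});

// The sphere rolls and, being heavier, pushes the box on contact
sphere.setPhysicsState(BABYLON.PhysicsEngine.SphereImpostor, {
  mass: 2,
  friction: 0.5,
  restitution: 0.7
});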
Applying a force (impulse) on a mesh

At any moment, you can apply a new force or impulse to a configured mesh. Let's take an explosion: a box is located at the coordinates (X=0, Y=0, Z=0) and an explosion happens below the box at the coordinates (X=0, Y=-5, Z=0). In real life, the box should be pushed up. This action is possible by calling a function named .applyImpulse(force, contactPoint) provided by the BABYLON.Mesh class. Once the mesh is configured with its options and impostor, you can, at any moment, call this function to apply a force to the object. The parameters are as follows:

- The force: This represents the force on the X, Y, and Z axes
- The contact point: This represents the origin of the force, located on the X, Y, and Z axes

For example, the explosion generates a force only on Y that is equal to 10 (10 is an arbitrary value) and has its origin at the coordinates (X=0, Y=-5, Z=0):

mesh.applyImpulse(new BABYLON.Vector3(0, 10, 0), new BABYLON.Vector3(0, -5, 0));

Once the impulse is applied to the mesh (only once), the box is pushed up and will then fall according to its physics options (mass, friction, and restitution).

Summary

You are now ready to configure collisions in your scene and simulate physics. Whether done by code or by the artists, you have learned the pipeline for making your scenes more alive. Resources for Article: Further resources on this subject: Building Games with HTML5 and Dart[article] Fundamentals of XHTML MP in Mobile Web Development[article] MODx Web Development: Creating Lists[article]

Building A Movie API with Express

Packt
18 Feb 2016
22 min read
We will build a movie API that allows you to add actor and movie information to a database and connect actors with movies, and vice versa. This will give you a hands-on feel for what Express.js offers. We will cover the following topics in this article:

- Folder structure and organization
- Responding to CRUD operations
- Object modeling with Mongoose
- Generating unique IDs
- Testing

(For more resources related to this topic, see here.)

Folder structure and organization

Folder structure is a very controversial topic. Though there are many clean ways to structure your project, we will use the following layout for the remainder of this article:

article
+-- app.js
+-- package.json
+-- node_modules
|   +-- npm package folders
+-- src
|   +-- lib
|   +-- models
|   +-- routes
+-- test

Let's take a look at this in detail:

- app.js: It is conventional to have the main app.js file in the root directory. The app.js file is the entry point of our application and will be used to launch the server.
- package.json: As with any Node.js app, we have package.json in the root folder, specifying our application name and version as well as all of our npm dependencies.
- node_modules: The node_modules folder and its content are generated via npm installation and should usually be ignored in your version control of choice, because it depends on the platform the app runs on. Having said that, according to the npm FAQ, it is probably better to commit the node_modules folder as well: check node_modules into git for things you deploy, such as websites and apps; do not check node_modules into git for libraries and modules intended to be reused. Refer to the following article to read more about the rationale behind this: http://www.futurealoof.com/posts/nodemodules-in-git.html
- src: The src folder contains all the logic of the application.
- lib: Within the src folder, we have the lib folder, which contains the core of the application. This includes the middleware, routes, and creation of the database connection.
- models: The models folder contains our Mongoose models, which define the structure and logic of the models we want to manipulate and save.
- routes: The routes folder contains the code for all the endpoints the API is able to serve.
- test: The test folder will contain our functional tests, written using Mocha along with two other node modules, should and supertest, to make it easier to aim for 100 percent coverage.

Responding to CRUD operations

The term CRUD refers to the four basic operations one can perform on data: create, read, update, and delete. Express gives us an easy way to handle these operations by supporting the basic methods GET, POST, PUT, and DELETE:

- GET: This method is used to retrieve existing data from the database. It can be used to read single or multiple rows (for SQL) or documents (for MongoDB) from the database.
- POST: This method is used to write new data into the database, and it is common to include a JSON payload that fits the data model.
- PUT: This method is used to update existing data in the database, and a JSON payload that fits the data model is often included for this method as well.
- DELETE: This method is used to remove an existing row or document from the database.

Express 4 has changed dramatically from version 3. A lot of the core modules have been removed in order to make it even more lightweight and less dependent. Therefore, we have to explicitly require modules when needed. One helpful module is body-parser. It allows us to get a nicely formatted body when a POST or PUT HTTP request is received.
We have to add this middleware before our business logic in order to use its result later. We write the following in src/lib/parser.js:

var bodyParser = require('body-parser');

module.exports = function(app) {
  app.use(bodyParser.json());
  app.use(bodyParser.urlencoded({ extended: false }));
};

The preceding code is then used in src/lib/app.js as follows:

var express = require('express');
var app = express();

require('./parser')(app);

module.exports = app;

The following example allows you to respond to a GET request on http://host/path. Once a request hits our API, Express will run it through the necessary middleware as well as the following function:

app.get('/path/:id', function(req, res, next) {
  res.status(200).json({ hello: 'world' });
});

The first parameter is the path we want to handle a GET function on. The path can contain parameters prefixed with :. Those path parameters will then be parsed into the request object. The second parameter is the callback that will be executed when the server receives the request. This function gets populated with three parameters: req, res, and next. The req parameter represents the HTTP request object that has been customized by Express and the middleware we added in our application. Using the path http://host/path/:id, suppose a GET request is sent to http://host/path/1?a=1&b=2. The req object would be the following:

{
  params: { id: 1 },
  query: { a: 1, b: 2 }
}

The params object is a representation of the path parameters. The query object holds the query string, that is, the values stated after ? in the URL. In a POST request, there will often be a body in our request object as well, which includes the data we wish to place in our database. The res parameter represents the response object for that request. Some methods, such as status() or json(), are provided in order to tell Express how to respond to the client. Finally, the next() function will execute the next middleware defined in our application.

Retrieving an actor with GET

Retrieving a movie or actor from the database consists of submitting a GET request to the route /movies/:id or /actors/:id. We will need a unique ID that refers to a unique movie or actor:

app.get('/actors/:id', function(req, res, next) {
  //Find the actor object with this :id
  //Respond to the client
});

Here, the URL parameter :id will be placed in our request object. Since we called the first variable in our callback function req, we can access the URL parameter by calling req.params.id. Since an actor may be in many movies and a movie may have many actors, we need a nested endpoint to reflect this as well:

app.get('/actors/:id/movies', function(req, res, next) {
  //Find all movies the actor with this :id is in
  //Respond to the client
});

If a bad GET request is submitted or no actor with the specified ID is found, then the appropriate status code, bad request 400 or not found 404, will be returned. If the actor is found, then a success status 200 will be sent back along with the actor information. On a success, the response JSON will look like this:

{
  "_id": "551322589911fefa1f656cc5",
  "id": 1,
  "name": "AxiomZen",
  "birth_year": 2012,
  "__v": 0,
  "movies": []
}

Creating a new actor with POST

In our API, creating a new movie in the database involves submitting a POST request to /movies, or /actors for a new actor:

app.post('/actors', function(req, res, next) {
  //Save new actor
  //Respond to the client
});

In this example, the user accessing our API sends a POST request with data that would be placed into request.body.
Here, we called the first variable in our callback function req. Thus, to access the body of the request, we call req.body. The request body is sent as a JSON string; if an error occurs, a 400 (bad request) status will be sent back. Otherwise, a 201 (created) status is sent to the response object. On a successful request, the response will look like the following:

{
  "__v": 0,
  "id": 1,
  "name": "AxiomZen",
  "birth_year": 2012,
  "_id": "551322589911fefa1f656cc5",
  "movies": []
}

Updating an actor with PUT

To update a movie or actor entry, we first create a new route and submit a PUT request to /movies/:id or /actors/:id, where the id parameter is unique to an existing movie/actor. There are two steps to an update. We first find the movie or actor by using the unique id, and then we update that entry with the body of the request object, as shown in the following code:

app.put('/actors/:id', function(req, res) {
  //Find and update the actor with this :id
  //Respond to the client
});

In the request, we would need request.body to be a JSON object that reflects the actor fields to be updated. The request.params.id would still be a unique identifier that refers to an existing actor in the database, as before. On a successful update, the response JSON looks like this:

{
  "_id": "551322589911fefa1f656cc5",
  "id": 1,
  "name": "Axiomzen",
  "birth_year": 99,
  "__v": 0,
  "movies": []
}

Here, the response will reflect the changes we made to the data.

Removing an actor with DELETE

Deleting a movie is as simple as submitting a DELETE request to the same routes that were used earlier (specifying the ID). The actor with the appropriate id is found and then deleted:

app.delete('/actors/:id', function(req, res) {
  //Remove the actor with this :id
  //Respond to the client
});

If the actor with the unique id is found, it is deleted and a response code of 204 is returned. If the actor cannot be found, a response code of 400 is returned. There is no response body for a DELETE() method; it will simply return the status code of 204 on a successful deletion. Our final endpoints for this simple app will be as follows:

//Actor endpoints
app.get('/actors', actors.getAll);
app.post('/actors', actors.createOne);
app.get('/actors/:id', actors.getOne);
app.put('/actors/:id', actors.updateOne);
app.delete('/actors/:id', actors.deleteOne);
app.post('/actors/:id/movies', actors.addMovie);
app.delete('/actors/:id/movies/:mid', actors.deleteMovie);

//Movie endpoints
app.get('/movies', movies.getAll);
app.post('/movies', movies.createOne);
app.get('/movies/:id', movies.getOne);
app.put('/movies/:id', movies.updateOne);
app.delete('/movies/:id', movies.deleteOne);
app.post('/movies/:id/actors', movies.addActor);
app.delete('/movies/:id/actors/:aid', movies.deleteActor);

In Express 4, there is an alternative way to describe your routes. Routes that share a common URL but use a different HTTP verb can be grouped together as follows:

app.route('/actors')
  .get(actors.getAll)
  .post(actors.createOne);

app.route('/actors/:id')
  .get(actors.getOne)
  .put(actors.updateOne)
  .delete(actors.deleteOne);

app.post('/actors/:id/movies', actors.addMovie);
app.delete('/actors/:id/movies/:mid', actors.deleteMovie);

app.route('/movies')
  .get(movies.getAll)
  .post(movies.createOne);

app.route('/movies/:id')
  .get(movies.getOne)
  .put(movies.updateOne)
  .delete(movies.deleteOne);

app.post('/movies/:id/actors', movies.addActor);
app.delete('/movies/:id/actors/:aid', movies.deleteActor);

Whether you prefer it this way or not is up to you. At least now you have a choice!
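For either style to work, the actors and movies handler objects must be in scope where the routes are registered. In this project, they come from the files under src/routes; a sketch of the wiring (the file name and relative paths are assumptions based on the folder structure shown earlier) might look like this:

// src/lib/routes.js (hypothetical)
var actors = require('../routes/actors');
var movies = require('../routes/movies');

module.exports = function(app) {
  app.route('/actors')
    .get(actors.getAll)
    .post(actors.createOne);
  // ...the remaining registrations from the tables above
};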
We have not discussed the logic of the function being run for each endpoint. We will get to that shortly. Express allows us to easily CRUD our database objects, but how do we model our objects?

Object modeling with Mongoose

Mongoose is an object data modeling (ODM) library that allows you to define schemas for your data collections. You can find out more about Mongoose on the project website: http://mongoosejs.com/. To connect to a MongoDB instance using the mongoose variable, we first need to install Mongoose using npm with the --save flag. The --save flag automatically adds the module to your package.json with the latest version; thus, it is always recommended to install your modules with the --save flag. For modules that you only need locally (for example, Mocha), you can use the --save-dev flag. For this project, we create a new file, /src/lib/db.js, which requires Mongoose. The local connection to the MongoDB database is made in mongoose.connect as follows:

var mongoose = require('mongoose');

module.exports = function(app) {
  mongoose.connect('mongodb://localhost/movies', {
    mongoose: { safe: true }
  }, function(err) {
    if (err) {
      return console.log('Mongoose - connection error:', err);
    }
  });
  return mongoose;
};

In our movies database, we need separate schemas for actors and movies. As an example, we will go through object modeling in our actor database, /src/models/actor.js, by creating an actor schema as follows:

// /src/models/actor.js
var mongoose = require('mongoose');
var generateId = require('./plugins/generateId');

var actorSchema = new mongoose.Schema({
  id: {
    type: Number,
    required: true,
    index: { unique: true }
  },
  name: { type: String, required: true },
  birth_year: { type: Number, required: true },
  movies: [{ type: mongoose.Schema.ObjectId, ref: 'Movie' }]
});

actorSchema.plugin(generateId());

module.exports = mongoose.model('Actor', actorSchema);

Each actor has a unique id, a name, and a birth year. The entries also contain validators, such as the type and the boolean required value. The model is exported upon definition (module.exports) so that we can reuse it directly in the app. Alternatively, you could fetch each model through Mongoose using mongoose.model('Actor', actorSchema), but this would feel less explicitly coupled compared to our approach of directly requiring it. Similarly, we need a movie schema as well. We define the movie schema as follows:

// /src/models/movie.js
var movieSchema = new mongoose.Schema({
  id: {
    type: Number,
    required: true,
    index: { unique: true }
  },
  title: { type: String, required: true },
  year: { type: Number, required: true },
  actors: [{ type: mongoose.Schema.ObjectId, ref: 'Actor' }]
});

movieSchema.plugin(generateId());

module.exports = mongoose.model('Movie', movieSchema);

Generating unique IDs

In both our movie and actor schemas, we used a plugin called generateId(). While MongoDB automatically generates an ObjectID for each document using the _id field, we want to generate our own IDs that are more human readable and hence friendlier. We also would like to give the user the opportunity to select an id of their own choice. However, being able to choose an id can cause conflicts: if you were to choose an id that already exists, your POST request would be rejected. We should autogenerate an ID if the user does not pass one explicitly. Without this plugin, if either an actor or a movie is created without an explicit ID passed along by the user, the server would complain, since the ID is required.
We can create middleware for Mongoose that assigns an id before we persist the object, as follows:

// /src/models/plugins/generateId.js
module.exports = function() {
  return function generateId(schema) {
    schema.pre('validate', function(next, done) {
      var instance = this;
      var model = instance.model(instance.constructor.modelName);
      if (instance.id == null) {
        model.findOne().sort('-id').exec(function(err, maxInstance) {
          if (err) {
            return done(err);
          } else {
            // guard against an empty collection, where findOne() returns null
            var maxId = (maxInstance && maxInstance.id) || 0;
            instance.id = maxId + 1;
            done();
          }
        });
      } else {
        done();
      }
    });
  };
};

There are a few important notes about this code. See what we did to get the var model? This makes the plugin generic so that it can be applied to multiple Mongoose schemas. Notice that there are two callbacks available: next and done. The next variable passes the code to the next pre-validation middleware. That's something you would usually put at the bottom of the function, right after you make your asynchronous call. This is generally a good thing, since one of the advantages of asynchronous calls is that you can have many things running at the same time. However, in this case, we cannot call the next variable because it would conflict with our model definition of id being required. Thus, we just stick to using the done variable when the logic is complete. Another concern arises from the fact that MongoDB doesn't support transactions, which means you may have to account for this function failing in some edge cases. For example, if two calls to POST /actors happen at the same time, they will both have their IDs auto-incremented to the same value. Now that we have the code for our generateId() plugin, we require it in our actor and movie schemas as follows:

var generateId = require('./plugins/generateId');
actorSchema.plugin(generateId());

Validating your database

Each key in a Mongoose schema defines a property that is associated with a SchemaType. For example, in our actor.js schema, the actor's name key is associated with a string SchemaType. String, number, date, buffer, boolean, mixed, objectId, and array are all valid schema types. In addition to schema types, numbers have min and max validators and strings have enum and match validators. Validation occurs when a document is being saved (.save()) and will return an error object, containing type, path, and value properties, if the validation has failed.
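Validation can also be triggered directly, without saving, which is handy for seeing the shape of the error object. The following is a small sketch using a hypothetical throwaway schema (not part of the project):

var mongoose = require('mongoose');

var demoSchema = new mongoose.Schema({
  year: { type: Number, required: true }
});
var Demo = mongoose.model('Demo', demoSchema);

// No year supplied, so the required validator fails
new Demo({}).validate(function(err) {
  console.log(Object.keys(err.errors)); // [ 'year' ]
  console.log(err.errors['year'].path); // 'year'
});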
Extracting functions to reusable middleware

We can use our anonymous or named functions as middleware. To do so, we export our functions by calling module.exports in routes/actors.js and routes/movies.js. Let's take a look at our routes/actors.js file. At the top of this file, we require the Mongoose schemas we defined before:

var Actor = require('../models/actor');

This allows our variable Actor to access our MongoDB using mongo functions such as find(), create(), and update(). It will follow the schema defined in the file /models/actor. Since actors are in movies, we will also need to require the Movie schema to show this relationship:

var Movie = require('../models/movie');

Now that we have our schemas, we can begin defining the logic for the functions we described in the endpoints. For example, the endpoint GET /actors/:id will retrieve the actor with the corresponding ID from our database. Let's call this function getOne(). It is defined as follows:

getOne: function(req, res, next) {
  Actor.findOne({ id: req.params.id })
    .populate('movies')
    .exec(function(err, actor) {
      if (err) return res.status(400).json(err);
      if (!actor) return res.status(404).json();
      res.status(200).json(actor);
    });
},

Here, we use the mongo findOne() method to retrieve the actor with id: req.params.id. There are no joins in MongoDB, so we use the .populate() method to retrieve the movies the actor is in. The .populate() method will retrieve documents from a separate collection based on their ObjectId. This function will return a status 400 if something went wrong with our Mongoose driver, a status 404 if the actor with :id is not found, and, finally, a status 200 along with the JSON of the actor object if an actor is found. We define all the functions required for the actor endpoints in this file. The result is as follows:

// /src/routes/actors.js
var Actor = require('../models/actor');
var Movie = require('../models/movie');

module.exports = {
  getAll: function(req, res, next) {
    Actor.find(function(err, actors) {
      if (err) return res.status(400).json(err);
      res.status(200).json(actors);
    });
  },
  createOne: function(req, res, next) {
    Actor.create(req.body, function(err, actor) {
      if (err) return res.status(400).json(err);
      res.status(201).json(actor);
    });
  },
  getOne: function(req, res, next) {
    Actor.findOne({ id: req.params.id })
      .populate('movies')
      .exec(function(err, actor) {
        if (err) return res.status(400).json(err);
        if (!actor) return res.status(404).json();
        res.status(200).json(actor);
      });
  },
  updateOne: function(req, res, next) {
    Actor.findOneAndUpdate({ id: req.params.id }, req.body, function(err, actor) {
      if (err) return res.status(400).json(err);
      if (!actor) return res.status(404).json();
      res.status(200).json(actor);
    });
  },
  deleteOne: function(req, res, next) {
    Actor.findOneAndRemove({ id: req.params.id }, function(err) {
      if (err) return res.status(400).json(err);
      res.status(204).json();
    });
  },
  addMovie: function(req, res, next) {
    Actor.findOne({ id: req.params.id }, function(err, actor) {
      if (err) return res.status(400).json(err);
      if (!actor) return res.status(404).json();
      Movie.findOne({ id: req.body.id }, function(err, movie) {
        if (err) return res.status(400).json(err);
        if (!movie) return res.status(404).json();
        actor.movies.push(movie);
        actor.save(function(err) {
          if (err) return res.status(500).json(err);
          res.status(201).json(actor);
        });
      });
    });
  },
  deleteMovie: function(req, res, next) {
    Actor.findOne({ id: req.params.id }, function(err, actor) {
      if (err) return res.status(400).json(err);
      if (!actor) return res.status(404).json();
      actor.movies = [];
      actor.save(function(err) {
        if (err) return res.status(400).json(err);
        res.status(204).json(actor);
      });
    });
  }
};

For all of our movie endpoints, we need the same functions, but applied to the movie collection. After exporting these two files, we require them in app.js (/src/lib/app.js) by simply adding:

require('../routes/movies');
require('../routes/actors');

By exporting our functions as reusable middleware, we keep our code clean and can refer to the functions in our CRUD calls in the /routes folder.

Testing

Mocha is used as the test framework, along with should.js and supertest. supertest lets you test your HTTP assertions and API endpoints. The tests are placed in the root folder /test.
Tests are completely separate from any of the source code and are written to be readable in plain English, that is, you should be able to follow along with what is being tested just by reading through them. Well-written tests with good coverage can serve as a readme for the API, since they clearly describe the behavior of the entire app. The initial setup to test our movies API is the same for both /test/actors.js and /test/movies.js:

var should = require('should');
var assert = require('assert');
var request = require('supertest');
var app = require('../src/lib/app');

In test/actors.js, we test the basic CRUD operations: creating a new actor object, then retrieving, editing, and deleting the actor object. An example test for the creation of a new actor is shown as follows:

describe('Actors', function() {
  describe('POST actor', function() {
    it('should create an actor', function(done) {
      var actor = {
        'id': '1',
        'name': 'AxiomZen',
        'birth_year': '2012'
      };
      request(app)
        .post('/actors')
        .send(actor)
        .expect(201, done);
    });
  });
});

We can see that the tests are readable in plain English. We create a new POST request for a new actor to the database with the id of 1, name of AxiomZen, and birth_year of 2012. Then, we send the request with the .send() function. Similar tests are present for GET and DELETE requests, as given in the following code:

describe('GET actor', function() {
  it('should retrieve actor from db', function(done) {
    request(app)
      .get('/actors/1')
      .expect(200, done);
  });
});

describe('DELETE actor', function() {
  it('should remove an actor', function(done) {
    request(app)
      .delete('/actors/1')
      .expect(204, done);
  });
});

To test our PUT request, we will edit the name and birth_year of our first actor as follows:

describe('PUT actor', function() {
  it('should edit an actor', function(done) {
    var actor = {
      'name': 'ZenAxiom',
      'birth_year': '2011'
    };
    request(app)
      .put('/actors/1')
      .send(actor)
      .expect(200, done);
  });

  it('should have been edited', function(done) {
    request(app)
      .get('/actors/1')
      .expect(200)
      .end(function(err, res) {
        res.body.name.should.eql('ZenAxiom');
        res.body.birth_year.should.eql(2011);
        done();
      });
  });
});

The first part of the test modifies the actor's name and birth_year keys, sends a PUT request for /actors/1 (1 is the actor's id), and then saves the new information to the database. The second part of the test checks whether the database entry for the actor with id 1 has been changed. The name and birth_year values are checked against their expected values using .should.eql(). In addition to performing CRUD actions on the actor object, we can also perform these actions on the movies we add to each actor (associated by the actor's ID). The following snippet shows a test to add a new movie to our first actor (with the id of 1):

describe('POST /actors/:id/movies', function() {
  it('should successfully add a movie to the actor', function(done) {
    var movie = {
      'id': '1',
      'title': 'Hello World',
      'year': '2013'
    };
    request(app)
      .post('/actors/1/movies')
      .send(movie)
      .expect(201, done);
  });

  it('actor should have array of movies now', function(done) {
    request(app)
      .get('/actors/1')
      .expect(200)
      .end(function(err, res) {
        res.body.movies.should.eql(['1']);
        done();
      });
  });
});

The first part of the test creates a new movie object with id, title, and year keys and sends a POST request to add the movie to the actor with the id of 1. The second part of the test sends a GET request to retrieve the actor with the id of 1, whose record should now include an array containing the new movie.
We can similarly delete the movie entries, as illustrated in the actors.js test file:

describe('DELETE /actors/:id/movies/:movie_id', function() {
  it('should successfully remove a movie from actor', function(done) {
    request(app)
      .delete('/actors/1/movies/1')
      .expect(204, done);
  });

  it('actor should no longer have that movie id', function(done) {
    request(app)
      .get('/actors/1')
      .expect(200)
      .end(function(err, res) {
        res.body.movies.should.eql([]);
        done();
      });
  });
});

Again, this code snippet should look familiar to you. The first part tests that sending a DELETE request specifying the actor ID and movie ID will delete that movie entry. In the second part, we make sure that the entry no longer exists by submitting a GET request to view the actor's details, where no movies should be listed. In addition to ensuring that the basic CRUD operations work, we also test our schema validations. The following code tests to make sure two actors with the same ID cannot exist (IDs are specified as unique):

it('should not allow you to create duplicate actors', function(done) {
  var actor = {
    'id': '1',
    'name': 'AxiomZen',
    'birth_year': '2012'
  };
  request(app)
    .post('/actors')
    .send(actor)
    .expect(400, done);
});

We should expect code 400 (bad request) if we try to create an actor who already exists in the database. A similar set of tests is present in tests/movies.js. The function and outcome of each test should be evident now.

Summary

In this article, we created a basic API that connects to MongoDB and supports CRUD methods. You should now be able to set up an API complete with tests for any data, not just movies and actors! We hope you found that this article has laid a good foundation for the Express and API setup. To learn more about Express.js, the following books published by Packt Publishing (https://www.packtpub.com/) are recommended: Mastering Web Application Development with Express (https://www.packtpub.com/web-development/mastering-web-application-development-express) Advanced Express Web Application Development (https://www.packtpub.com/web-development/advanced-express-web-application-development) Resources for Article: Further resources on this subject: Metal API: Get closer to the bare metal with Metal API [Article] Building a Basic Express Site [Article] Introducing Sails.js [Article]

Developing Node.js Web Applications

Packt
18 Feb 2016
11 min read
Node.js is a platform that supports various types of applications, but the most popular kind is the development of web applications. Node's style of coding depends on the community to extend the platform through third-party modules; these modules are then built upon to create new modules, and so on. Companies and single developers around the globe are participating in this process by creating modules that wrap the basic Node APIs and deliver a better starting point for application development. (For more resources related to this topic, see here.) There are many modules to support web application development, but none as popular as the Connect module. The Connect module delivers a set of wrappers around the Node.js low-level APIs to enable the development of rich web application frameworks. To understand what Connect is all about, let's begin with a basic example of a Node web server. In your working folder, create a file named server.js, which contains the following code snippet:

var http = require('http');

http.createServer(function(req, res) {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello World');
}).listen(3000);

console.log('Server running at http://localhost:3000/');

To start your web server, use your command-line tool and navigate to your working folder. Then, use the node CLI tool to run the server.js file as follows:

$ node server

Now open http://localhost:3000 in your browser, and you'll see the Hello World response. So how does this work? In this example, the http module is used to create a small web server listening on the 3000 port. You begin by requiring the http module and use the createServer() method to return a new server object. The listen() method is then used to listen on the 3000 port. Notice the callback function that is passed as an argument to the createServer() method. The callback function gets called whenever there's an HTTP request sent to the web server. The server object will then pass the req and res arguments, which contain the information and functionality needed to send back an HTTP response. The callback function will then do the following two steps:

- First, it will call the writeHead() method of the response object. This method is used to set the response HTTP headers. In this example, it will set the Content-Type header value to text/plain. When responding with HTML, for instance, you would use text/html instead.
- Then, it will call the end() method of the response object. This method is used to finalize the response. The end() method takes a single string argument that it will use as the HTTP response body.

Another common way of writing this is to add a write() method before the end() method and then call the end() method, as follows:

res.write('Hello World');
res.end();

This simple application illustrates the Node coding style, where low-level APIs are used to simply achieve certain functionality. While this is a nice example, running a full web application using the low-level APIs will require you to write a lot of supplementary code to support common requirements. Fortunately, a company called Sencha has already created this scaffolding code for you in the form of a Node module called Connect.

Meet the Connect module

Connect is a module built to support interception of requests in a more modular approach. In the first web server example, you learned how to build a simple web server using the http module.
If you wish to extend this example, you'd have to write code that manages the different HTTP requests sent to your server, handles them properly, and responds to each request with the correct response. Connect creates an API exactly for that purpose. It uses a modular component called middleware, which allows you to simply register your application logic to predefined HTTP request scenarios. Connect middleware are basically callback functions that get executed when an HTTP request occurs. The middleware can then perform some logic, return a response, or call the next registered middleware. While you will mostly write custom middleware to support your application needs, Connect also includes some common middleware to support logging, static file serving, and more. The way a Connect application works is by using an object called the dispatcher. The dispatcher object handles each HTTP request received by the server and then decides, in a cascading way, the order of middleware execution. Picture two calls made to a Connect application: the first one handled by a custom middleware and the second handled by the static files middleware. Connect's dispatcher initiates the process, moving on to the next handler using the next() method, until it gets to a middleware responding with the res.end() method, which ends the request handling. Express is based on Connect's approach, so in order to understand how Express works, we'll begin by creating a Connect application. In your working folder, create a file named server.js that contains the following code snippet:

var connect = require('connect');
var app = connect();

app.listen(3000);

console.log('Server running at http://localhost:3000/');

As you can see, your application file is using the connect module to create a new web server. However, Connect isn't a core module, so you'll have to install it using npm. As you already know, there are several ways of installing third-party modules. The easiest one is to install it directly using the npm install command. To do so, use your command-line tool, navigate to your working folder, and execute the following command:

$ npm install connect

npm will install the connect module inside a node_modules folder, which will enable you to require it in your application file. To run your Connect web server, just use Node's CLI and execute the following command:

$ node server

Node will run your application, reporting the server status using the console.log() method. You can try reaching your application in the browser by visiting http://localhost:3000. However, you will get an empty response. This response means that there isn't any middleware registered to handle the GET HTTP request, which tells you two things: you've successfully managed to install and use the Connect module, and it's time for you to write your first Connect middleware.

Connect middleware

Connect middleware is just a JavaScript function with a unique signature.
Each middleware function is defined with the following three arguments:

- req: This is an object that holds the HTTP request information
- res: This is an object that holds the HTTP response information and allows you to set the response properties
- next: This is the next middleware function defined in the ordered set of Connect middleware

When you have a middleware defined, you'll just have to register it with the Connect application using the app.use() method. Let's revise the previous example to include your first middleware. Change your server.js file to look like the following code snippet:

var connect = require('connect');
var app = connect();

var helloWorld = function(req, res, next) {
  res.setHeader('Content-Type', 'text/plain');
  res.end('Hello World');
};

app.use(helloWorld);

app.listen(3000);

console.log('Server running at http://localhost:3000/');

Then, start your Connect server again by issuing the following command in your command-line tool:

$ node server

Try visiting http://localhost:3000 again. You will now get the Hello World response. Congratulations, you've just created your first Connect middleware! Let's recap. First, you added a middleware function named helloWorld(), which has three arguments: req, res, and next. In your middleware, you used the res.setHeader() method to set the response Content-Type header and the res.end() method to set the response text. Finally, you used the app.use() method to register your middleware with the Connect application.

Understanding the order of Connect middleware

One of Connect's greatest features is the ability to register as many middleware functions as you want. Using the app.use() method, you'll be able to set a series of middleware functions that will be executed in a row to achieve maximum flexibility when writing your application. Connect will then pass the next middleware function to the currently executing middleware function using the next argument. In each middleware function, you can decide whether to call the next middleware function or stop at the current one. Notice that each Connect middleware function will be executed in first-in-first-out (FIFO) order using the next arguments until there are no more middleware functions to execute or the next middleware function is not called. To understand this better, we will go back to the previous example and add a logger function that will log all the requests made to the server in the command line. To do so, go back to the server.js file and update it to look like the following code snippet:

var connect = require('connect');
var app = connect();

var logger = function(req, res, next) {
  console.log(req.method, req.url);
  next();
};

var helloWorld = function(req, res, next) {
  res.setHeader('Content-Type', 'text/plain');
  res.end('Hello World');
};

app.use(logger);
app.use(helloWorld);

app.listen(3000);

console.log('Server running at http://localhost:3000/');

In the preceding example, you added another middleware called logger(). The logger() middleware uses the console.log() method to simply log the request information to the console. Notice how the logger() middleware is registered before the helloWorld() middleware. This is important, as it determines the order in which each middleware is executed. Another thing to notice is the next() call in the logger() middleware, which is responsible for calling the helloWorld() middleware.
Removing the next() call would stop the execution of middleware functions at the logger() middleware, which means that the request would hang forever, as the response is never ended by calling the res.end() method. To test your changes, start your Connect server again by issuing the following command in your command-line tool:

$ node server

Then, visit http://localhost:3000 in your browser and notice the console output in your command-line tool.

Mounting Connect middleware

As you may have noticed, the middleware you registered responds to any request regardless of the request path. This does not comply with modern web application development, because responding to different paths is an integral part of all web applications. Fortunately, Connect middleware supports a feature called mounting, which enables you to determine which request path is required for the middleware function to get executed. Mounting is done by adding a path argument to the app.use() method. To understand this better, let's revisit our previous example. Modify your server.js file to look like the following code snippet:

var connect = require('connect');
var app = connect();

var logger = function(req, res, next) {
  console.log(req.method, req.url);
  next();
};

var helloWorld = function(req, res, next) {
  res.setHeader('Content-Type', 'text/plain');
  res.end('Hello World');
};

var goodbyeWorld = function(req, res, next) {
  res.setHeader('Content-Type', 'text/plain');
  res.end('Goodbye World');
};

app.use(logger);
app.use('/hello', helloWorld);
app.use('/goodbye', goodbyeWorld);

app.listen(3000);

console.log('Server running at http://localhost:3000/');

A few things have been changed in the previous example. First, you mounted the helloWorld() middleware to respond only to requests made to the /hello path. Then, you added another (a bit morbid) middleware called goodbyeWorld() that will respond to requests made to the /goodbye path. Notice how, as a logger should, we left the logger() middleware to respond to all the requests made to the server. Another thing you should be aware of is that any requests made to the base path will not be responded to by any middleware, because we mounted the helloWorld() middleware to a specific path. Connect is a great module that supports various features of common web applications. Connect middleware is super simple, as it is built with a JavaScript style in mind. It allows the endless extension of your application logic without breaking the nimble philosophy of the Node platform. While Connect is a great improvement over writing your web application infrastructure yourself, it deliberately lacks some basic features you're used to having in other web frameworks. The reason lies in one of the basic principles of the Node community: create your modules lean and let other developers build their modules on top of the module you created. The community is supposed to extend Connect with its own modules and create its own web infrastructures. In fact, one very energetic developer named TJ Holowaychuk did it better than most when he released a Connect-based web framework known as Express.
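As a small taste of where that leads, here is what the hello example looks like in Express (a minimal sketch; it assumes Express has been installed with npm install express):

var express = require('express');
var app = express();

// Express combines mounting and HTTP verbs in a single route definition
app.get('/hello', function(req, res) {
  res.send('Hello World');
});

app.listen(3000);

console.log('Server running at http://localhost:3000/');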
To learn more about Node.js and web development, the following books published by Packt Publishing (https://www.packtpub.com/) are recommended:

Node.js Design Patterns (https://www.packtpub.com/web-development/nodejs-design-patterns)
Web Development with MongoDB and Node.js (https://www.packtpub.com/web-development/web-development-mongodb-and-nodejs)

Further resources on this subject:

So, what is Node.js? [article]
AngularJS [article]
So, what is MongoDB? [article]
Understanding Material Design

Packt
18 Feb 2016
22 min read
Material can be thought of as something like smart paper. Like paper, it has surfaces and edges that reflect light and cast shadows, but unlike paper, material has properties that real paper does not, such as its ability to move, change its shape and size, and merge with other material. Despite this seemingly magical behavior, material should be treated like a physical object with a physicality of its own. Material can be seen as existing in a three-dimensional space, and it is this that gives its interfaces a reassuring sense of depth and structure. Hierarchies become obvious when it is instantly clear whether an object is above or below another. Based largely on age-old principles taken from color theory, animation, traditional print design, and physics, material design provides a virtual space where developers can use surface and light to create meaningful interfaces and movement to design intuitive user interactions.

(For more resources related to this topic, see here.)

Material properties

As mentioned in the introduction, material can be thought of as being bound by physical laws. There are things it can do and things it cannot. It can split apart and heal again, and change color and shape, but it cannot occupy the same space as another sheet of material or rotate around two of its axes. We will be dealing with these properties throughout the book, but it is a good idea to begin with a quick look at the things material can and can't do.

The third dimension is fundamental when it comes to material. This is what gives the user the illusion that they are interacting with something more tangible than a rectangle of light. The illusion is generated by the widening and softening of shadows beneath material that is closer to the user. Material exists in virtual space, but a space that, nevertheless, represents the real dimensions of a phone or tablet. The x axis can be thought of as existing between the top and bottom of the screen, the y axis between the right and left edges, and the z axis confined to the space between the back of the handset and the glass of the screen. It is for this reason that material should not rotate around the x or y axes, as this would break the illusion of a space inside the phone.

The basic laws of the physics of material are outlined in the following list:

- All material is 1 dp thick (along the z axis).
- Material is solid; only one sheet can exist in one place at a time, and material cannot pass through other material. For example, if a card needs to move past another, it must move over it.
- Elevation, or position along the z axis, is portrayed by shadow, with higher objects having wider, softer shadows.
- The z axis should be used to prompt interaction. For example, an action button rising up toward the user to demonstrate that it can be used to perform some action.
- Material does not fold or bend.
- Material cannot appear to rise higher than the screen surface.
- Material can grow and shrink along both x and y axes.
- Material can move along any axis.
- Material can be spontaneously created and destroyed, but this must not be without movement. The arrivals and departures of material components must be animated. For example, a card growing from the point that it was summoned from or sliding off the screen when dismissed.
- A sheet of material can split apart anywhere along the x or y axes, and join together again with its original partner or with other material.

This covers the basic rules of material behavior, but we have said nothing of its content.
If material can be thought of as smart paper, then its content can only be described as smart ink. The rules governing how ink behaves are a little simpler:

- Material content can be text, imagery, or any other form of visual digital content
- Content can be of any shape or color and behaves independently from its container material
- It cannot be displayed beyond the edges of its material container
- It adds nothing to the thickness (z axis) of the material it is displayed on

Setting up a development environment

The Android development environment consists mainly of two distinct components: the SDK, which provides the code libraries behind Android, and Android Studio, a powerful code editor that is used for constructing and testing applications for Android phones and tablets, Wear, TV, Auto, Glass, and Cardboard. Both these components can be downloaded as a single package from http://developer.android.com/sdk/index.html.

Installing Android Studio

The installation is very straightforward. Run the Android Studio bundle and follow the on-screen instructions, installing HAXM hardware acceleration if prompted, and selecting all SDK components, as shown here:

Android Studio is dependent on the Java JDK. If you have not previously installed it, this will be detected while you are installing Android Studio, and you will be prompted to download and install it. If for some reason it does not, it can be found at http://www.oracle.com/technetwork/java/javase/downloads/index.html, from where you should download the latest version. This is not quite the end of the installation process. There are still some SDK components that we will need to download manually before we can build our first app. As we will see next, this is done using the Android SDK Manager.

Configuring the Android SDK

People often refer to Android versions by name, such as Lollipop, or an identity number, such as 5.1.1. As developers, it makes more sense to use the API level, which in the case of Android 5.1.1 would be API level 22. The SDK provides a platform for every API level since API level 8 (Android 2.2). In this section, we will use the SDK Manager to take a closer look at Android platforms, along with the other tools included in the SDK.

Start a new Android Studio project or open an existing one with the minimum SDK at 21 or higher. You can then open the SDK Manager from the menu via Tools | Android | SDK Manager or the matching icon on the main toolbar. The Android SDK Manager can also be started as a standalone program. It can be found in the /Android/sdk directory, as can the Android Virtual Device (AVD) Manager.

As can be seen in the preceding screenshot, there are really three main sections in the SDK:

- A Tools folder
- A collection of platforms
- An Extras folder

All these require a closer look. The Tools directory contains exactly what it says, that is, tools. There are a handful of these, but the ones that will concern us are the SDK Manager that we are using now, and the AVD Manager that we will be using shortly to create a virtual device. Open the Tools folder. You should find the latest revisions of the SDK tools and the SDK Platform-tools already installed. If not, select these items, along with the latest Build-tools, that is, if they too have not been installed. These tools are often revised, and it is well worth it to regularly check the SDK Manager for updates. When it comes to the platforms themselves, it is usually enough to simply install the latest one.
This does not mean that these apps will not work on, or be available to, devices running older versions, as we can set a minimum SDK level when setting up a project, and along with the use of support libraries, we can bring material design to almost any Android device out there. If you open up the folder for the latest platform, you will see that some items have already been installed. Strictly speaking, the only things you need to install are the SDK platform itself and at least one system image. System images are copies of the hard drives of actual Android devices and are used with the AVD to create emulators. Which images you use will depend on your system and the form factors that you are developing for. In this book, we will be building apps for phones and tablets, so make sure you use one of these at least. Although they are not required to develop apps, the documentation and samples packages can be extremely useful. At the bottom of each platform folder are the Google APIs and corresponding system images. Install these if you are going to include Google services, such as Maps and Cloud, in your apps. You will also need to install the Google support libraries from the Extras directory, and this is what we will cover next.

The Extras folder contains various miscellaneous packages with a range of functions. The ones you are most likely to want to download are listed as follows:

- Android support libraries are invaluable extensions to the SDK that provide APIs that not only facilitate backwards compatibility, but also provide a lot of extra components and functions, and most importantly for us, the design library. As we are developing in Android Studio, we need only install the Android Support Repository, as this contains the Android Support Library and is designed for use with Android Studio.
- The Google Play services and Google Repository packages are required, along with the Google APIs mentioned a moment ago, to incorporate Google services into an application.
- You will most likely need the Google USB Driver if you are intending to test your apps on a real device. How to do this will be explained later in this chapter.
- The HAXM installer is invaluable if you have a recent Intel processor. Android emulators can be notoriously slow, and this hardware acceleration can make a noticeable difference.

Once you have downloaded your selected SDK components, depending on your system and/or project plans, you should have a list of installed packages similar to the one shown next:

The SDK is finally ready, and we can start developing material interfaces. All that is required now is a device to test it on. This can, of course, be done on an actual device, but generally speaking, we will need to test our apps on as many devices as possible. Being able to emulate Android devices allows us to do this.

Emulating Android devices

The AVD allows us to test our designs across the entire range of form factors. There are an enormous number of screen sizes, shapes, and densities around. It is vital that we get to test our apps on as many device configurations as possible. This is actually more important for design than it is for functionality. An app might operate perfectly well on an exceptionally small or narrow screen, but not look as good as we had wanted, making the AVD one of the most useful tools available to us. This section covers how to create a virtual device using the AVD Manager.
The AVD Manager can be opened from within Android Studio by navigating to Tools | Android | AVD Manager from the menu or the corresponding icon on the toolbar. Here, you should click on the Create Virtual Device... button. The easiest way to create an emulator is to simply pick a device definition from the list of hardware images and keep clicking on Next until you reach Finish. However, it is much more fun and instructive to either clone and edit an existing profile, or create one from scratch. Click on the New Hardware Profile button. This takes you to the Configure Hardware Profile window, where you will be able to create a virtual device from scratch, configuring everything from cameras and sensors to storage and screen resolution. When you are done, click on Finish and you will be returned to the hardware selection screen, where your new device will have been added.

As you will have seen from the Import Hardware Profiles button, it is possible to download system images for many devices not included with the SDK. Check the developer sections of device vendors' websites to see which models are available.

So far, we have only configured the hardware for our virtual device. We must now select all the software it will use. To do this, select the hardware profile you just created and press Next. In the following window, select one of the system images you installed earlier and press Next again. This takes us to the Verify Configuration screen, where the emulator can be fine-tuned. Most of these configurations can be safely left as they are, but you will certainly need to play with the scale when developing for high-density devices. It can also be very useful to be able to use a real SD card. Once you click on Finish, the emulator will be ready to run.

An emulator can be rotated through 90 degrees with left Ctrl + F12. The menu can be called with F2, and the back button with Esc. There are keyboard commands to emulate most physical buttons, such as call, power, and volume, and a complete list can be found at http://developer.android.com/tools/help/emulator.html.

Android emulators are notoriously slow, during both loading and operating, even on quite powerful machines. The Intel hardware accelerator we encountered earlier can make a significant difference. Between the two choices offered, the one that you use should depend on how often you need to open and close a particular emulator. More often than not, taking advantage of your GPU is the more helpful of the two. Apart from this built-in assistance, there are a few other things you can do to improve performance, such as setting lower pixel densities, increasing the device's memory, and creating virtual devices for lower API levels. If you are comfortable doing so, set up exclusions in your antivirus software for the Android Studio and SDK directories. There are several third-party emulators, such as Genymotion, that are not only faster, but also behave more like real devices. The slowness of Android emulators is not necessarily a big problem, as most early development needs only one device, and real devices suffer none of the performance issues found on emulators. As we shall see next, real devices can be connected to our development environment with very little effort.

Connecting a real device

Using an actual physical device to run and test applications does not have the flexibility that emulators provide, but it does have one or two advantages of its own.
Real devices are faster than any emulator, and you can test features unavailable to a virtual device, such as accessing sensors, and making and receiving calls. There are two steps involved in setting up a real phone or tablet. We need to set developer options on the handset and configure the USB connection with our development computer:

1. To enable developer options on your handset, navigate to Settings | About phone. Tap on Build number 7 times to enable Developer options, which will now be available from the previous screen. Open this to enable USB debugging and Allow mock locations.
2. Connect the device to your computer and check that it is connected as a Media device (MTP).

Your handset can now be used as a test device. Connect the device to your computer with a USB cable, start Android Studio, and open a project. Depending on your setup, it is quite possible that you are already connected. If not, you can install the Google USB Driver by following these steps:

1. From the Windows start menu, open the device manager.
2. Your handset can be found under Other Devices or Portable Devices. Open its Properties window and select the Driver tab.
3. Update the driver with the Google version, which can be found in the sdk\extras\google\usb_driver directory.

An application can be compiled and run from Android Studio by selecting Run 'app' from the Run menu, pressing Shift + F10, or clicking on the green play icon on the toolbar. Once the project has finished building, you will be asked to confirm your choice of device before the app loads and then opens on your handset.

With a fully set up development environment and devices to test on, we can now start taking a look at material design, beginning with the material theme that is included as the default in all SDKs with APIs of 21 or higher.

The material theme

Since API level 21 (Android 5.0), the material theme has been the built-in user interface. It can be utilized and customized, simplifying the building of material interfaces. However, it is more than just a new look; the material theme also provides the automatic touch feedback and transition animations that we associate with material design. To better understand Android themes and how to apply them, we need to understand how Android styles work, and a little about how screen components, such as buttons and text boxes, are defined.

Most individual screen components are referred to as widgets or views. Views that contain other views are called view groups, and they generally take the form of a layout, such as the relative layout we will use in a moment. An Android style is a set of graphical properties defining the appearance of a particular screen component. Styles allow us to define everything from font size and background color to padding, elevation, and much more. An Android theme is simply a style applied across a whole screen or application. The best way to understand how this works is to put it into action and apply a style to a working project. This will also provide a great opportunity to become more familiar with Android Studio.

Applying styles

Styles are defined as XML files and are stored in the resources (res) directory of Android Studio projects. So that we can apply different styles to a variety of platforms and devices, they are kept separate from the layout code. To see how this is done, start a new project, selecting a minimum SDK of 21 or higher, and using the blank activity template.
To the left of the editor is the project explorer pane. This is your access point to every branch of your project. Take a look at the activity_main.xml file, which would have been opened in the editor pane when the project was created. At the bottom of the pane, you will see a Text tab and a Design tab. It should be quite clear, from examining these, how the XML code defines a text box (TextView) nested inside a window (RelativeLayout).

Layouts can be created in two ways: textually and graphically. Usually, they are built using a combination of both techniques. In the design view, widgets can be dragged and dropped to form layout designs. Any changes made using the graphical interface are immediately reflected in the code, and experimenting with this is a fantastic way to learn how various widgets and layouts are put together. We will return to both these subjects in detail later on in the book, but for now, we will continue with styles and themes by defining a custom style for the text view in our Hello world app.

Open the res node in the project explorer; you can then right-click on the values node and select New | Values resource file from the menu. Call this file my_style and fill it out as follows:

<?xml version="1.0" encoding="utf-8"?>
<resources>
    <style name="myStyle">
        <item name="android:layout_width">match_parent</item>
        <item name="android:layout_height">wrap_content</item>
        <item name="android:elevation">4dp</item>
        <item name="android:gravity">center_horizontal</item>
        <item name="android:padding">8dp</item>
        <item name="android:background">#e6e6e6</item>
        <item name="android:textSize">32sp</item>
        <item name="android:textColor">#727272</item>
    </style>
</resources>

This style defines several graphical properties, most of which should be self-explanatory, with the possible exception of gravity, which here refers to how content is justified within the view. We will cover measurements and units later in the book, but for now, it is useful to understand dp and sp:

- Density-independent pixel (dp): Android runs on an enormous number of devices, with screen densities ranging from 120 dpi to 480 dpi and more. To simplify the process of developing for such a wide variety, Android uses a virtual pixel unit based on a 160 dpi screen. This allows us to develop for a particular screen size without having to worry about screen density.
- Scale-independent pixel (sp): This unit is designed to be applied to text. The reason it is scale-independent is because the actual text size on a user's device will depend on their font size settings.

To apply the style we just defined, open the activity_main.xml file (from res/layout, if you have closed it) and edit the TextView node so that it matches this:

<TextView
    style="@style/myStyle"
    android:text="@string/hello_world" />

The effects of applying this style can be seen immediately from the design tab or preview pane, and having seen how styles are applied, we can now go ahead and create a style to customize the material theme palette.

Customizing the material theme

One of the most useful features of the material theme is the way it can take a small palette made of only a handful of colors and incorporate these colors into every aspect of a UI. Text and cursor colors, the way things are highlighted, and even system features such as the status and navigation bars can be customized to give our apps brand colors and an easily recognizable look.
The use of color in material design is a topic in itself, and there are strict guidelines regarding color, shade, and text, and these will be covered in detail later in the book. For now, we will just look at how we can use a style to apply our own colors to a material theme. So as to keep our resources separate, and therefore easier to manage, we will define our palette in its own XML file. As we did earlier with the my_style.xml file, create a new values resource file in the values directory and call it colors. Complete the code as shown next:

<?xml version="1.0" encoding="utf-8"?>
<resources>
    <color name="primary">#FFC107</color>
    <color name="primary_dark">#FFA000</color>
    <color name="primary_light">#FFECB3</color>
    <color name="accent">#03A9F4</color>
    <color name="text_primary">#212121</color>
    <color name="text_secondary">#727272</color>
    <color name="icons">#212121</color>
    <color name="divider">#B6B6B6</color>
</resources>

In the gutter to the left of the code, you will see small, colored squares. Clicking on these will take you to a dialog with a color wheel and other color selection tools for quick color editing.

We are going to apply our style to the entire app, so rather than creating a separate file, we will include our style in the theme that was set up by the project template wizard when we started the project. This theme is called AppTheme, as can be seen by opening the res/values/styles.xml (v21) file. Edit the code in this file so that it looks like the following:

<?xml version="1.0" encoding="utf-8"?>
<resources>
    <style name="AppTheme" parent="android:Theme.Material.Light">
        <item name="android:colorPrimary">@color/primary</item>
        <item name="android:colorPrimaryDark">@color/primary_dark</item>
        <item name="android:colorAccent">@color/accent</item>
        <item name="android:textColorPrimary">@color/text_primary</item>
        <item name="android:textColor">@color/text_secondary</item>
    </style>
</resources>

Being able to set key colors, such as colorPrimary and colorAccent, allows us to incorporate our brand colors throughout the app, although the project template only shows us how we have changed the color of the status bar and app bar. Try adding radio buttons or text edit boxes to see how the accent color is applied. In the following figure, a timepicker replaces the original text view. The XML for this looks like the following lines:

<TimePicker
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:layout_alignParentBottom="true"
    android:layout_centerHorizontal="true" />

For now, it is not necessary to know all the color guidelines. Until we get to them, there is an online material color palette generator at http://www.materialpalette.com/ that lets you try out different palette combinations and download color XML files that can just be cut and pasted into the editor. With a complete and up-to-date development environment constructed, and a way to customize and adapt the material theme, we are now ready to look into how material-specific widgets, such as card views, are implemented.

Summary

The Android SDK, Android Studio, and AVD comprise a sophisticated development toolkit, and even setting them up is no simple task. But, with our tools in place, we were able to take a first look at one of material design's major components: the material theme. We have seen how themes and styles relate, and how to create and edit styles in XML.
Finally, we have touched on material palettes, and how to customize a theme to utilize our own brand colors across an app. With these basics covered, we can move on to explore material design further, and in the next chapter, we will look at layouts and material components in greater detail.

To learn more about material design, the following books published by Packt Publishing (https://www.packtpub.com/) are recommended:

Instant Responsive Web Design (https://www.packtpub.com/web-development/instant-responsive-web-design-instant)
Mobile Game Design Essentials (https://www.packtpub.com/game-development/mobile-game-design-essentials)

Further resources on this subject:

Speaking Java – Your First Game [article]
Metal API: Get closer to the bare metal with Metal API [article]
Looking Good – The Graphical Interface [article]
Introducing Object Oriented Programming with TypeScript

Packt
17 Feb 2016
13 min read
In this article, we will see how to group our functions in reusable components, such as classes or modules. This article will cover the following topics:

- SOLID principles
- Classes
- Association, aggregation, and composition
- Inheritance

(For more resources related to this topic, see here.)

SOLID principles

In the early days of software development, developers used to write code with procedural programming languages. In procedural programming languages, the programs follow a top-to-bottom approach and the logic is wrapped with functions. New styles of computer programming, such as modular programming or structured programming, emerged when developers realized that procedural computer programs could not provide them with the desired level of abstraction, maintainability, and reusability. The development community created a series of recommended practices and design patterns to improve the level of abstraction and reusability of procedural programming languages, but some of these guidelines required a certain level of expertise. In order to facilitate adherence to these guidelines, a new style of computer programming known as object-oriented programming (OOP) was created.

Developers quickly noticed some common OOP mistakes and came up with five rules that every OOP developer should follow to create a system that is easy to maintain and extend over time. These five rules are known as the SOLID principles. SOLID is an acronym introduced by Michael Feathers, which stands for the following principles:

- Single responsibility principle (SRP): This principle states that a software component (function, class, or module) should focus on one unique task (have only one responsibility).
- Open/closed principle (OCP): This principle states that software entities should be designed with application growth (new code) in mind (should be open to extension), but the application growth should require the fewest possible changes to the existing code (be closed for modification).
- Liskov substitution principle (LSP): This principle states that we should be able to replace a class in a program with another class as long as both classes implement the same interface. After replacing the class, no other changes should be required, and the program should continue to work as it did originally.
- Interface segregation principle (ISP): This principle states that we should split interfaces that are very large (general-purpose interfaces) into smaller and more specific ones (many client-specific interfaces) so that clients will only need to know about the methods that are of interest to them.
- Dependency inversion principle (DIP): This principle states that entities should depend on abstractions (interfaces) as opposed to depending on concretions (classes).

In this article, we will see how to write TypeScript code that adheres to these principles so that our applications are easy to maintain and extend over time.

Classes

In this section, we will look at some details and OOP concepts through examples. Let's start by declaring a simple class:

class Person {
  public name : string;
  public surname : string;
  public email : string;
  constructor(name : string, surname : string, email : string) {
    this.email = email;
    this.name = name;
    this.surname = surname;
  }
  greet() {
    alert("Hi!");
  }
}

var me : Person = new Person("Remo", "Jansen", "[email protected]");

We use classes to represent the type of an object or entity. A class is composed of a name, attributes, and methods.
The class in the preceding example is named Person and contains three attributes or properties (name, surname, and email) and two methods (constructor and greet). Class attributes are used to describe the object's characteristics, while class methods are used to describe its behavior. A constructor is a special method used by the new keyword to create instances (also known as objects) of our class. We have declared a variable named me, which holds an instance of the Person class. The new keyword uses the Person class's constructor to return an object whose type is Person.

A class should adhere to the single responsibility principle (SRP). The Person class in the preceding example represents a person, including all their characteristics (attributes) and behaviors (methods). Now, let's add some email validation logic:

class Person {
  public name : string;
  public surname : string;
  public email : string;
  constructor(name : string, surname : string, email : string) {
    this.surname = surname;
    this.name = name;
    if (this.validateEmail(email)) {
      this.email = email;
    } else {
      throw new Error("Invalid email!");
    }
  }
  validateEmail(email : string) {
    var re = /\S+@\S+\.\S+/;
    return re.test(email);
  }
  greet() {
    alert("Hi! I'm " + this.name + ". You can reach me at " + this.email);
  }
}

When an object doesn't follow the SRP and it knows too much (has too many properties) or does too much (has too many methods), we say that the object is a God object. The Person class here is a God object because we have added a method named validateEmail that is not really related to the Person class's behavior. Deciding which attributes and methods should or should not be part of a class is a relatively subjective decision. If we spend some time analyzing our options, we should be able to find a way to improve the design of our classes. We can refactor the Person class by declaring an Email class, responsible for email validation, and use it as an attribute in the Person class:

class Email {
  public email : string;
  constructor(email : string) {
    if (this.validateEmail(email)) {
      this.email = email;
    } else {
      throw new Error("Invalid email!");
    }
  }
  validateEmail(email : string) {
    var re = /\S+@\S+\.\S+/;
    return re.test(email);
  }
}

Now that we have an Email class, we can remove the responsibility of validating the emails from the Person class and update its email attribute to use the type Email instead of string:

class Person {
  public name : string;
  public surname : string;
  public email : Email;
  constructor(name : string, surname : string, email : Email) {
    this.email = email;
    this.name = name;
    this.surname = surname;
  }
  greet() {
    alert("Hi!");
  }
}

Making sure that a class has a single responsibility makes it easier to see what it does and how we can extend/improve it. We can further improve our Person and Email classes by increasing the level of abstraction of our classes. For example, when we use the Email class, we don't really need to be aware of the existence of the validateEmail method; so this method could be invisible from outside the Email class. As a result, the Email class would be much simpler to understand. When we increase the level of abstraction of an object, we can say that we are encapsulating the object's data and behavior. Encapsulation is also known as information hiding. For example, the Email class allows us to use emails without having to worry about email validation because the class will deal with it for us.
We can make this clearer by using access modifiers (public or private) to flag as private all the class attributes and methods that we want to abstract from the use of the Email class:

class Email {
  private email : string;
  constructor(email : string) {
    if (this.validateEmail(email)) {
      this.email = email;
    } else {
      throw new Error("Invalid email!");
    }
  }
  private validateEmail(email : string) {
    var re = /\S+@\S+\.\S+/;
    return re.test(email);
  }
  get() : string {
    return this.email;
  }
}

We can then simply use the Email class without needing to explicitly perform any kind of validation:

var email = new Email("[email protected]");

Interfaces

The feature that we will miss the most when developing large-scale web applications with JavaScript is probably interfaces. We have seen that following the SOLID principles can help us to improve the quality of our code, and writing good code is a must when working on a large project. The problem is that if we attempt to follow the SOLID principles with JavaScript, we will soon realize that without interfaces, we will never be able to write SOLID OOP code. Fortunately for us, TypeScript features interfaces.

Traditionally, in OOP, we say that a class can extend another class and implement one or more interfaces. An interface can extend one or more interfaces and cannot extend or implement a class. Wikipedia's definition of interfaces in OOP is as follows:

In object-oriented languages, the term interface is often used to define an abstract type that contains no data or code, but defines behaviors as method signatures.

Implementing an interface can be understood as signing a contract. The interface is a contract, and when we sign it (implement it), we must follow its rules. The interface rules are the signatures of the methods and properties, and we must implement them. We will see many examples of interfaces later in this article. In TypeScript, interfaces don't strictly follow this definition. The two main differences are that in TypeScript:

- An interface can extend another interface or class
- An interface can define data and behaviors as opposed to only behaviors

Association, aggregation, and composition

In OOP, classes can have some kind of relationship with each other. Now, we will take a look at the three different types of relationships between classes.

Association

We call association those relationships whose objects have an independent lifecycle and where there is no ownership between the objects. Let's take an example of a teacher and student. Multiple students can associate with a single teacher, and a single student can associate with multiple teachers, but both have their own lifecycles (both can be created and deleted independently); so when a teacher leaves the school, we don't need to delete any students, and when a student leaves the school, we don't need to delete any teachers.

Aggregation

We call aggregation those relationships whose objects have an independent lifecycle, but there is ownership, and child objects cannot belong to another parent object. Let's take an example of a cell phone and a cell phone battery. A single battery can belong to a phone, but if the phone stops working, and we delete it from our database, the phone battery will not be deleted because it may still be functional. So in aggregation, while there is ownership, objects have their own lifecycle.
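To make the distinction concrete, here is a minimal TypeScript sketch of the aggregation example above. The Battery and Phone classes and their property names are hypothetical, chosen only to show that the child object is created independently and survives its owner:

class Battery {
  constructor(public model : string) { }
}

class Phone {
  // The phone owns its battery, but does not create or destroy it.
  constructor(public battery : Battery) { }
}

var battery = new Battery("BL-5C");
var phone = new Phone(battery);

phone = null; // The phone is deleted, but the battery object lives on.
alert(battery.model); // "BL-5C"

Because the battery was constructed outside the phone and merely passed in, deleting the phone leaves the battery intact, which is exactly what separates aggregation from the composition we look at next.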
Composition

We use the term composition to refer to relationships whose objects don't have an independent lifecycle, and if the parent object is deleted, all child objects will also be deleted. Let's take an example of the relationship between questions and answers. Single questions can have multiple answers, and answers cannot belong to multiple questions. If we delete questions, answers will automatically be deleted. Objects with a dependent lifecycle (answers, in the example) are known as weak entities. Sometimes, it can be a complicated process to decide if we should use association, aggregation, or composition. This difficulty is caused in part because aggregation and composition are subsets of association, meaning they are specific cases of association.

Inheritance

One of the most fundamental object-oriented programming features is its capability to extend existing classes. This feature is known as inheritance and allows us to create a new class (child class) that inherits all the properties and methods from an existing class (parent class). Child classes can include additional properties and methods not available in the parent class. Let's return to our previously declared Person class. We will use the Person class as the parent class of a child class named Teacher:

class Person {
  public name : string;
  public surname : string;
  public email : Email;
  constructor(name : string, surname : string, email : Email) {
    this.name = name;
    this.surname = surname;
    this.email = email;
  }
  greet() {
    alert("Hi!");
  }
}

This example is included in the companion source code. Once we have a parent class in place, we can extend it by using the reserved keyword extends. In the following example, we declare a class called Teacher, which extends the previously defined Person class. This means that Teacher will inherit all the attributes and methods from its parent class:

class Teacher extends Person {
  teach() {
    alert("Welcome to class!");
  }
}

Note that we have also added a new method named teach to the Teacher class. If we create instances of the Person and Teacher classes, we will be able to see that both instances share the same attributes and methods, with the exception of the teach method, which is only available for the instance of the Teacher class:

var teacher = new Teacher("remo", "jansen", new Email("[email protected]"));
var me = new Person("remo", "jansen", new Email("[email protected]"));

me.greet();
teacher.greet();

me.teach(); // Error : Property 'teach' does not exist on type 'Person'
teacher.teach();

Sometimes, we will need a child class to provide a specific implementation of a method that is already provided by its parent class. We can use the reserved keyword super for this purpose. Imagine that we want to add a new attribute to list the teacher's subjects, and we want to be able to initialize this attribute through the teacher constructor. We will use the super keyword to explicitly reference the parent class constructor inside the child class constructor. We can also use the super keyword when we want to extend an existing method, such as greet. This OOP language feature, which allows a subclass or child class to provide a specific implementation of a method that is already provided by its parent classes, is known as method overriding:
class Teacher extends Person {
  public subjects : string[];
  constructor(name : string, surname : string, email : Email, subjects : string[]) {
    super(name, surname, email);
    this.subjects = subjects;
  }
  greet() {
    super.greet();
    alert("I teach " + this.subjects);
  }
  teach() {
    alert("Welcome to Maths class!");
  }
}

var teacher = new Teacher("remo", "jansen", new Email("[email protected]"), ["math", "physics"]);

We can declare a new class that inherits from a class that is already inheriting from another. In the following code snippet, we declare a class called SchoolPrincipal that extends the Teacher class, which extends the Person class:

class SchoolPrincipal extends Teacher {
  manageTeachers() {
    alert("We need to help students to get better results!");
  }
}

If we create an instance of the SchoolPrincipal class, we will be able to access all the properties and methods from its parent classes (SchoolPrincipal, Teacher, and Person):

var principal = new SchoolPrincipal("remo", "jansen", new Email("[email protected]"), ["math", "physics"]);

principal.greet();
principal.teach();
principal.manageTeachers();

It is not recommended to have too many levels in the inheritance tree. A class situated too deeply in the inheritance tree will be relatively complex to develop, test, and maintain. Unfortunately, we don't have a specific rule that we can follow when we are unsure whether we should increase the depth of the inheritance tree (DIT). We should use inheritance in such a way that it helps us to reduce the complexity of our application, and not the opposite. We should try to keep the DIT between 0 and 4, because a value greater than 4 would compromise encapsulation and increase complexity.

Summary

In this article, we saw how to work with classes and interfaces in depth. We were able to reduce the complexity of our application by using techniques such as encapsulation and inheritance.

To learn more about TypeScript, the following books published by Packt Publishing (https://www.packtpub.com/) are recommended:

TypeScript Essentials (https://www.packtpub.com/web-development/typescript-essentials)
Mastering TypeScript (https://www.packtpub.com/web-development/mastering-typescript)

Further resources on this subject:

Writing SOLID JavaScript code with TypeScript [article]
Introduction to TypeScript [article]
An Introduction to Mastering JavaScript Promises and Its Implementation in Angular.js [article]
Create Your First React Element

Packt
17 Feb 2016
22 min read
As many of you know, creating a simple web application today involves writing the HTML, CSS, and JavaScript code. The reason we use three different technologies is because we want to separate three different concerns:

- Content (HTML)
- Styling (CSS)
- Logic (JavaScript)

(For more resources related to this topic, see here.)

This separation works great for creating a web page because, traditionally, we had different people working on different parts of our web page: one person structured the content using HTML and styled it using CSS, and then another person implemented the dynamic behavior of various elements on that web page using JavaScript. It was a content-centric approach.

Today, we mostly don't think of a website as a collection of web pages anymore. Instead, we build web applications that might have only one web page, and that web page does not represent the layout for our content—it represents a container for our web application. Such a web application with a single web page is called (unsurprisingly) a Single Page Application (SPA). You might be wondering, how do we represent the rest of the content in a SPA? Surely, we need to create an additional layout using HTML tags? Otherwise, how does a web browser know what to render? These are all valid questions. Let's take a look at how it works in this article.

Once you load your web page in a web browser, it creates a Document Object Model (DOM) of that web page. A DOM represents your web page in a tree structure, and at this point, it reflects the structure of the layout that you created with only HTML tags. This is what happens regardless of whether you're building a traditional web page or a SPA. The difference between the two is what happens next. If you are building a traditional web page, then you would finish creating your web page's layout. On the other hand, if you are building a SPA, then you would need to start creating additional elements by manipulating the DOM with JavaScript. A web browser provides you with the JavaScript DOM API to do this. You can learn more about it at https://developer.mozilla.org/en-US/docs/Web/API/Document_Object_Model.

However, manipulating (or mutating) the DOM with JavaScript has two issues:

- Your programming style will be imperative if you decide to use the JavaScript DOM API directly. This programming style leads to a code base that is harder to maintain.
- DOM mutations are slow because they cannot be optimized for speed, unlike other JavaScript code.

Luckily, React solves both these problems for us.

Understanding virtual DOM

Why do we need to manipulate the DOM in the first place? Because our web applications are not static. They have a state represented by the user interface (UI) that a web browser renders, and that state can be changed when an event occurs. What kind of events are we talking about? There are two types of events that we're interested in:

- User events: When a user types, clicks, scrolls, resizes, and so on
- Server events: When an application receives data or an error from a server, among others

What happens while handling these events? Usually, we update the data that our application depends on, and that data represents a state of our data model. In turn, when a state of our data model changes, we might want to reflect this change by updating a state of our UI.
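To make the problem concrete, here is a small, hedged sketch of handling a user event by mutating the DOM directly with the JavaScript DOM API; the element ids are hypothetical, imagined to exist in the page's HTML:

var counterState = { likes: 0 };
// Hypothetical elements assumed to be present in index.html.
var button = document.getElementById('like-button');
var label = document.getElementById('like-count');

button.addEventListener('click', function () {
  // 1. A user event updates the state of the data model.
  counterState.likes += 1;
  // 2. We must remember to imperatively mutate the DOM to match.
  label.textContent = counterState.likes + ' likes';
});

Every place in the code that changes counterState must also remember to update every DOM node that depends on it, and this imperative bookkeeping is exactly what becomes hard to maintain as an application grows.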
Looks like what we want is a way of syncing two different states: the UI state and the data model state. We want one to react to the changes in the other and vice versa. How can we achieve this? One of the ways to sync your application's UI state with an underlying data model's state is two-way data binding. There are different types of two-way data binding. One of them is key-value observing (KVO), which is used in Ember.js, Knockout, Backbone, and iOS, among others. Another one is dirty checking, which is used in Angular.

Instead of two-way data binding, React offers a different solution called the virtual DOM. The virtual DOM is a fast, in-memory representation of the real DOM, and it's an abstraction that allows us to treat JavaScript and DOM as if they were reactive. Let's take a look at how it works:

1. Whenever the state of your data model changes, the virtual DOM and React will rerender your UI to a virtual DOM representation.
2. React then calculates the difference between the two virtual DOM representations: the previous virtual DOM representation that was computed before the data was changed and the current virtual DOM representation that was computed after the data was changed. This difference between the two virtual DOM representations is what actually needs to be changed in the real DOM.
3. React updates only what needs to be updated in the real DOM.

The process of finding a difference between the two representations of the virtual DOM and rerendering only the updated patches in a real DOM is fast. Also, the best part is, as a React developer, that you don't need to worry about what actually needs to be rerendered. React allows you to write your code as if you were rerendering the entire DOM every time your application's state changes.

If you would like to learn more about the virtual DOM, the rationale behind it, and how it can be compared to data binding, then I would strongly recommend that you watch this very informative talk by Pete Hunt from Facebook at https://www.youtube.com/watch?v=-DX3vJiqxm4.

Now that we've learnt about the virtual DOM, let's mutate a real DOM by installing React and creating our first React element.

Installing React

To start using the React library, we need to first install it. I am going to show you two ways of doing this: the simplest one and the one using the npm install command. The simplest way is to add the <script> tag to our ~/snapterest/build/index.html file.

For the development version of React, add the following:

<script src="https://cdnjs.cloudflare.com/ajax/libs/react/0.14.0-beta3/react.js"></script>

For the production version of React, add the following:

<script src="https://cdnjs.cloudflare.com/ajax/libs/react/0.14.0-beta3/react.min.js"></script>

For our project, we'll be using the development version of React. At the time of writing, the latest version of the React library is 0.14.0-beta3. Over time, React gets updated, so make sure you use the latest version that is available to you, unless it introduces breaking changes that are incompatible with the code samples provided in this article. Visit https://github.com/fedosejev/react-essentials to learn about any compatibility issues between the code samples and the latest version of React.

We all know that Browserify allows us to import all the dependency modules for our application using the require() function.
We'll be using require() to import the React library as well, which means that, instead of adding a <script> tag to our index.html, we'll be using the npm install command to install React. Navigate to the ~/snapterest/ directory and run this command:

npm install --save react@0.14.0-beta3 react-dom@0.14.0-beta3

Then, open the ~/snapterest/source/app.js file in your text editor and import the React and ReactDOM libraries into the React and ReactDOM variables, respectively:

var React = require('react');
var ReactDOM = require('react-dom');

The react package contains methods that are concerned with the key idea behind React, that is, describing what you want to render in a declarative way. On the other hand, the react-dom package offers methods that are responsible for rendering to the DOM. You can read more about why developers at Facebook think it's a good idea to separate the React library into two packages at https://facebook.github.io/react/blog/2015/07/03/react-v0.14-beta-1.html#two-packages.

Now we're ready to start using the React library in our project. Next, let's create our first React element!

Creating React Elements with JavaScript

We'll start by familiarizing ourselves with a fundamental React terminology. It will help us build a clear picture of what the React library is made of. This terminology will most likely update over time, so keep an eye on the official documentation at http://facebook.github.io/react/docs/glossary.html.

Just like the DOM is a tree of nodes, React's virtual DOM is a tree of React nodes. One of the core types in React is called ReactNode. It's a building block for a virtual DOM, and it can be any one of these core types:

- ReactElement: This is the primary type in React. It's a light, stateless, immutable, virtual representation of a DOM element.
- ReactText: This is a string or a number. It represents textual content, and it's a virtual representation of a text node in the DOM.

ReactElements and ReactTexts are ReactNodes. An array of ReactNodes is called a ReactFragment. You will see examples of all of these in this article. Let's start with an example of a ReactElement. Add the following code to your ~/snapterest/source/app.js file:

var reactElement = React.createElement('h1');
ReactDOM.render(reactElement, document.getElementById('react-application'));

Now your app.js file should look exactly like this:

var React = require('react');
var ReactDOM = require('react-dom');

var reactElement = React.createElement('h1');
ReactDOM.render(reactElement, document.getElementById('react-application'));

Navigate to the ~/snapterest/ directory and run Gulp's default task:

gulp

You will see the following output:

Starting 'default'...
Finished 'default' after 1.73 s

Navigate to the ~/snapterest/build/ directory, and open index.html in a web browser. You will see a blank web page. Open Developer Tools in your web browser and inspect the HTML markup for your blank web page. You should see this line, among others:

<h1 data-reactid=".0"></h1>

Well done! We've just created your first React element. Let's see exactly how we did it. The entry point to the React library is the React object. This object has a method called createElement() that takes three parameters: type, props, and children:

React.createElement(type, props, children);

Let's take a look at each parameter in more detail.

The type parameter

The type parameter can be either a string or a ReactClass:

- A string could be an HTML tag name such as 'div', 'p', 'h1', and so on. React supports all the common HTML tags and attributes.
  For a complete list of HTML tags and attributes supported by React, you can refer to http://facebook.github.io/react/docs/tags-and-attributes.html.
- A ReactClass is created via the React.createClass() method.

The type parameter describes how an HTML tag or a ReactClass is going to be rendered. In our example, we're rendering the h1 HTML tag.

The props parameter

The props parameter is a JavaScript object passed from a parent element to a child element (and not the other way around) with some properties that are considered immutable, that is, those that should not be changed. While creating DOM elements with React, we can pass the props object with properties that represent the HTML attributes such as class, style, and so on. For example, run the following commands:

var React = require('react');
var ReactDOM = require('react-dom');

var reactElement = React.createElement('h1', { className: 'header' });
ReactDOM.render(reactElement, document.getElementById('react-application'));

The preceding code will create an h1 HTML element with a class attribute set to header:

<h1 class="header" data-reactid=".0"></h1>

Notice that we name our property className rather than class. The reason is that the class keyword is reserved in JavaScript. If you use class as a property name, it will be ignored by React, and a helpful warning message will be printed on the web browser's console:

Warning: Unknown DOM property class. Did you mean className? Use className instead.

You might be wondering what this data-reactid=".0" attribute is doing in our h1 tag. We didn't pass it to our props object, so where did it come from? It is added and used by React to track the DOM nodes; it might be removed in a future version of React.

The children parameter

The children parameter describes what child elements this element should have, if any. A child element can be any type of ReactNode: a virtual DOM element represented by a ReactElement, a string or a number represented by a ReactText, or an array of other ReactNodes, which is also called a ReactFragment. Let's take a look at this example:

var React = require('react');
var ReactDOM = require('react-dom');

var reactElement = React.createElement('h1', { className: 'header' }, 'This is React');
ReactDOM.render(reactElement, document.getElementById('react-application'));

The following code will create an h1 HTML element with a class attribute and a text node, This is React:

<h1 class="header" data-reactid=".0">This is React</h1>

The h1 tag is represented by a ReactElement, while the This is React string is represented by a ReactText. Next, let's create a React element with a number of other React elements as its children:

var React = require('react');
var ReactDOM = require('react-dom');

var h1 = React.createElement('h1', { className: 'header', key: 'header' }, 'This is React');
var p = React.createElement('p', { className: 'content', key: 'content' }, "And that's how it works.");
var reactFragment = [ h1, p ];
var section = React.createElement('section', { className: 'container' }, reactFragment);

ReactDOM.render(section, document.getElementById('react-application'));

We've created three React elements: h1, p, and section. h1 and p both have child text nodes, "This is React" and "And that's how it works.", respectively. The section has a child that is an array of two ReactElements, h1 and p, called reactFragment. This is also an array of ReactNodes. Each ReactElement in the reactFragment array must have a key property that helps React to identify that ReactElement.
As a result, we get the following HTML markup:

<section class="container" data-reactid=".0">
  <h1 class="header" data-reactid=".0.$header">This is React</h1>
  <p class="content" data-reactid=".0.$content">And that's how it works.</p>
</section>

Now we understand how to create React elements. What if we want to create a number of React elements of the same type? Does it mean that we need to call React.createElement('type') over and over again for each element of the same type? We can, but we don't need to, because React provides us with a factory function called React.createFactory(). A factory function is a function that creates other functions. This is exactly what React.createFactory(type) does: it creates a function that produces a ReactElement of a given type. Consider the following example:

var React = require('react');
var ReactDOM = require('react-dom');

var listItemElement1 = React.createElement('li', { className: 'item-1', key: 'item-1' }, 'Item 1');
var listItemElement2 = React.createElement('li', { className: 'item-2', key: 'item-2' }, 'Item 2');
var listItemElement3 = React.createElement('li', { className: 'item-3', key: 'item-3' }, 'Item 3');

var reactFragment = [ listItemElement1, listItemElement2, listItemElement3 ];
var listOfItems = React.createElement('ul', { className: 'list-of-items' }, reactFragment);

ReactDOM.render(listOfItems, document.getElementById('react-application'));

The preceding example produces this HTML:

<ul class="list-of-items" data-reactid=".0">
  <li class="item-1" data-reactid=".0.$item-1">Item 1</li>
  <li class="item-2" data-reactid=".0.$item-2">Item 2</li>
  <li class="item-3" data-reactid=".0.$item-3">Item 3</li>
</ul>

We can simplify it by first creating a factory function:

var React = require('react');
var ReactDOM = require('react-dom');

var createListItemElement = React.createFactory('li');

var listItemElement1 = createListItemElement({ className: 'item-1', key: 'item-1' }, 'Item 1');
var listItemElement2 = createListItemElement({ className: 'item-2', key: 'item-2' }, 'Item 2');
var listItemElement3 = createListItemElement({ className: 'item-3', key: 'item-3' }, 'Item 3');

var reactFragment = [ listItemElement1, listItemElement2, listItemElement3 ];
var listOfItems = React.createElement('ul', { className: 'list-of-items' }, reactFragment);

ReactDOM.render(listOfItems, document.getElementById('react-application'));

In the preceding example, we're first calling the React.createFactory() function and passing a li HTML tag name as a type parameter. Then, the React.createFactory() function returns a new function that we can use as a convenient shorthand to create elements of type li. We store a reference to this function in a variable called createListItemElement. Then, we call this function three times, and each time we only pass the props and children parameters, which are unique for each element. Notice that React.createElement() and React.createFactory() both expect the HTML tag name string (such as li) or the ReactClass object as a type parameter. React provides us with a number of built-in factory functions to create the common HTML tags. You can call them from the React.DOM object; for example, React.DOM.ul(), React.DOM.li(), React.DOM.div(), and so on.
Using them, we can simplify our previous example even further: var React = require('react'); var ReactDOM = require('react-dom');   var listItemElement1 = React.DOM.li({ className: 'item-1', key: 'item-1' }, 'Item 1'); var listItemElement2 = React.DOM.li({ className: 'item-2', key: 'item-2' }, 'Item 2'); var listItemElement3 = React.DOM.li({ className: 'item-3', key: 'item-3' }, 'Item 3');   var reactFragment = [ listItemElement1, listItemElement2, listItemElement3 ]; var listOfItems = React.DOM.ul({ className: 'list-of-items' }, reactFragment);   ReactDOM.render(listOfItems, document.getElementById('react-application')); Now we know how to create a tree of ReactNodes. However, there is one important line of code that we need to discuss before we can progress further: ReactDOM.render(listOfItems, document.getElementById('react-application')); As you might have already guessed, it renders our ReactNode tree to the DOM. Let's take a closer look at how it works. Rendering React Elements The ReactDOM.render() method takes three parameters: ReactElement, a regular DOMElement, and a callback function: ReactDOM.render(ReactElement, DOMElement, callback); ReactElement is a root element in the tree of ReactNodes that you've created. A regular DOMElement is a container DOM node for that tree. The callback is a function executed after the tree is rendered or updated. It's important to note that if this ReactElement was previously rendered to a parent DOM Element, then ReactDOM.render() will perform an update on the already rendered DOM tree and only mutate the DOM as it is necessary to reflect the latest version of the ReactElement. This is why a virtual DOM requires fewer DOM mutations. So far, we've assumed that we're always creating our virtual DOM in a web browser. This is understandable because, after all, React is a user interface library, and all the user interfaces are rendered in a web browser. Can you think of a case when rendering a user interface on a client would be slow? Some of you might have already guessed that I am talking about the initial page load. The problem with the initial page load is the one I mentioned at the beginning of this article—we're not creating static web pages anymore. Instead, when a web browser loads our web application, it receives only the bare minimum HTML markup that is usually used as a container or a parent element for our web application. Then, our JavaScript code creates the rest of the DOM, but in order for it to do so it often needs to request extra data from the server. However, getting this data takes time. Once this data is received, our JavaScript code starts to mutate the DOM. We know that DOM mutations are slow. How can we solve this problem? The solution is somewhat unexpected. Instead of mutating the DOM in a web browser, we mutate it on a server. Just like we would with our static web pages. A web browser will then receive an HTML that fully represents a user interface of our web application at the time of the initial page load. Sounds simple, but we can't mutate the DOM on a server because it doesn't exist outside a web browser. Or can we? We have a virtual DOM that is just a JavaScript, and as you know using Node.js, we can run JavaScript on a server. So technically, we can use the React library on a server, and we can create our ReactNode tree on a server. The question is how can we render it to a string that we can send to a client? 
React has a method called ReactDOMServer.renderToString() just to do this:

var ReactDOMServer = require('react-dom/server');
ReactDOMServer.renderToString(ReactElement);

It takes a ReactElement as a parameter and renders it to its initial HTML. Not only is this faster than mutating a DOM on a client, but it also improves the Search Engine Optimization (SEO) of your web application. Speaking of generating static web pages, we can do this too with React:

var ReactDOMServer = require('react-dom/server');
ReactDOMServer.renderToStaticMarkup(ReactElement);

Similar to ReactDOMServer.renderToString(), this method also takes a ReactElement as a parameter and outputs an HTML string. However, it doesn't create the extra DOM attributes that React uses internally, so it produces shorter HTML strings that we can transfer over the wire quickly. Now you know not only how to create a virtual DOM tree using React elements, but you also know how to render it on both the client and the server. Our next question is whether we can do it in a quicker and more visual manner.

Creating React Elements with JSX

When we build our virtual DOM by constantly calling the React.createElement() method, it becomes quite hard to visually translate these multiple function calls into a hierarchy of HTML tags. Don't forget that, even though we're working with a virtual DOM, we're still creating a structural layout for our content and user interface. Wouldn't it be great to be able to visualize that layout easily by simply looking at our React code? JSX is an optional HTML-like syntax that allows us to create a virtual DOM tree without using the React.createElement() method. Let's take a look at the previous example that we created without JSX:

var React = require('react');
var ReactDOM = require('react-dom');

var listItemElement1 = React.DOM.li({ className: 'item-1', key: 'item-1' }, 'Item 1');
var listItemElement2 = React.DOM.li({ className: 'item-2', key: 'item-2' }, 'Item 2');
var listItemElement3 = React.DOM.li({ className: 'item-3', key: 'item-3' }, 'Item 3');

var reactFragment = [ listItemElement1, listItemElement2, listItemElement3 ];
var listOfItems = React.DOM.ul({ className: 'list-of-items' }, reactFragment);

ReactDOM.render(listOfItems, document.getElementById('react-application'));

Now let's translate it to JSX:

var React = require('react');
var ReactDOM = require('react-dom');

var listOfItems = <ul className="list-of-items">
                    <li className="item-1">Item 1</li>
                    <li className="item-2">Item 2</li>
                    <li className="item-3">Item 3</li>
                  </ul>;
ReactDOM.render(listOfItems, document.getElementById('react-application'));

As you can see, JSX allows us to write HTML-like syntax in our JavaScript code. More importantly, we can now clearly see what our HTML layout will look like once it's rendered. JSX is a convenience tool, and it comes with a price in the form of an additional transformation step. The transformation of the JSX syntax into valid JavaScript syntax must happen before our "invalid" JavaScript code is interpreted. We know that the babelify module transforms our JSX syntax into JavaScript.
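To get a feel for what that transformation produces, the following is roughly what the JSX list above compiles to; this is a sketch, as the exact output depends on the compiler version:

var listOfItems = React.createElement('ul', { className: 'list-of-items' },
  React.createElement('li', { className: 'item-1' }, 'Item 1'),
  React.createElement('li', { className: 'item-2' }, 'Item 2'),
  React.createElement('li', { className: 'item-3' }, 'Item 3')
);

In other words, JSX is simply a different spelling for the nested React.createElement() calls that we wrote by hand earlier.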
This transformation happens every time we run our default task from gulpfile.js:

gulp.task('default', function () {
  return browserify('./source/app.js')
        .transform(babelify)
        .bundle()
        .pipe(source('snapterest.js'))
        .pipe(gulp.dest('./build/'));
});

As you can see, the .transform(babelify) function call transforms JSX into JavaScript before bundling it with the other JavaScript code. To test our transformation, run this command: gulp

Then, navigate to the ~/snapterest/build/ directory, and open index.html in a web browser. You will see a list of three items. The React team has built an online JSX Compiler that you can use to test your understanding of how JSX works at http://facebook.github.io/react/jsx-compiler.html.

Using JSX might feel very unusual in the beginning, but it can become a very intuitive and convenient tool to use. The best part is that you can choose whether to use it or not. I found that JSX saves me development time, so I chose to use it in the project that we're building. If you choose not to use it, then I believe that you have learned enough in this article to be able to translate the JSX syntax into JavaScript code with the React.createElement() function calls. If you have a question about what we have discussed in this article, then you can refer to https://github.com/fedosejev/react-essentials and create a new issue.

Summary

We started this article by discussing the issues with single web page applications and how they can be addressed. Then, we learned what a virtual DOM is and how React allows us to build it. We also installed React and created our first React element using only JavaScript. We then learned how to render React elements in a web browser and on a server. Finally, we looked at a simpler way of creating React elements with JSX. Resources for Article: Further resources on this subject: Changing Views [article] Introduction to Akka [article] ECMAScript 6 Standard [article]
Developing a Basic Site with Node.js and Express

Packt
17 Feb 2016
21 min read
In this article, we will continue with the Express framework. It's one of the most popular frameworks available and is certainly a pioneering one. Express is still widely used and several developers use it as a starting point. (For more resources related to this topic, see here.)

Getting acquainted with Express

Express (http://expressjs.com/) is a web application framework for Node.js. It is built on top of Connect (http://www.senchalabs.org/connect/), which means that it implements middleware architecture. In the previous chapter, when exploring Node.js, we discovered the benefit of such a design decision: the framework acts as a plugin system. Thus, we can say that Express is suitable for not only simple but also complex applications because of its architecture. We may use only some of the popular types of middleware or add a lot of features and still keep the application modular. In general, most projects in Node.js perform two functions: run a server that listens on a specific port, and process incoming requests. Express is a wrapper for these two functionalities. The following is basic code that runs the server:

var http = require('http');
http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World\n');
}).listen(1337, '127.0.0.1');
console.log('Server running at http://127.0.0.1:1337/');

This is an example extracted from the official documentation of Node.js. As shown, we use the native module http and run a server on port 1337. There is also a request handler function, which simply sends the Hello World string to the browser. Now, let's implement the same thing but with the Express framework, using the following code:

var express = require('express');
var app = express();
app.get("/", function(req, res, next) {
  res.send("Hello world");
}).listen(1337);
console.log('Server running at http://127.0.0.1:1337/');

It's pretty much the same thing. However, we don't need to specify the response headers or add a new line at the end of the string because the framework does it for us. In addition, we have a bunch of middleware available, which will help us process the requests easily. Express is like a toolbox. We have a lot of tools to do the boring stuff, allowing us to focus on the application's logic and content. That's what Express is built for: saving time for the developer by providing ready-to-use functionalities.

Installing Express

There are two ways to install Express. We'll start with the simple one and then proceed to the more advanced technique. The simpler approach generates a template, which we may use to start writing the business logic directly. In some cases, this can save us time. From another viewpoint, if we are developing a custom application, we need to use custom settings. We can also use the boilerplate, which we get with the advanced technique; however, it may not work for us.

Using package.json

Express is like every other module. It has its own place in the packages register. If we want to use it, we need to add the framework in the package.json file. The ecosystem of Node.js is built on top of the Node Package Manager. It uses the JSON file to find out what we need and installs it in the current directory.
So, the content of our package.json file looks like the following code:

{
  "name": "projectname",
  "description": "description",
  "version": "0.0.1",
  "dependencies": {
    "express": "3.x"
  }
}

These are the required fields that we have to add. To be more accurate, we have to say that the mandatory fields are name and version. However, it is always good to add descriptions to our modules, particularly if we want to publish our work in the registry, where such information is extremely important. Otherwise, the other developers will not know what our library is doing. Of course, there are a bunch of other fields, such as contributors, keywords, or development dependencies, but we will stick to limited options so that we can focus on Express.

Once we have our package.json file placed in the project's folder, we have to call npm install in the console. By doing so, the package manager will create a node_modules folder and will store Express and its dependencies there. At the end of the command's execution, we will see something like the following screenshot: The first line shows us the installed version, and the following lines are the modules that Express depends on. Now, we are ready to use Express. If we type require('express'), Node.js will start looking for that library inside the local node_modules directory. Since we are not using absolute paths, this is normal behavior. If we miss running the npm install command, we will be prompted with Error: Cannot find module 'express'.

Using a command-line tool

There is a command-line instrument called express-generator. After we run npm install -g express-generator, we can use it like every other command in our terminal. If you use the framework in several projects, you will notice that some things are repeated. We can even copy and paste them from one application to another, and this is perfectly fine. We may even end up with our own boilerplate and can always start from there. The command-line version of Express does the same thing. It accepts a few arguments and, based on them, creates a skeleton for us to use. This can be very handy in some cases and will definitely save some time. Let's have a look at the available arguments:

-h, --help: This outputs usage information.
-V, --version: This shows the version of Express.
-e, --ejs: This argument adds the EJS template engine support. Normally, we need a library to deal with our templates. Writing pure HTML is not very practical. The default engine is set to JADE.
-H, --hogan: This argument enables Hogan (another template engine).
-c, --css: If we want to use a CSS preprocessor, this option lets us use LESS (short for Leaner CSS) or Stylus. The default is plain CSS.
-f, --force: This forces Express to operate on a nonempty directory.

Let's try to generate an Express application skeleton with LESS as a CSS preprocessor. We use the following command:

express --css less myapp

A new myapp folder is created with the file structure, as seen in the following screenshot: We still need to install the dependencies, so cd myapp && npm install is required. We will skip the explanation of the generated directories for now and will move on to the created app.js file.
It starts with initializing the module dependencies, as follows:

var express = require('express');
var path = require('path');
var favicon = require('static-favicon');
var logger = require('morgan');
var cookieParser = require('cookie-parser');
var bodyParser = require('body-parser');
var routes = require('./routes/index');
var users = require('./routes/users');
var app = express();

Our framework is express, and path is a native Node.js module. The middleware are favicon, logger, cookieParser, and bodyParser. The routes and users are custom-made modules, placed in folders local to the project. Similar to the Model-View-Controller (MVC) pattern, these are the controllers for our application. Immediately after, an app variable is created; this represents the Express library. We use this variable to configure our application. The script continues by setting some key-value pairs. The next code snippet defines the path to our views and the default template engine:

app.set('views', path.join(__dirname, 'views'));
app.set('view engine', 'jade');

The framework uses the methods set and get to define the internal properties. In fact, we may use these methods to define our own variables. If the value is a Boolean, we can replace set and get with enable and disable. For example, see the following code:

app.set('color', 'red');
app.get('color'); // red
app.enable('isAvailable');

The next code adds middleware to the framework. We can see the code as follows:

app.use(favicon());
app.use(logger('dev'));
app.use(bodyParser.json());
app.use(bodyParser.urlencoded());
app.use(cookieParser());
app.use(require('less-middleware')({ src: path.join(__dirname, 'public') }));
app.use(express.static(path.join(__dirname, 'public')));

The first middleware serves the favicon of our application. The second is responsible for the output in the console. If we remove it, we will not get information about the incoming requests to our server. The following is a simple output produced by logger:

GET / 200 554ms - 170b
GET /stylesheets/style.css 200 18ms - 110b

The json and urlencoded middleware are related to the data sent along with the request. We need them because they convert the information into an easy-to-use format. There is also a middleware for the cookies. It populates the request object, so we later have access to the required data. The generated app uses LESS as a CSS preprocessor, and we need to configure it by setting the directory containing the .less files. Finally, we define our static resources, which should be delivered by the server. These are just a few lines, but we've configured the whole application. We may remove or replace some of the modules, and the others will continue working. The next code in the file maps two defined routes to two different handlers, as follows:

app.use('/', routes);
app.use('/users', users);

If the user tries to open a missing page, Express still processes the request by forwarding it to the error handler, as follows:

app.use(function(req, res, next) {
  var err = new Error('Not Found');
  err.status = 404;
  next(err);
});

The framework suggests two types of error handling: one for the development environment and another for the production server. The difference is that the second one hides the stack trace of the error, which should be visible only to the developers of the application.
As we can see in the following code, we are checking the value of the env property and handling the error differently:

// development error handler
if (app.get('env') === 'development') {
  app.use(function(err, req, res, next) {
    res.status(err.status || 500);
    res.render('error', { message: err.message, error: err });
  });
}
// production error handler
app.use(function(err, req, res, next) {
  res.status(err.status || 500);
  res.render('error', { message: err.message, error: {} });
});

At the end, the app.js file exports the created Express instance, as follows:

module.exports = app;

To run the application, we need to execute node ./bin/www. The code requires app.js and starts the server, which by default listens on port 3000.

#!/usr/bin/env node
var debug = require('debug')('my-application');
var app = require('../app');
app.set('port', process.env.PORT || 3000);
var server = app.listen(app.get('port'), function() {
  debug('Express server listening on port ' + server.address().port);
});

The process.env declaration provides access to variables defined in the current development environment. If there is no PORT setting, Express uses 3000 as the value. The required debug module uses a similar approach to find out whether it has to show messages in the console.

Managing routes

The input of our application is the routes. The user visits our page at a specific URL and we have to map this URL to specific logic. In the context of Express, this can be done easily, as follows:

var controller = function(req, res, next) {
  res.send("response");
}
app.get('/example/url', controller);

We even have control over the HTTP method, that is, we are able to catch POST, PUT, or DELETE requests. This is very handy if we want to retain the address path but apply different logic. For example, see the following code:

var getUsers = function(req, res, next) {
  // ...
}
var createUser = function(req, res, next) {
  // ...
}
app.get('/users', getUsers);
app.post('/users', createUser);

The path is still the same, /users, but if we make a POST request to that URL, the application will try to create a new user. Otherwise, if the method is GET, it will return a list of all the registered members. There is also a method, app.all, which we can use to handle all the method types at once. We can see this method in the following code snippet:

app.all('/', serverHomePage);

There is something interesting about the routing in Express. We may pass not just one but many handlers. This means that we can create a chain of functions that correspond to one URL. For example, if we need to know whether the user is logged in, there is a module for that. We can add another method that validates the current user and attaches a variable to the request object, as follows:

var isUserLogged = function(req, res, next) {
  req.userLogged = Validator.isCurrentUserLogged();
  next();
}
var getUser = function(req, res, next) {
  if(req.userLogged) {
    res.send("You are logged in. Hello!");
  } else {
    res.send("Please log in first.");
  }
}
app.get('/user', isUserLogged, getUser);

The Validator class is a class that checks the current user's session. The idea is simple: we add another handler, which acts as an additional middleware. After performing the necessary actions, we call the next function, which passes the flow to the next handler, getUser. Because the request and response objects are the same for all the middleware functions, we have access to the userLogged variable. This is what makes Express really flexible.
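Because handlers are plain functions, the same middleware can be reused for as many routes as needed. The following sketch guards several pages with the isUserLogged function from the previous example; the showSettings and showMessages handlers are hypothetical and only illustrate the pattern:

app.get('/user', isUserLogged, getUser);
app.get('/settings', isUserLogged, showSettings); // showSettings is a hypothetical handler
app.get('/messages', isUserLogged, showMessages); // showMessages is a hypothetical handler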
There are a lot of great features available, but they are optional. At the end of this chapter, we will make a simple website that implements the same logic. Handling dynamic URLs and the HTML forms The Express framework also supports dynamic URLs. Let's say we have a separate page for every user in our system. The address to those pages looks like the following code: /user/45/profile Here, 45 is the unique number of the user in our database. It's of course normal to use one route handler for this functionality. We can't really define different functions for every user. The problem can be solved by using the following syntax: var getUser = function(req, res, next) { res.send("Show user with id = " + req.params.id); } app.get('/user/:id/profile', getUser); The route is actually like a regular expression with variables inside. Later, that variable is accessible in the req.params object. We can have more than one variable. Here is a slightly more complex example: var getUser = function(req, res, next) { var userId = req.params.id; var actionToPerform = req.params.action; res.send("User (" + userId + "): " + actionToPerform) } app.get('/user/:id/profile/:action', getUser); If we open http://localhost:3000/user/451/profile/edit, we see User (451): edit as a response. This is how we can get a nice looking, SEO-friendly URL. Of course, sometimes we need to pass data via the GET or POST parameters. We may have a request like http://localhost:3000/user?action=edit. To parse it easily, we need to use the native url module, which has few helper functions to parse URLs: var getUser = function(req, res, next) { var url = require('url'); var url_parts = url.parse(req.url, true); var query = url_parts.query; res.send("User: " + query.action); } app.get('/user', getUser); Once the module parses the given URL, our GET parameters are stored in the .query object. The POST variables are a bit different. We need a new middleware to handle that. Thankfully, Express has one, which is as follows: app.use(express.bodyParser()); var getUser = function(req, res, next) { res.send("User: " + req.body.action); } app.post('/user', getUser); The express.bodyParser() middleware populates the req.body object with the POST data. Of course, we have to change the HTTP method from .get to .post or .all. If we want to read cookies in Express, we may use the cookieParser middleware. Similar to the body parser, it should also be installed and added to the package.json file. The following example sets the middleware and demonstrates its usage: var cookieParser = require('cookie-parser'); app.use(cookieParser('optional secret string')); app.get('/', function(req, res, next){ var prop = req.cookies.propName }); Returning a response Our server accepts requests, does some stuff, and finally, sends the response to the client's browser. This can be HTML, JSON, XML, or binary data, among others. As we know, by default, every middleware in Express accepts two objects, request and response. The response object has methods that we can use to send an answer to the client. Every response should have a proper content type or length. Express simplifies the process by providing functions to set HTTP headers and sending content to the browser. In most cases, we will use the .send method, as follows: res.send("simple text"); When we pass a string, the framework sets the Content-Type header to text/html. It's great to know that if we pass an object or array, the content type is application/json. 
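For instance, combining a dynamic URL with an object response gives us a minimal JSON endpoint. The following is only a sketch; in a real application, the user data would come from a database instead of a literal object:

var getUserAsJSON = function(req, res, next) {
  // req.params.id comes from the dynamic URL segment
  res.send({ id: req.params.id, name: "John" }); // sent with the application/json content type
}
app.get('/api/user/:id', getUserAsJSON);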
If we develop an API, the response status code is probably going to be important for us. With Express, we are able to set it as in the following code snippet:

res.send(404, 'Sorry, we cannot find that!');

It's even possible to respond with a file from our hard disk. If we don't use the framework, we will need to read the file, set the correct HTTP headers, and send the content. However, Express offers the .sendfile method, which wraps all these operations as follows:

res.sendfile(__dirname + "/images/photo.jpg");

Again, the content type is set automatically; this time, it is based on the filename's extension. When building websites or applications with a user interface, we normally need to serve HTML. Sure, we can write it manually in JavaScript, but it's good practice to use a template engine. This means we save everything in external files and the engine reads the markup from there. It populates them with some data and, at the end, provides ready-to-show content. In Express, the whole process is summarized in one method, .render. However, to work properly, we have to instruct the framework regarding which template engine to use. We already talked about this in the beginning of this chapter. The following two lines of code set the path to our views and the template engine:

app.set('views', path.join(__dirname, 'views'));
app.set('view engine', 'jade');

Let's say we have the following template (/views/index.jade):

h1= title
p Welcome to #{title}

Express provides a method to serve templates. It accepts the path to the template, the data to be applied, and a callback. To render the previous template, we should use the following code:

res.render("index", {title: "Page title here"});

The HTML produced looks as follows:

<h1>Page title here</h1><p>Welcome to Page title here</p>

If we pass a third parameter, a function, we will have access to the generated HTML. However, it will not be sent as a response to the browser.

The example: a login system

We've seen the main features of Express. Now let's build something real. The next few pages present a simple website where users can read the content only if they are logged in. Let's start by setting up the application. We are going to use Express' command-line instrument. It should be installed using npm install -g express-generator. We create a new folder for the example, navigate to it via the terminal, and execute express --css less site. A new directory, site, will be created. If we go there and run npm install, Express will download all the required dependencies. As we saw earlier, by default, we have two routes and two controllers. To simplify the example, we will use only the first one: app.use('/', routes). Let's change the views/index.jade file content to the following HTML code:

doctype html
html
  head
    title= title
    link(rel='stylesheet', href='/stylesheets/style.css')
  body
    h1= title
    hr
    p That's a simple application using Express.

Now, if we run node ./bin/www and open http://127.0.0.1:3000, we will see the page. Jade uses indentation to parse our template, so we should not mix tabs and spaces. Otherwise, we will get an error. Next, we need to protect our content. We check whether the current user has a session created; if not, a login form is shown. It's the perfect time to create a new middleware. To use sessions in Express, install an additional module: express-session.
We need to open our package.json file and add the following line of code:

"express-session": "~1.0.0"

Once we do that, a quick run of npm install will bring the module to our application. All we have to do is use it. The following code goes to app.js:

var session = require('express-session');
app.use(session({ secret: 'app', cookie: { maxAge: 60000 }}));
var verifyUser = function(req, res, next) {
  if(req.session.loggedIn) {
    next();
  } else {
    res.send("show login form");
  }
}
app.use('/', verifyUser, routes);

Note that we changed the original app.use('/', routes) line. The session middleware is initialized and added to Express. The verifyUser function is called before the page rendering. It uses the req.session object, and checks whether there is a loggedIn variable defined and if its value is true. If we run the script again, we will see that the show login form text is shown for every request. This happens because no code sets the session variable the way we want it yet. We need a form where users can type their username and password. We will process the result of the form and, if the credentials are correct, the loggedIn variable will be set to true. Let's create a new Jade template, /views/login.jade:

doctype html
html
  head
    title= title
    link(rel='stylesheet', href='/stylesheets/style.css')
  body
    h1= title
    hr
    form(method='post')
      label Username:
      br
      input(type='text', name='username')
      br
      label Password:
      br
      input(type='password', name='password')
      br
      input(type='submit')

Instead of sending just text with res.send("show login form");, we should render the new template, as follows:

res.render("login", {title: "Please log in."});

We chose POST as the method for the form. So, we need to add the middleware that populates the req.body object with the user's data, as follows:

app.use(bodyParser());

Process the submitted username and password as follows:

var verifyUser = function(req, res, next) {
  if(req.session.loggedIn) {
    next();
  } else {
    var username = "admin", password = "admin";
    if(req.body.username === username && req.body.password === password) {
      req.session.loggedIn = true;
      res.redirect('/');
    } else {
      res.render("login", {title: "Please log in."});
    }
  }
}

The valid credentials are set to admin/admin. In a real application, we may need to access a database or get this information from another place. It's not really a good idea to place the username and password in the code; however, for our little experiment, it is fine. The previous code checks whether the passed data matches our predefined values. If everything is correct, it sets the session, after which the user is forwarded to the home page. Once you log in, you should be able to log out. Let's add a link for that just after the content on the index page (views/index.jade):

a(href='/logout') logout

Once users click on this link, they will be forwarded to a new page. We just need to create a handler for the new route, remove the session, and forward them to the index page where the login form is shown. Here is what our logout handler looks like:

// in app.js
var logout = function(req, res, next) {
  req.session.loggedIn = false;
  res.redirect('/');
}
app.all('/logout', logout);

Setting loggedIn to false is enough to make the session invalid. The redirect sends users to the same content page they came from. However, this time, the content is hidden and the login form pops up.

Summary

In this article, we learned about one of the most widely used Node.js frameworks, Express.
We discussed its fundamentals, how to set it up, and its main characteristics. The middleware architecture, which we mentioned in the previous chapter, is the base of the library and gives us the power to write complex but, at the same time, flexible applications. The example we used was a simple one. We required a valid session to provide page access. However, it illustrates the usage of the body parser middleware and the process of registering new routes. We also updated the Jade templates and saw the results in the browser. For more information on Node.js, refer to the following URLs: https://www.packtpub.com/web-development/instant-nodejs-starter-instant https://www.packtpub.com/web-development/learning-nodejs-net-developers https://www.packtpub.com/web-development/nodejs-essentials Resources for Article: Further resources on this subject: Writing a Blog Application with Node.js and AngularJS [article] Testing in Node and Hapi [article] Learning Node.js for Mobile Application Development [article]
"D-J-A-N-G-O... The D is silent." - Authentication in Django

Packt
17 Feb 2016
14 min read
The authentication module saves a lot of time in creating space for users. The following are the main advantages of this module: The main actions related to users are simplified (connection, account activation, and so on) Using this system ensures a certain level of security Access restrictions to pages can be done very easily (For more resources related to this topic, see here.) It's such a useful module that we have already used it without noticing. Indeed, access to the administration module is performed by the authentication module. The user we created during the generation of our database was the first user of the site. This article greatly alters the application we wrote earlier. At the end of this article, we will have: Modified our UserProfile model to make it compatible with the module Created a login page Modified the addition of developer and supervisor pages Added the restriction of access to connected users

How to use the authentication module

In this section, we will learn how to use the authentication module by making our application compatible with the module.

Configuring the Django application

There is normally nothing special to do for the authentication module to work in our TasksManager application. Indeed, by default, the module is enabled and allows us to use the administration module. However, it is possible to work on a site where the Django authentication module has been disabled. We will check whether the module is enabled. In the INSTALLED_APPS section of the settings.py file, we have to check the following line:

'django.contrib.auth',

Editing the UserProfile model

The authentication module has its own User model. This is also the reason why we have created a UserProfile model and not just User. It is a model that already contains some fields, such as username and password. The User model is defined in the Python33/Lib/site-packages/django/contrib/auth/models.py file. We will modify the UserProfile model in the models.py file, which will become the following:

class UserProfile(models.Model):
    user_auth = models.OneToOneField(User, primary_key=True)
    phone = models.CharField(max_length=20, verbose_name="Phone number", null=True, default=None, blank=True)
    born_date = models.DateField(verbose_name="Born date", null=True, default=None, blank=True)
    last_connexion = models.DateTimeField(verbose_name="Date of last connexion", null=True, default=None, blank=True)
    years_seniority = models.IntegerField(verbose_name="Seniority", default=0)
    def __str__(self):
        return self.user_auth.username

We must also add the following line in models.py:

from django.contrib.auth.models import User

In this new model, we have: Created a OneToOneField relationship with the User model we imported Deleted the fields that already exist in the User model The OneToOne relation means that for each record of the UserProfile model, there will be a record of the User model. In doing all this, we deeply modify the database. Given these changes and because the password is stored as a hash, we will not perform the migration with South. It is possible to keep all the data and do a migration with South, but we would have to develop specific code to copy the information from the UserProfile model to the User model. The code would also have to generate a hash for each password, so it would be long.
To reset South, we must do the following: Delete the TasksManager/migrations folder and all the files contained in this folder Delete the database.db file To use the migration system, we have to use the following commands: manage.py schemamigration TasksManager --initial manage.py syncdb –migrate After the deletion of the database, we must remove the initial data in create_developer.py. We must also delete the URL developer_detail and the following line in index.html: <a href="{% url "developer_detail" "2" %}">Detail second developer (The second user must be a developer)</a><br /> Adding a user The pages that allow you to add a developer and supervisor no longer work because they are not compatible with our recent changes. We will change these pages to integrate our style changes. The view contained in the create_supervisor.py file will contain the following code: from django.shortcuts import render from TasksManager.models import Supervisor from django import forms from django.http import HttpResponseRedirect from django.core.urlresolvers import reverse from django.contrib.auth.models import User def page(request): if request.POST: form = Form_supervisor(request.POST) if form.is_valid(): name = form.cleaned_data['name'] login = form.cleaned_data['login'] password = form.cleaned_data['password'] specialisation = form.cleaned_data['specialisation'] email = form.cleaned_data['email'] new_user = User.objects.create_user(username = login, email = email, password=password) # In this line, we create an instance of the User model with the create_user() method. It is important to use this method because it can store a hashcode of the password in database. In this way, the password cannot be retrieved from the database. Django uses the PBKDF2 algorithm to generate the hash code password of the user. new_user.is_active = True # In this line, the is_active attribute defines whether the user can connect or not. This attribute is false by default which allows you to create a system of account verification by email, or other system user validation. new_user.last_name=name # In this line, we define the name of the new user. new_user.save() # In this line, we register the new user in the database. new_supervisor = Supervisor(user_auth = new_user, specialisation=specialisation) # In this line, we create the new supervisor with the form data. We do not forget to create the relationship with the User model by setting the property user_auth with new_user instance. 
new_supervisor.save() return HttpResponseRedirect(reverse('public_empty')) else: return render(request, 'en/public/create_supervisor.html', {'form' : form}) else: form = Form_supervisor() form = Form_supervisor() return render(request, 'en/public/create_supervisor.html', {'form' : form}) class Form_supervisor(forms.Form): name = forms.CharField(label="Name", max_length=30) login = forms.CharField(label = "Login") email = forms.EmailField(label = "Email") specialisation = forms.CharField(label = "Specialisation") password = forms.CharField(label = "Password", widget = forms.PasswordInput) password_bis = forms.CharField(label = "Password", widget = forms.PasswordInput) def clean(self): cleaned_data = super (Form_supervisor, self).clean() password = self.cleaned_data.get('password') password_bis = self.cleaned_data.get('password_bis') if password and password_bis and password != password_bis: raise forms.ValidationError("Passwords are not identical.") return self.cleaned_data The create_supervisor.html template remains the same, as we are using a Django form. You can change the page() method in the create_developer.py file to make it compatible with the authentication module (you can refer to downloadable Packt code files for further help): def page(request): if request.POST: form = Form_inscription(request.POST) if form.is_valid(): name = form.cleaned_data['name'] login = form.cleaned_data['login'] password = form.cleaned_data['password'] supervisor = form.cleaned_data['supervisor'] new_user = User.objects.create_user(username = login, password=password) new_user.is_active = True new_user.last_name=name new_user.save() new_developer = Developer(user_auth = new_user, supervisor=supervisor) new_developer.save() return HttpResponse("Developer added") else: return render(request, 'en/public/create_developer.html', {'form' : form}) else: form = Form_inscription() return render(request, 'en/public/create_developer.html', {'form' : form}) We can also modify developer_list.html with the following content: {% extends "base.html" %} {% block title_html %} Developer list {% endblock %} {% block h1 %} Developer list {% endblock %} {% block article_content %} <table> <tr> <td>Name</td> <td>Login</td> <td>Supervisor</td> </tr> {% for dev in object_list %} <tr> <!-- The following line displays the __str__ method of the model. In this case it will display the username of the developer --> <td><a href="">{{ dev }}</a></td> <!-- The following line displays the last_name of the developer --> <td>{{ dev.user_auth.last_name }}</td> <!-- The following line displays the __str__ method of the Supervisor model. In this case it will display the username of the supervisor --> <td>{{ dev.supervisor }}</td> </tr> {% endfor %} </table> {% endblock %} Login and logout pages Now that you can create users, you must create a login page to allow the user to authenticate. We must add the following URL in the urls.py file: url(r'^connection$', 'TasksManager.views.connection.page', name="public_connection"), You must then create the connection.py view with the following code: from django.shortcuts import render from django import forms from django.contrib.auth import authenticate, login # This line allows you to import the necessary functions of the authentication module. def page(request): if request.POST: # This line is used to check if the Form_connection form has been posted. If mailed, the form will be treated, otherwise it will be displayed to the user. 
form = Form_connection(request.POST) if form.is_valid(): username = form.cleaned_data["username"] password = form.cleaned_data["password"] user = authenticate(username=username, password=password) # This line verifies that the username exists and the password is correct. if user: # In this line, the authenticate function returns None if authentication has failed, otherwise it returns an object that validates the condition. login(request, user) # In this line, the login() function allows the user to connect. else: return render(request, 'en/public/connection.html', {'form' : form}) else: form = Form_connection() return render(request, 'en/public/connection.html', {'form' : form}) class Form_connection(forms.Form): username = forms.CharField(label="Login") password = forms.CharField(label="Password", widget=forms.PasswordInput) def clean(self): cleaned_data = super(Form_connection, self).clean() username = self.cleaned_data.get('username') password = self.cleaned_data.get('password') if not authenticate(username=username, password=password): raise forms.ValidationError("Wrong login or password") return self.cleaned_data

You must then create the connection.html template with the following code:

{% extends "base.html" %}
{% block article_content %}
{% if user.is_authenticated %} <!-- This line checks whether the user is connected. -->
<h1>You are connected.</h1>
<p> Your email : {{ user.email }} <!-- In this line, if the user is connected, we display his/her e-mail address. --> </p>
{% else %} <!-- In this line, if the user is not connected, we display the login form. -->
<h1>Connexion</h1>
<form method="post" action="{{ public_connection }}">
{% csrf_token %}
<table> {{ form.as_table }} </table>
<input type="submit" class="button" value="Connection" />
</form>
{% endif %}
{% endblock %}

When the user logs in, Django will save his/her connection data in session variables. This example has allowed us to verify that checking the login and password is transparent to the user. Indeed, the authenticate() and login() methods allow the developer to save a lot of time. Django also provides convenient shortcuts for the developer, such as the user.is_authenticated attribute that checks whether the user is logged in. Users prefer when a logout link is present on the website, especially when connecting from a public computer. We will now create the logout page. First, we need to create the logout.py file with the following code:

from django.shortcuts import render
from django.contrib.auth import logout
def page(request):
    logout(request)
    return render(request, 'en/public/logout.html')

In the previous code, we imported the logout() function of the authentication module and used it with the request object. This function removes the user identifier from the request object and flushes their session data. When the user logs out, he/she needs to know that he/she has actually been disconnected. Let's create the following template in the logout.html file:

{% extends "base.html" %}
{% block article_content %}
<h1>You are not connected.</h1>
{% endblock %}

Restricting access to the connected members

When developers implement an authentication system, it's usually to limit access for anonymous users. In this section, we'll see two ways to control access to our web pages.

Restricting access in views

The authentication module provides simple ways to prevent anonymous users from accessing some pages. Indeed, there is a very convenient decorator to restrict access to a view. This decorator is called login_required.
In the example that follows, we will use the decorator to limit access to the page() view from the create_developer module in the following manner: First, we must import the decorator with the following line:

from django.contrib.auth.decorators import login_required

Then, we will add the decorator just before the declaration of the view:

@login_required
def page(request): # This line already exists. Do not copy it.

With the addition of these two lines, the page that lets you add a developer is only available to the logged-in users. If you try to access the page without being connected, you will realize that it is not very practical because the obtained page is a 404 error. To improve this, simply tell Django what the connection URL is by adding the following line in the settings.py file:

LOGIN_URL = 'public_connection'

With this line, if the user tries to access a protected page, he/she will be redirected to the login page. You may have noticed that if you're not logged in and you click the Create a developer link, the URL contains a parameter named next. This parameter contains the URL that the user tried to consult. The authentication module redirects the user to that page when he/she connects. To do this, we will modify the connection.py file we created. We change the line that imports the render() function so that it also imports the redirect() function:

from django.shortcuts import render, redirect

To redirect the user after they log in, we must add the following two lines after the line that contains the code login(request, user):

if request.GET.get('next') is not None:
    return redirect(request.GET['next'])

This system is very useful when the user session has expired and he/she wants to see a specific page.

Restricting access to URLs

The system that we have just seen cannot easily limit access to pages generated by CBVs. For this, we will use the same decorator, but this time in the urls.py file. We will add the following line to import the decorator:

from django.contrib.auth.decorators import login_required

We need to change the line that corresponds to the URL named create_project:

url(r'^create_project$', login_required(CreateView.as_view(model=Project, template_name="en/public/create_project.html", success_url = 'index')), name="create_project"),

The use of the login_required decorator is very simple and allows the developer to not waste too much time.

Summary

In this article, we modified our application to make it compatible with the authentication module. We created pages that allow the user to log in and log out. We then learned how to restrict access to some pages to the logged-in users. To learn more about Django, the following books published by Packt Publishing (https://www.packtpub.com/) are recommended: Django By Example (https://www.packtpub.com/web-development/django-example) Learning Django Web Development (https://www.packtpub.com/web-development/learning-django-web-development) Resources for Article: Further resources on this subject: Code Style in Django [Article] Adding a developer with Django forms [Article] Enhancing Your Blog with Advanced Features [Article]
Understanding PHP basics

Packt
17 Feb 2016
27 min read
In this article by Antonio Lopez Zapata, the author of the book Learning PHP 7, you need to understand not only the syntax of the language, but also its grammatical rules, that is, when and why to use each element of the language. Luckily, for you, some languages come from the same root. For example, Spanish and French are romance languages as they both evolved from spoken Latin; this means that these two languages share a lot of rules, and learning Spanish if you already know French is much easier. (For more resources related to this topic, see here.) Programming languages are quite the same. If you already know another programming language, it will be very easy for you to go through this chapter. If it is your first time though, you will need to understand from scratch all the grammatical rules, so it might take some more time. But fear not! We are here to help you in this endeavor. In this chapter, you will learn about these topics: PHP in web applications Control structures Functions PHP in web applications Even though the main purpose of this chapter is to show you the basics of PHP, doing so in a reference-manual way is not interesting enough. If we were to copy paste what the official documentation says, you might as well go there and read it by yourself. Instead, let's not forget the main purpose of this book and your main goal—to write web applications with PHP. We will show you how can you apply everything you are learning as soon as possible, before you get too bored. In order to do that, we will go through the journey of building an online bookstore. At the very beginning, you might not see the usefulness of it, but that is just because we still haven't seen all that PHP can do. Getting information from the user Let's start by building a home page. In this page, we are going to figure out whether the user is looking for a book or just browsing. How do we find this out? The easiest way right now is to inspect the URL that the user used to access our application and extract some information from there. Save this content as your index.php file: <?php $looking = isset($_GET['title']) || isset($_GET['author']); ?> <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <title>Bookstore</title> </head> <body> <p>You lookin'? <?php echo (int) $looking; ?></p> <p>The book you are looking for is</p> <ul> <li><b>Title</b>: <?php echo $_GET['title']; ?></li> <li><b>Author</b>: <?php echo $_GET['author']; ?></li> </ul> </body> </html> And now, access http://localhost:8000/?author=Harper Lee&title=To Kill a Mockingbird. You will see that the page is printing some of the information that you passed on to the URL. For each request, PHP stores in an array—called $_GET- all the parameters that are coming from the query string. Each key of the array is the name of the parameter, and its associated value is the value of the parameter. So, $_GET contains two entries: $_GET['author'] contains Harper Lee and $_GET['title'] contains To Kill a Mockingbird. On the first highlighted line, we are assigning a Boolean value to the $looking variable. If either $_GET['title'] or $_GET['author'] exists, this variable will be true; otherwise, false. Just after that, we close the PHP tag and then we start printing some HTML, but as you can see, we are actually mixing HTML with PHP code. Another interesting line here is the second highlighted line. We are printing the content of $looking, but before that, we cast the value. Casting means forcing PHP to transform a type of value to another one. 
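Casting is done by writing the desired type in parentheses before the value. The following short example, assuming we run it as a standalone script, shows some common casts:

<?php
$number = "42"; // parameters from the query string always arrive as strings
var_dump((int) $number); // int(42)
var_dump((bool) ""); // bool(false), as an empty string is considered false
var_dump((string) true); // string(1) "1"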
Casting a Boolean to an integer means that the resultant value will be 1 if the Boolean is true or 0 if the Boolean is false. As $looking is true since $_GET contains valid keys, the page shows 1. If we try to access the same page without sending any information, as in http://localhost:8000, the browser will show "You lookin'? 0". Depending on the settings of your PHP configuration, you will see two notice messages complaining that you are trying to access keys of the array that do not exist.

Casting versus type juggling

We already know that when PHP needs a specific type of variable, it will try to transform it, which is called type juggling. But PHP is quite flexible, so sometimes, you have to be the one specifying the type that you need. When printing something with echo, PHP tries to transform everything it gets into strings. Since the string version of the false Boolean is an empty string, this would not be useful for our application. Casting the Boolean to an integer first assures that we will see a value, even if it is just "0".

HTML forms

HTML forms are one of the most popular ways to collect information from users. They consist of a series of fields, called inputs in the HTML world, and a final submit button. In HTML, the form tag contains two attributes: action, which points to where the form will be submitted, and method, which specifies the HTTP method the form will use (GET or POST). Let's see how it works. Save the following content as login.html and go to http://localhost:8000/login.html:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Bookstore - Login</title>
</head>
<body>
    <p>Enter your details to login:</p>
    <form action="authenticate.php" method="post">
        <label>Username</label>
        <input type="text" name="username" />
        <label>Password</label>
        <input type="password" name="password" />
        <input type="submit" value="Login"/>
    </form>
</body>
</html>

This form contains two fields, one for the username and one for the password. You can see that they are identified by the name attribute. If you try to submit this form, the browser will show you a Page Not Found message, as it is trying to access http://localhost:8000/authenticate.php and the web server cannot find it. Let's create it then:

<?php
$submitted = !empty($_POST);
?>
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Bookstore</title>
</head>
<body>
    <p>Form submitted? <?php echo (int) $submitted; ?></p>
    <p>Your login info is</p>
    <ul>
        <li><b>username</b>: <?php echo $_POST['username']; ?></li>
        <li><b>password</b>: <?php echo $_POST['password']; ?></li>
    </ul>
</body>
</html>

As with $_GET, $_POST is an array that contains the parameters received by POST. In this piece of code, we are first asking whether that array is not empty (note the ! operator). Afterwards, we just display the information received, just as in index.php. Note that the keys of the $_POST array are the values of the name argument of each input field.

Control structures

So far, our files have been executed line by line. Due to that, we are getting some notices in some scenarios, such as when the array does not contain what we are looking for. Would it not be nice if we could choose which lines to execute? Control structures to the rescue! A control structure is like a traffic diversion sign. It directs the execution flow depending on some predefined conditions. There are different control structures, but we can categorize them into conditionals and loops.
A conditional allows us to choose whether to execute a statement or not. A loop will execute a statement as many times as you need. Let's take a look at each one of them.

Conditionals
A conditional evaluates a Boolean expression, that is, something that resolves to either true or false. If the expression is true, it will execute everything inside its block of code. A block of code is a group of statements enclosed by {}. Let's see how it works:

<?php
echo "Before the conditional.";
if (4 > 3) {
    echo "Inside the conditional.";
}
if (3 > 4) {
    echo "This will not be printed.";
}
echo "After the conditional.";

In this piece of code, we are using two conditionals. A conditional is defined by the keyword if followed by a Boolean expression in parentheses and by a block of code. If the expression is true, it will execute the block; otherwise, it will skip it.

You can increase the power of conditionals by adding the keyword else. This tells PHP to execute a block of code if the previous conditions were not satisfied. Let's see an example:

if (2 > 3) {
    echo "Inside the conditional.";
} else {
    echo "Inside the else.";
}

This will execute the code inside else as the condition of if was not satisfied.

Finally, you can also add an elseif keyword followed by another condition and block of code to continue asking PHP for more conditions. You can add as many elseif as you need after if. If you add else, it has to be the last one of the chain of conditions. Also keep in mind that as soon as PHP finds a condition that resolves to true, it will stop evaluating the rest of the conditions:

<?php
if (4 > 5) {
    echo "Not printed";
} elseif (4 > 4) {
    echo "Not printed";
} elseif (4 == 4) {
    echo "Printed.";
} elseif (4 > 2) {
    echo "Not evaluated.";
} else {
    echo "Not evaluated.";
}
if (4 == 4) {
    echo "Printed";
}

In this last example, the first condition that evaluates to true is the third one, 4 == 4. After that, PHP does not evaluate any more conditions until a new if starts. With this knowledge, let's try to clean up a bit of our application, executing statements only when needed. Copy this code to your index.php file:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Bookstore</title>
</head>
<body>
    <p>
    <?php
    if (isset($_COOKIE['username'])) {
        echo "You are " . $_COOKIE['username'];
    } else {
        echo "You are not authenticated.";
    }
    ?>
    </p>
    <?php if (isset($_GET['title']) && isset($_GET['author'])) { ?>
        <p>The book you are looking for is</p>
        <ul>
            <li><b>Title</b>: <?php echo $_GET['title']; ?></li>
            <li><b>Author</b>: <?php echo $_GET['author']; ?></li>
        </ul>
    <?php } else { ?>
        <p>You are not looking for a book?</p>
    <?php } ?>
</body>
</html>

In this new code, we are mixing conditionals and HTML code in two different ways. The first one opens a PHP tag and adds an if-else clause that will print whether we are authenticated or not with echo. No HTML is merged within the conditionals, which makes it clear. The second option—the second conditional block—shows an uglier solution, but this is sometimes necessary. When you have to print a lot of HTML code, echo is not that handy, and it is better to close the PHP tag, print all the HTML you need, and then open the tag again. You can do that even inside the code block of an if clause, as you can see in the code.

Mixing PHP and HTML
If you feel like the last file we edited looks rather ugly, you are right. Mixing PHP and HTML is confusing, and you have to avoid it by all means.
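A common way to honor that advice is to compute everything at the top of the file and leave only simple echo statements inside the HTML. A minimal sketch of the idea, reusing the authentication check from above:

<?php
// All the logic lives up here...
if (isset($_COOKIE['username'])) {
    $message = "You are " . $_COOKIE['username'];
} else {
    $message = "You are not authenticated.";
}
?>
<!-- ...so the HTML below stays almost PHP-free -->
<p><?php echo $message; ?></p>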
Let's edit our authenticate.php file too, as it is trying to access $_POST entries that might not be there. The new content of the file would be as follows:

<?php
$submitted = isset($_POST['username']) && isset($_POST['password']);
if ($submitted) {
    setcookie('username', $_POST['username']);
}
?>
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Bookstore</title>
</head>
<body>
    <?php if ($submitted): ?>
        <p>Your login info is</p>
        <ul>
            <li><b>username</b>: <?php echo $_POST['username']; ?></li>
            <li><b>password</b>: <?php echo $_POST['password']; ?></li>
        </ul>
    <?php else: ?>
        <p>You did not submit anything.</p>
    <?php endif; ?>
</body>
</html>

This code also contains conditionals, which we already know. We are setting a variable to know whether we've submitted a login or not and to set the cookies if we have. However, the if-else here shows you a new way of combining conditionals with HTML. This syntax tries to be more readable when working with HTML code, avoiding the use of {} and instead using : and endif. Both syntaxes are correct, and you should use the one that you consider more readable in each case.

Switch-case
Another control structure similar to if-else is switch-case. This structure evaluates only one expression and executes the block depending on its value. Let's see an example:

<?php
switch ($title) {
    case 'Harry Potter':
        echo "Nice story, a bit too long.";
        break;
    case 'Lord of the Rings':
        echo "A classic!";
        break;
    default:
        echo "Dunno that one.";
        break;
}

The switch-case takes an expression; in this case, a variable. It then defines a series of cases. When the case matches the current value of the expression, PHP executes the code inside it. As soon as PHP finds break, it will exit switch-case. In case none of the cases are suitable for the expression, if there is a default case, PHP will execute it, but this is optional.

You also need to know that breaks are mandatory if you want to exit switch-case. If you do not specify any, PHP will keep on executing statements, even if it encounters a new case. Let's see a similar example but without breaks:

<?php
$title = 'Twilight';
switch ($title) {
    case 'Harry Potter':
        echo "Nice story, a bit too long.";
    case 'Twilight':
        echo 'Uh...';
    case 'Lord of the Rings':
        echo "A classic!";
    default:
        echo "Dunno that one.";
}

If you test this code in your browser, you will see that it is printing "Uh...A classic!Dunno that one.". PHP found that the second case is valid so it executes its content. But as there are no breaks, it keeps on executing until the end. This might be the desired behavior sometimes, but not usually, so we need to be careful when using it!

Loops
Loops are control structures that allow you to execute certain statements several times—as many times as you need. You might use them on several different scenarios, but the most common one is when interacting with arrays. For example, imagine you have an array with elements but you do not know what is in it. You want to print all its elements so you loop through all of them. There are four types of loops. Each of them has their own use cases, but in general, you can transform one type of loop into another. Let's see them closely.

While
While is the simplest of the loops. It executes a block of code until the expression to evaluate returns false. Let's see one example:

<?php
$i = 1;
while ($i < 4) {
    echo $i . " ";
    $i++;
}

Here, we are defining a variable with the value 1. Then, we have a while clause in which the expression to evaluate is $i < 4.
This loop will execute the content of the block of code until that expression is false. As you can see, inside the loop we are incrementing the value of $i by 1 each time, so after three iterations, the loop will end. Check out the output of that script, and you will see "1 2 3". The last value printed is 3, so by that time, $i was 3. After that, we increased its value to 4, so when the while was evaluating whether $i < 4, the result was false.

Whiles and infinite loops
One of the most common problems with while loops is creating an infinite loop. If you do not add any code inside while, which updates any of the variables considered in the while expression so it can be false at some point, PHP will never exit the loop!

For
This is the most complex of the four loops. For defines an initialization expression, an exit condition, and the end-of-iteration expression. When PHP first encounters the loop, it executes what is defined as the initialization expression. Then, it evaluates the exit condition, and if it resolves to true, it enters the loop. After executing everything inside the loop, it executes the end-of-iteration expression. Once this is done, it will evaluate the exit condition again, going through the loop code and the end-of-iteration expression until it evaluates to false. As always, an example will help clarify this:

<?php
for ($i = 1; $i < 10; $i++) {
    echo $i . " ";
}

The initialization expression is $i = 1 and is executed only the first time. The exit condition is $i < 10, and it is evaluated at the beginning of each iteration. The end-of-iteration expression is $i++, which is executed at the end of each iteration. This example prints numbers from 1 to 9. Another more common usage of the for loop is with arrays:

<?php
$names = ['Harry', 'Ron', 'Hermione'];
for ($i = 0; $i < count($names); $i++) {
    echo $names[$i] . " ";
}

In this example, we have an array of names. As it is defined as a list, its keys will be 0, 1, and 2. The loop initializes the $i variable to 0, and it will iterate until the value of $i is no longer less than the number of elements in the array (3). On the first iteration $i is 0, on the second it will be 1, and on the third it will be 2. When $i is 3, it will not enter the loop as the exit condition evaluates to false. On each iteration, we are printing the content of the $i position of the array; hence, the result of this code will be all three names in the array.

Be careful with exit conditions
It is very common to set the wrong exit condition, especially with arrays. Remember that arrays start with 0 if they are a list, so an array of 3 elements will have entries 0, 1, and 2. Defining the exit condition as $i <= count($array) will cause an error in your code, as when $i is 3, it also satisfies the exit condition and the loop will try to access the key 3, which does not exist.

Foreach
The last, but not least, type of loop is foreach. This loop is exclusive for arrays, and it allows you to iterate an array entirely, even if you do not know its keys. There are two options for the syntax, as you can see in these examples:

<?php
$names = ['Harry', 'Ron', 'Hermione'];
foreach ($names as $name) {
    echo $name . " ";
}
foreach ($names as $key => $name) {
    echo $key . " -> " . $name . " ";
}

The foreach loop accepts an array; in this case, $names. It specifies a variable, which will contain the value of the entry of the array. You can see that we do not need to specify any end condition, as PHP will know when the array has been iterated.
Optionally, you can specify a variable that will contain the key of each iteration, as in the second loop. Foreach loops are also useful with maps, where the keys are not necessarily numeric. The order in which PHP will iterate the array will be the same order in which you inserted the content into the array.

Let's use some loops in our application. We want to show the available books on our home page. We have the list of books in an array, so we will have to iterate all of them with a foreach loop, printing some information from each one. Append the following code to the body tag in index.php:

<?php
endif;
$books = [
    [
        'title' => 'To Kill A Mockingbird',
        'author' => 'Harper Lee',
        'available' => true,
        'pages' => 336,
        'isbn' => 9780061120084
    ],
    [
        'title' => '1984',
        'author' => 'George Orwell',
        'available' => true,
        'pages' => 267,
        'isbn' => 9780547249643
    ],
    [
        'title' => 'One Hundred Years Of Solitude',
        'author' => 'Gabriel Garcia Marquez',
        'available' => false,
        'pages' => 457,
        'isbn' => 9785267006323
    ],
];
?>
<ul>
<?php foreach ($books as $book): ?>
    <li>
        <i><?php echo $book['title']; ?></i> - <?php echo $book['author']; ?>
        <?php if (!$book['available']): ?>
            <b>Not available</b>
        <?php endif; ?>
    </li>
<?php endforeach; ?>
</ul>

The foreach loop here uses the : notation, which is better when mixing it with HTML. It iterates the whole $books array, and for each book, it will print some information as an HTML list. Also note that we have a conditional inside a loop, which is perfectly fine. Of course, this conditional will be executed for each entry in the array, so you should keep the block of code of your loops as simple as possible.

Functions
A function is a reusable block of code that, given an input, performs some actions and optionally returns a result. You already know several predefined functions, such as empty, in_array, or var_dump. These functions come with PHP so you do not have to reinvent the wheel, but you can create your own very easily. You can define functions when you identify portions of your application that have to be executed several times or just to encapsulate some functionality.

Function declaration
Declaring a function means to write it down so that it can be used later. A function has a name, takes arguments, and has a block of code. Optionally, it can define what kind of value it is returning. The name of the function has to follow the same rules as variable names; that is, it has to start with a letter or underscore and can contain any letter, number, or underscore. It cannot be a reserved word. Let's see a simple example:

function addNumbers($a, $b)
{
    $sum = $a + $b;
    return $sum;
}
$result = addNumbers(2, 3);

Here, the function's name is addNumbers, and it takes two arguments: $a and $b. The block of code defines a new variable $sum that is the sum of both the arguments and then returns its content with return. In order to use this function, you just need to call it by its name, sending all the required arguments, as shown in the last line.

PHP does not support overloaded functions. Overloading refers to the ability of declaring two or more functions with the same name but different arguments. As you can see, you can declare the arguments without knowing what their types are, so PHP would not be able to decide which function to use.

Another important thing to note is the variable scope. We are declaring a $sum variable inside the block of code, so once the function ends, the variable will not be accessible any more.
This means that the scope of variables declared inside the function is just the function itself. Furthermore, if you had a $sum variable declared outside the function, it would not be affected at all, since the function cannot access that variable unless we send it as an argument.

Function arguments
A function gets information from outside via arguments. You can define any number of arguments—including 0. These arguments need at least a name so that they can be used inside the function, and there cannot be two arguments with the same name. When invoking the function, you need to send the arguments in the same order as we declared them.

A function may contain optional arguments; that is, you are not forced to provide a value for those arguments. When declaring the function, you need to provide a default value for those arguments, so in case the user does not provide a value, the function will use the default one:

function addNumbers($a, $b, $printResult = false)
{
    $sum = $a + $b;
    if ($printResult) {
        echo 'The result is ' . $sum;
    }
    return $sum;
}

$sum1 = addNumbers(1, 2);
$sum2 = addNumbers(3, 4, false);
$sum3 = addNumbers(5, 6, true); // it will print the result

This new function takes two mandatory arguments and an optional one. The default value is false, and it is used as a normal value inside the function. The function will print the result of the sum if the user provides true as the third argument, which happens only the third time that the function is invoked. For the first two times, $printResult is set to false.

The arguments that the function receives are just copies of the values that the user provided. This means that if you modify these arguments inside the function, it will not affect the original values. This feature is known as sending arguments by value. Let's see an example:

function modify($a)
{
    $a = 3;
}

$a = 2;
modify($a);
var_dump($a); // prints 2

We are declaring the $a variable with the value 2, and then we are calling the modify function, sending $a. The modify function modifies the $a argument, setting its value to 3. However, this does not affect the original value of $a, which remains 2, as you can see in the var_dump output. If what you want is to actually change the value of the original variable used in the invocation, you need to pass the argument by reference. To do that, you add & in front of the argument when declaring the function:

function modify(&$a)
{
    $a = 3;
}

Now, after invoking the modify function, $a will always be 3.

Arguments by value versus by reference
PHP allows both ways of passing arguments, and in fact, some native functions of PHP use arguments by reference—remember the array sorting functions; they did not return the sorted array; instead, they sorted the array provided. But using arguments by reference is a way of confusing developers. Usually, when someone uses a function, they expect a result, and they do not want their provided arguments to be modified. So, try to avoid it; people will be grateful!

The return statement
You can have as many return statements as you want inside your function, but PHP will exit the function as soon as it finds one. This means that if you have two consecutive return statements, the second one will never be executed. Still, having multiple return statements can be useful if they are inside conditionals. Add this function inside your functions.php file:

function loginMessage()
{
    if (isset($_COOKIE['username'])) {
        return "You are " . $_COOKIE['username'];
    } else {
        return "You are not authenticated.";
    }
}

Let's use it in your index.php file by replacing the first conditional block from earlier—note that to save some trees, I replaced most of the code that was not changed at all with //...:

//...
<body>
    <p><?php echo loginMessage(); ?></p>
    <?php if (isset($_GET['title']) && isset($_GET['author'])): ?>
//...

Additionally, you can omit the return statement if you do not want the function to return anything. In this case, the function will end once it reaches the end of the block of code.

Type hinting and return types
With the release of PHP 7, the language allows developers to be more specific about what functions get and return. You can—always optionally—specify the type of argument that the function needs (type hinting), and the type of result the function will return (return type). Let's first see an example:

<?php
declare(strict_types=1);

function addNumbers(int $a, int $b, bool $printSum): int
{
    $sum = $a + $b;
    if ($printSum) {
        echo 'The sum is ' . $sum;
    }
    return $sum;
}

addNumbers(1, 2, true);
addNumbers(1, '2', true); // it fails when strict_types is 1
addNumbers(1, 'something', true); // it always fails

This function states that the first two arguments need to be integers, the third one a Boolean, and that the result will be an integer. Now, you know that PHP has type juggling, so it can usually transform a value of one type to its equivalent value of another type; for example, the string 2 can be used as integer 2. To stop PHP from using type juggling with the arguments and results of functions, you can declare the strict_types directive as shown in the first line of the example. This directive has to be declared at the top of each file where you want to enforce this behavior. The three invocations work as follows:

The first invocation sends two integers and a Boolean, which is what the function expects. So, regardless of the value of strict_types, it will always work.
The second invocation sends an integer, a string, and a Boolean. The string has a valid integer value, so if PHP were allowed to use type juggling, the invocation would work just fine. But in this example, it fails because of the declaration on top of the file.
The third invocation will always fail as the something string cannot be transformed into a valid integer.

Let's try to use a function within our project. In our index.php file, we have a foreach loop that iterates the books and prints them. The code inside the loop is kind of hard to understand as it is mixing HTML with PHP, and there is a conditional too. Let's try to abstract the logic inside the loop into a function. First, add the following function to your functions.php file:

<?php
function printableTitle(array $book): string
{
    $result = '<i>' . $book['title'] . '</i> - ' . $book['author'];
    if (!$book['available']) {
        $result .= ' <b>Not available</b>';
    }
    return $result;
}

This file contains our functions. This one, printableTitle, takes an array representing a book and builds a string with a nice representation of the book in HTML. The code is the same as before, just encapsulated in a function. Now, index.php will have to include the functions.php file and then use the function inside the loop. Let's see how this is done:

<?php require_once 'functions.php' ?>
<!DOCTYPE html>
<html lang="en">
//...
?>
<ul>
<?php foreach ($books as $book): ?>
    <li><?php echo printableTitle($book); ?></li>
<?php endforeach; ?>
</ul>
//...

Well, now our loop looks way cleaner, right?
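While we are here, type hints combine naturally with the default parameters we saw earlier. A hedged sketch of another helper that could live in functions.php; formatPrice is our own hypothetical example, not from the book:

<?php
declare(strict_types=1);

// Hypothetical helper: typed amount, optional currency, and a return type
function formatPrice(float $amount, string $currency = 'USD'): string
{
    return $currency . ' ' . number_format($amount, 2);
}

echo formatPrice(12.5);        // USD 12.50
echo formatPrice(9.99, 'EUR'); // EUR 9.99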
And if we need to print the title of the book somewhere else, we can reuse printableTitle instead of duplicating code!

Summary
In this article, we went through all the basics of procedural PHP while writing simple examples in order to practice them. You now know how to use variables and arrays with control structures and functions, and how to get information from HTTP requests, among other things.
Types, Variables, and Function Techniques

Packt
16 Feb 2016
39 min read
This article is an introduction to the syntax used in the TypeScript language to apply strong typing to JavaScript. It is intended for readers that have not used TypeScript before, and covers the transition from standard JavaScript to TypeScript. We will cover the following topics in this article:

Basic types and type syntax: strings, numbers, and booleans
Inferred typing and duck-typing
Arrays and enums
The any type and explicit casting
Functions and anonymous functions
Optional and default function parameters
Argument arrays
Function callbacks and function signatures
Function scoping rules and overloads

Basic types
JavaScript variables can hold a number of data types, including numbers, strings, arrays, objects, functions, and more. The type of an object in JavaScript is determined by its assignment–so if a variable has been assigned a string value, then it will be of type string. This can, however, introduce a number of problems in our code.

JavaScript is not strongly typed
JavaScript objects and variables can be changed or reassigned on the fly. As an example of this, consider the following JavaScript code:

var myString = "test";
var myNumber = 1;
var myBoolean = true;

We start by defining three variables, named myString, myNumber and myBoolean. The myString variable is set to a string value of "test", and as such will be of type string. Similarly, myNumber is set to the value of 1, and is therefore of type number, and myBoolean is set to true, making it of type boolean. Now let's start assigning these variables to each other, as follows:

myString = myNumber;
myBoolean = myString;
myNumber = myBoolean;

We start by setting the value of myString to the value of myNumber (which is the numeric value of 1). We then set the value of myBoolean to the value of myString (which would now be the numeric value of 1). Finally, we set the value of myNumber to the value of myBoolean.

What is happening here, is that even though we started out with three different types of variables—a string, a number, and a boolean—we are able to reassign any of these variables to one of the other types. We can assign a number to a string, a string to a boolean, or a boolean to a number. While this type of assignment in JavaScript is legal, it shows that the JavaScript language is not strongly typed. This can lead to unwanted behaviour in our code. Parts of our code may be relying on the fact that a particular variable is holding a string, and if we inadvertently assign a number to this variable, our code may start to break in unexpected ways.

TypeScript is strongly typed
TypeScript, on the other hand, is a strongly typed language. Once you have declared a variable to be of type string, you can only assign string values to it. All further code that uses this variable must treat it as though it has a type of string. This helps to ensure that code that we write will behave as expected. While strong typing may not seem to be of any use with simple strings and numbers, it certainly does become important when we apply the same rules to objects, groups of objects, function definitions, and classes. If you have written a function that expects a string as the first parameter and a number as the second, you cannot be blamed if someone calls your function with a boolean as the first parameter and something else as the second.
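To make that concrete, here is a sketch of the kind of silent misbehaviour we mean; the padLeft function and its values are made up for illustration:

function padLeft(value, padding) {
    // The author intends padding to be a number of spaces
    return Array(padding + 1).join(" ") + value;
}

console.log(padLeft("Hello", 4));   // "    Hello"
console.log(padLeft("Hello", "4")); // "41Hello" - string concatenation, not padding

Nothing in plain JavaScript stops the second call, and nothing warns us that the result is garbage.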
JavaScript programmers have always relied heavily on documentation to understand how to call functions, and the order and type of the correct function parameters. But what if we could take all of this documentation and include it within the IDE? Then, as we write our code, our compiler could point out to us—automatically—that we were using objects and functions in the wrong way. Surely this would make us more efficient, more productive programmers, allowing us to generate code with fewer errors?

TypeScript does exactly that. It introduces a very simple syntax to define the type of a variable or a function parameter to ensure that we are using these objects, variables, and functions in the correct manner. If we break any of these rules, the TypeScript compiler will automatically generate errors, pointing us to the lines of code that are in error.

This is how TypeScript got its name. It is JavaScript with strong typing - hence TypeScript. Let's take a look at this very simple language syntax that enables the "Type" in TypeScript.

Type syntax
The TypeScript syntax for declaring the type of a variable is to include a colon (:), after the variable name, and then indicate its type. Consider the following TypeScript code:

var myString : string = "test";
var myNumber: number = 1;
var myBoolean : boolean = true;

This code snippet is the TypeScript equivalent of our preceding JavaScript code. We can now see an example of the TypeScript syntax for declaring a type for the myString variable. By including a colon and then the keyword string (: string), we are telling the compiler that the myString variable is of type string. Similarly, the myNumber variable is of type number, and the myBoolean variable is of type boolean. TypeScript has introduced the string, number and boolean keywords for each of these basic JavaScript types.

If we attempt to assign a value to a variable that is not of the same type, the TypeScript compiler will generate a compile-time error. Given the variables declared in the preceding code, the following TypeScript code will generate some compile errors:

myString = myNumber;
myBoolean = myString;
myNumber = myBoolean;

(TypeScript build errors when assigning incorrect types.)

The TypeScript compiler is generating compile errors, because we are attempting to mix these basic types. The first error is generated by the compiler because we cannot assign a number value to a variable of type string. Similarly, the second compile error indicates that we cannot assign a string value to a variable of type boolean. Again, the third error is generated because we cannot assign a boolean value to a variable of type number.

The strong typing syntax that the TypeScript language introduces means that we need to ensure that the types on the left-hand side of an assignment operator (=) are the same as the types on the right-hand side of the assignment operator. To fix the preceding TypeScript code, and remove the compile errors, we would need to do something similar to the following:

myString = myNumber.toString();
myBoolean = (myString === "test");
if (myBoolean) {
    myNumber = 1;
}

Our first line of code has been changed to call the .toString() function on the myNumber variable (which is of type number), in order to return a value that is of type string. This line of code, then, does not generate a compile error because both sides of the equal sign are of the same type.
Our second line of code has also been changed so that the right hand side of the assignment operator returns the result of a comparison, myString === "test", which will return a value of type boolean. The compiler will therefore allow this code, because both sides of the assignment resolve to a value of type boolean. The last line of our code snippet has been changed to only assign the value 1 (which is of type number) to the myNumber variable, if the value of the myBoolean variable is true.

Anders Hejlsberg describes this feature as "syntactic sugar". With a little sugar on top of comparable JavaScript code, TypeScript has enabled our code to conform to strong typing rules. Whenever you break these strong typing rules, the compiler will generate errors for your offending code.

Inferred typing
TypeScript also uses a technique called inferred typing, in cases where you do not explicitly specify the type of your variable. In other words, TypeScript will find the first usage of a variable within your code, figure out what type the variable is first initialized to, and then assume the same type for this variable in the rest of your code block. As an example of this, consider the following code:

var myString = "this is a string";
var myNumber = 1;
myNumber = myString;

We start by declaring a variable named myString, and assign a string value to it. TypeScript identifies that this variable has been assigned a value of type string, and will, therefore, infer any further usages of this variable to be of type string. Our second variable, named myNumber, has a number assigned to it. Again, TypeScript is inferring the type of this variable to be of type number. If we then attempt to assign the myString variable (of type string) to the myNumber variable (of type number) in the last line of code, TypeScript will generate a familiar error message:

error TS2011: Build: Cannot convert 'string' to 'number'

This error is generated because of TypeScript's inferred typing rules.

Duck-typing
TypeScript also uses a method called duck-typing for more complex variable types. Duck-typing means that if it looks like a duck, and quacks like a duck, then it probably is a duck. Consider the following TypeScript code:

var complexType = { name: "myName", id: 1 };
complexType = { id: 2, name: "anotherName" };

We start with a variable named complexType that has been assigned a simple JavaScript object with a name and id property. On our second line of code, we can see that we are re-assigning the value of this complexType variable to another object that also has an id and a name property. The compiler will use duck-typing in this instance to figure out whether this assignment is valid. In other words, if an object has the same set of properties as another object, then they are considered to be of the same type.

To further illustrate this point, let's see how the compiler reacts if we attempt to assign an object to our complexType variable that does not conform to this duck-typing:

var complexType = { name: "myName", id: 1 };
complexType = { id: 2 };
complexType = { name: "anotherName" };
complexType = { address: "address" };

The first line of this code snippet defines our complexType variable, and assigns to it an object that contains both an id and name property. From this point, TypeScript will use this inferred type on any value we attempt to assign to the complexType variable. On our second line of code, we are attempting to assign a value that has an id property but not the name property.
On the third line of code, we again attempt to assign a value that has a name property, but does not have an id property. On the last line of our code snippet, we have completely missed the mark. Compiling this code will generate the following errors:

error TS2012: Build: Cannot convert '{ id: number; }' to '{ name: string; id: number; }':
error TS2012: Build: Cannot convert '{ name: string; }' to '{ name: string; id: number; }':
error TS2012: Build: Cannot convert '{ address: string; }' to '{ name: string; id: number; }':

As we can see from the error messages, TypeScript is using duck-typing to ensure type safety. In each message, the compiler gives us clues as to what is wrong with the offending code – by explicitly stating what it is expecting. The complexType variable has both an id and a name property. To assign a value to the complexType variable, then, this value will need to have both an id and a name property. Working through each of these errors, TypeScript is explicitly stating what is wrong with each line of code.

Note that the following code will not generate any error messages:

var complexType = { name: "myName", id: 1 };
complexType = { name: "name", id: 2, address: "address" };

Again, our first line of code defines the complexType variable, as we have seen previously, with an id and a name property. Now, look at the second line of this example. The object we are using actually has three properties: name, id, and address. Even though we have added a new address property, the compiler will only check to see if our new object has both an id and a name. Because our new object has these properties, and will therefore match the original type of the variable, TypeScript will allow this assignment through duck-typing.

Inferred typing and duck-typing are powerful features of the TypeScript language – bringing strong typing to our code, without the need to use explicit typing, that is, a colon : and then the type specifier syntax.

Arrays
Besides the base JavaScript types of string, number, and boolean, TypeScript has two other data types: arrays and enums. Let's look at the syntax for defining arrays. An array is simply marked with the [] notation, similar to JavaScript, and each array can be strongly typed to hold a specific type, as seen in the code below:

var arrayOfNumbers: number[] = [1, 2, 3];
arrayOfNumbers = [3, 4, 5];
arrayOfNumbers = ["one", "two", "three"];

On the first line of this code snippet, we are defining an array named arrayOfNumbers, and further specify that each element of this array must be of type number. The second line then reassigns this array to hold some different numerical values. The last line of this snippet, however, will generate the following error message:

error TS2012: Build: Cannot convert 'string[]' to 'number[]':

This error message is warning us that the variable arrayOfNumbers is strongly typed to only accept values of type number. Our code tries to assign an array of strings to this array of numbers, and is therefore generating a compile error.

The any type
All this type checking is well and good, but JavaScript is flexible enough to allow variables to be mixed and matched. The following code snippet is actually valid JavaScript code:

var item1 = { id: 1, name: "item 1" };
item1 = { id: 2 };

Our first line of code assigns an object with an id property and a name property to the variable item1. The second line then re-assigns this variable to an object that has an id property but not a name property.
Unfortunately, as we have seen previously, TypeScript will generate a compile time error for the preceding code:

error TS2012: Build: Cannot convert '{ id: number; }' to '{ id: number; name: string; }'

TypeScript introduces the any type for such occasions. Specifying that an object has a type of any in essence relaxes the compiler's strict type checking. The following code shows how to use the any type:

var item1 : any = { id: 1, name: "item 1" };
item1 = { id: 2 };

Note how our first line of code has changed. We specify the type of the variable item1 to be of type : any so that our code will compile without errors. Without the type specifier of : any, the second line of code would normally generate an error.

Explicit casting
As with any strongly typed language, there comes a time where you need to explicitly specify the type of an object. An object can be cast to the type of another by using the < > syntax. This is not a cast in the strictest sense of the word; it is more of an assertion that is used at runtime by the TypeScript compiler. Any explicit casting that you use will be compiled away in the resultant JavaScript and will not affect the code at runtime. Let's modify our previous code snippet to use explicit casting:

var item1 = <any>{ id: 1, name: "item 1" };
item1 = { id: 2 };

Note that on the first line of this snippet, we have now replaced the : any type specifier on the left hand side of the assignment, with an explicit cast of <any> on the right hand side. This snippet of code is telling the compiler to explicitly cast, or to explicitly treat the { id: 1, name: "item 1" } object on the right-hand side as a type of any. So the item1 variable, therefore, also has the type of any (due to TypeScript's inferred typing rules). This then allows us to assign an object with only the { id: 2 } property to the variable item1 on the second line of code. This technique of using the < > syntax on the right hand side of an assignment, is called explicit casting.

While the any type is a necessary feature of the TypeScript language – its usage should really be limited as much as possible. It is a language shortcut that is necessary to ensure compatibility with JavaScript, but over-use of the any type will quickly lead to coding errors that will be difficult to find. Rather than using the type any, try to figure out the correct type of the object you are using, and then use this type instead. We use an acronym within our programming teams: S.F.I.A.T. (pronounced sviat or sveat). Simply Find an Interface for the Any Type. While this may sound silly – it brings home the point that the any type should always be replaced with an interface – so simply find it. Just remember that by actively trying to define what an object's type should be, we are building strongly typed code, and therefore protecting ourselves from future coding errors and bugs.

Enums
Enums are a special type that has been borrowed from other languages such as C#, and provide a solution to the problem of special numbers. An enum associates a human-readable name with a specific number. Consider the following code:

enum DoorState {
    Open,
    Closed,
    Ajar
}

In this code snippet, we have defined an enum called DoorState to represent the state of a door. Valid values for this door state are Open, Closed, or Ajar. Under the hood (in the generated JavaScript), TypeScript will assign a numeric value to each of these human-readable enum values. In this example, the DoorState.Open enum value will equate to a numeric value of 0.
Likewise, the enum value DoorState.Closed will equate to the numeric value of 1, and the DoorState.Ajar enum value will equate to 2. Let's have a quick look at how we would use these enum values:

window.onload = () => {
    var myDoor = DoorState.Open;
    console.log("My door state is " + myDoor.toString());
};

The first line within the window.onload function creates a variable named myDoor, and sets its value to DoorState.Open. The second line simply logs the value of myDoor to the console. The output of this console.log function would be:

My door state is 0

This clearly shows that the TypeScript compiler has substituted the enum value of DoorState.Open with the numeric value 0. Now let's use this enum in a slightly different way:

window.onload = () => {
    var openDoor = DoorState["Closed"];
    console.log("My door state is " + openDoor.toString());
};

This code snippet uses a string value of "Closed" to lookup the enum type, and assign the resulting enum value to the openDoor variable. The output of this code would be:

My door state is 1

This sample clearly shows that the enum value of DoorState.Closed is the same as the enum value of DoorState["Closed"], because both variants resolve to the numeric value of 1. Finally, let's have a look at what happens when we reference an enum using an array type syntax:

window.onload = () => {
    var ajarDoor = DoorState[2];
    console.log("My door state is " + ajarDoor.toString());
};

Here, we assign the variable ajarDoor to an enum value based on the 2nd index value of the DoorState enum. The output of this code, though, is surprising:

My door state is Ajar

You may have been expecting the output to be simply 2, but here we are getting the string "Ajar" – which is a string representation of our original enum name. This is actually a neat little trick – allowing us to access a string representation of our enum value. The reason that this is possible is down to the JavaScript that has been generated by the TypeScript compiler. Let's have a look, then, at the closure that the TypeScript compiler has generated:

var DoorState;
(function (DoorState) {
    DoorState[DoorState["Open"] = 0] = "Open";
    DoorState[DoorState["Closed"] = 1] = "Closed";
    DoorState[DoorState["Ajar"] = 2] = "Ajar";
})(DoorState || (DoorState = {}));

This strange looking syntax is building an object that has a specific internal structure. It is this internal structure that allows us to use this enum in the various ways that we have just explored. If we interrogate this structure while debugging our JavaScript, we will see the internal structure of the DoorState object is as follows:

DoorState {...}
    [prototype]: {...}
    [0]: "Open"
    [1]: "Closed"
    [2]: "Ajar"
    [prototype]: []
    Ajar: 2
    Closed: 1
    Open: 0

The DoorState object has a property called "0", which has a string value of "Open". Unfortunately, in JavaScript the number 0 is not a valid property name, so we cannot access this property by simply using DoorState.0. Instead, we must access this property using either DoorState[0] or DoorState["0"]. The DoorState object also has a property named Open, which is set to the numeric value 0. The word Open IS a valid property name in JavaScript, so we can access this property using DoorState["Open"], or simply DoorState.Open, which equate to the same property in JavaScript.

While the underlying JavaScript can be a little confusing, all we need to remember about enums is that they are a handy way of defining an easily remembered, human-readable name for a special number.
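To see the readability benefit in context, here is a small sketch of our own (not from the text) that branches on the enum rather than on bare numbers:

function describeDoor(state: DoorState): string {
    switch (state) {
        case DoorState.Open:
            return "The door is wide open.";
        case DoorState.Closed:
            return "The door is shut.";
        case DoorState.Ajar:
            return "The door is slightly open.";
        default:
            return "Unknown door state.";
    }
}

console.log(describeDoor(DoorState.Ajar)); // The door is slightly open.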
Using human-readable enums, instead of just scattering various special numbers around in our code, also makes the intent of the code clearer. Using an application-wide value named DoorState.Open or DoorState.Closed is far simpler than remembering to set a value to 0 for Open, 1 for Closed, and 2 for Ajar. As well as making our code more readable, and more maintainable, using enums also protects our code base whenever these special numeric values change – because they are all defined in one place.

One last note on enums – we can set the numeric value manually, if needs be:

enum DoorState {
    Open = 3,
    Closed = 7,
    Ajar = 10
}

Here, we have overridden the default values of the enum to set DoorState.Open to 3, DoorState.Closed to 7, and DoorState.Ajar to 10.

Const enums
With the release of TypeScript 1.4, we are also able to define const enums as follows:

const enum DoorStateConst {
    Open,
    Closed,
    Ajar
}
var myState = DoorStateConst.Open;

These types of enums have been introduced largely for performance reasons, and the resultant JavaScript will not contain the full closure definition for the DoorStateConst enum as we saw previously. Let's have a quick look at the JavaScript that is generated from this DoorStateConst enum:

var myState = 0 /* Open */;

Note how we do not have a full JavaScript closure for the DoorStateConst at all. The compiler has simply resolved the DoorStateConst.Open enum to its internal value of 0, and removed the const enum definition entirely. With const enums, we therefore cannot reference the internal string value of an enum, as we did in our previous code sample. Consider the following example:

// generates an error
console.log(DoorStateConst[0]);
// valid usage
console.log(DoorStateConst["Open"]);

The first console.log statement will now generate a compile time error – as we do not have the full closure available with the property of [0] for our const enum. The second usage of this const enum is valid, however, and will generate the following JavaScript:

console.log(0 /* "Open" */);

When using const enums, just keep in mind that the compiler will strip away all enum definitions and simply substitute the numeric value of the enum directly into our JavaScript code.

Functions
JavaScript defines functions using the function keyword, a set of parentheses, and then a set of curly braces. A typical JavaScript function would be written as follows:

function addNumbers(a, b) {
    return a + b;
}

var result = addNumbers(1, 2);
var result2 = addNumbers("1", "2");

This code snippet is fairly self-explanatory; we have defined a function named addNumbers that takes two variables and returns their sum. We then invoke this function, passing in the values of 1 and 2. The value of the variable result would then be 1 + 2, which is 3. Now have a look at the last line of code. Here, we are invoking the addNumbers function, passing in two strings as arguments, instead of numbers. The value of the variable result2 would then be a string, "12". This string value seems like it may not be the desired result, as the name of the function is addNumbers.
Copying the preceding code into a TypeScript file would not generate any errors, but let's insert some type rules to the preceding JavaScript to make it more robust:

function addNumbers(a: number, b: number): number {
    return a + b;
};

var result = addNumbers(1, 2);
var result2 = addNumbers("1", "2");

In this TypeScript code, we have added a :number type to both of the parameters of the addNumbers function (a and b), and we have also added a :number type just after the ( ) parentheses. Placing a type descriptor here means that the return type of the function itself is strongly typed to return a value of type number. In TypeScript, the last line of code, however, will cause a compilation error:

error TS2082: Build: Supplied parameters do not match any signature of call target:

This error message is generated because we have explicitly stated that the function should accept only numbers for both of the arguments a and b, but in our offending code, we are passing two strings. The TypeScript compiler, therefore, cannot match the signature of a function named addNumbers that accepts two arguments of type string.

Anonymous functions
The JavaScript language also has the concept of anonymous functions. These are functions that are defined on the fly and don't specify a function name. Consider the following JavaScript code:

var addVar = function(a, b) {
    return a + b;
};

var result = addVar(1, 2);

This code snippet defines a function that has no name and adds two values. Because the function does not have a name, it is known as an anonymous function. This anonymous function is then assigned to a variable named addVar. The addVar variable can then be invoked as a function with two parameters, and the return value will be the result of executing the anonymous function. In this case, the variable result will have a value of 3.

Let's now rewrite the preceding JavaScript function in TypeScript, and add some type syntax, in order to ensure that the function only accepts two arguments of type number, and returns a value of type number:

var addVar = function(a: number, b: number): number {
    return a + b;
}

var result = addVar(1, 2);
var result2 = addVar("1", "2");

In this code snippet, we have created an anonymous function that accepts only arguments of type number for the parameters a and b, and also returns a value of type number. The types for both the a and b parameters, as well as the return type of the function, are now using the :number syntax. This is another example of the simple "syntactic sugar" that TypeScript injects into the language. If we compile this code, TypeScript will reject the code on the last line, where we try to call our anonymous function with two string parameters:

error TS2082: Build: Supplied parameters do not match any signature of call target:

Optional parameters
When we call a JavaScript function that is expecting parameters, and we do not supply these parameters, then the value of the parameter within the function will be undefined. As an example of this, consider the following JavaScript code:

var concatStrings = function(a, b, c) {
    return a + b + c;
}

console.log(concatStrings("a", "b", "c"));
console.log(concatStrings("a", "b"));

Here, we have defined a function called concatStrings that takes three parameters, a, b, and c, and simply returns the sum of these values. If we call this function with all three parameters, as seen in the second last line of this snippet, we will end up with the string "abc" logged to the console.
If, however, we only supply two parameters, as seen in the last line of this snippet, the string "abundefined" will be logged to the console. Again, if we call a function and do not supply a parameter, then this parameter, c in our case, will be simply undefined.

TypeScript introduces the question mark ? syntax to indicate optional parameters. Consider the following TypeScript function definition:

var concatStrings = function(a: string, b: string, c?: string) {
    return a + b + c;
}

console.log(concatStrings("a", "b", "c"));
console.log(concatStrings("a", "b"));
console.log(concatStrings("a"));

This is a strongly typed version of the original concatStrings JavaScript function that we were using previously. Note the addition of the ? character in the syntax for the third parameter: c?: string. This indicates that the third parameter is optional, and therefore, all of the preceding code will compile cleanly, except for the last line. The last line will generate an error:

error TS2081: Build: Supplied parameters do not match any signature of call target.

This error is generated because we are attempting to call the concatStrings function with only a single parameter. Our function definition, though, requires at least two parameters, with only the third parameter being optional.

The optional parameters must be the last parameters in the function definition. You can have as many optional parameters as you want, as long as non-optional parameters precede the optional parameters.

Default parameters
A subtle variant on the optional parameter function definition allows us to specify the value of a parameter if it is not passed in as an argument from the calling code. Let's modify our preceding function definition to use a default parameter:

var concatStrings = function(a: string, b: string, c: string = "c") {
    return a + b + c;
}

console.log(concatStrings("a", "b", "c"));
console.log(concatStrings("a", "b"));

This function definition has now dropped the ? optional parameter syntax, but instead has assigned a value of "c" to the last parameter: c: string = "c". By using default parameters, if we do not supply a value for the final parameter named c, the concatStrings function will substitute the default value of "c" instead. The argument c, therefore, will not be undefined. The output of the last two lines of code will both be "abc". Note that using the default parameter syntax will automatically make the parameter optional.

The arguments variable
The JavaScript language allows a function to be called with a variable number of arguments. Every JavaScript function has access to a special variable, named arguments, that can be used to retrieve all arguments that have been passed into the function. As an example of this, consider the following JavaScript code:

function testParams() {
    if (arguments.length > 0) {
        for (var i = 0; i < arguments.length; i++) {
            console.log("Argument " + i + " = " + arguments[i]);
        }
    }
}

testParams(1, 2, 3, 4);
testParams("first argument");

In this code snippet, we have defined a function named testParams that does not have any named parameters. Note, though, that we can use the special variable, named arguments, to test whether the function was called with any arguments. In our sample, we can simply loop through the arguments array, and log the value of each argument to the console, by using an array indexer: arguments[i].
The output of the console.log calls is as follows:

Argument 0 = 1
Argument 1 = 2
Argument 2 = 3
Argument 3 = 4
Argument 0 = first argument

So, how do we express a variable number of function parameters in TypeScript? The answer is to use what are called rest parameters, or the three dots (…) syntax. Here is the equivalent testParams function, expressed in TypeScript:

function testParams(...argArray: number[]) {
    if (argArray.length > 0) {
        for (var i = 0; i < argArray.length; i++) {
            console.log("argArray " + i + " = " + argArray[i]);
            console.log("arguments " + i + " = " + arguments[i]);
        }
    }
}

testParams(1);
testParams(1, 2, 3, 4);
testParams("one", "two");

Note the use of the ...argArray: number[] syntax for our testParams function. This syntax is telling the TypeScript compiler that the function can accept any number of arguments. This means that our usages of this function, i.e. calling the function with either testParams(1) or testParams(1,2,3,4), will both compile correctly. In this version of the testParams function, we have added two console.log lines, just to show that the arguments array can be accessed by either the named rest parameter, argArray[i], or through the normal JavaScript array, arguments[i]. The last line in this sample will, however, generate a compile error, as we have defined the rest parameter to only accept numbers, and we are attempting to call the function with strings.

The subtle difference between using argArray and arguments is the inferred type of the argument. Since we have explicitly specified that argArray is of type number, TypeScript will treat any item of the argArray array as a number. However, the internal arguments array does not have an inferred type, and so will be treated as the any type.

We can also combine normal parameters along with rest parameters in a function definition, as long as the rest parameters are the last to be defined in the parameter list, as follows:

function testParamsTs2(arg1: string, arg2: number, ...argArray: number[]) {
}

Here, we have two normal parameters named arg1 and arg2 and then an argArray rest parameter. Mistakenly placing the rest parameter at the beginning of the parameter list will generate a compile error.

Function callbacks
One of the most powerful features of JavaScript–and in fact the technology that Node was built on–is the concept of callback functions. A callback function is a function that is passed into another function. Remember that JavaScript is not strongly typed, so a variable can also be a function. This is best illustrated by having a look at some JavaScript code:

function myCallBack(text) {
    console.log("inside myCallback " + text);
}

function callingFunction(initialText, callback) {
    console.log("inside CallingFunction");
    callback(initialText);
}

callingFunction("myText", myCallBack);

Here, we have a function named myCallBack that takes a parameter and logs its value to the console. We then define a function named callingFunction that takes two parameters: initialText and callback. The first line of this function simply logs "inside CallingFunction" to the console. The second line of the callingFunction is the interesting bit. It assumes that the callback argument is in fact a function, and invokes it. It also passes the initialText variable to the callback function. If we run this code, we will get two messages logged to the console, as follows:

inside CallingFunction
inside myCallback myText

But what happens if we do not pass a function as a callback?
There is nothing in the preceding code that signals to us that the second parameter of callingFunction must be a function. If we inadvertently called the callingFunction function with a string, instead of a function as the second parameter as follows:

callingFunction("myText", "this is not a function");

We would get a JavaScript runtime error:

0x800a138a - JavaScript runtime error: Function expected

Defensive minded programmers, however, would first check whether the callback parameter was in fact a function before invoking it, as follows:

function callingFunction(initialText, callback) {
    console.log("inside CallingFunction");
    if (typeof callback === "function") {
        callback(initialText);
    } else {
        console.log(callback + " is not a function");
    }
}

callingFunction("myText", "this is not a function");

Note the third line of this code snippet, where we check the type of the callback variable before invoking it. If it is not a function, we then log a message to the console. On the last line of this snippet, we are executing the callingFunction, but this time passing a string as the second parameter. The output of the code snippet would be:

inside CallingFunction
this is not a function is not a function

When using function callbacks, then, JavaScript programmers need to do two things; firstly, understand which parameters are in fact callbacks and secondly, code around the invalid use of callback functions.

Function signatures
The TypeScript "syntactic sugar" that enforces strong typing is not only intended for variables and types, but for function signatures as well. What if we could document our JavaScript callback functions in code, and then warn users of our code when they are passing the wrong type of parameter to our functions? TypeScript does this through function signatures. A function signature introduces a fat arrow syntax, () =>, to define what the function should look like. Let's re-write the preceding JavaScript sample in TypeScript:

function myCallBack(text: string) {
    console.log("inside myCallback " + text);
}

function callingFunction(initialText: string, callback: (text: string) => void) {
    callback(initialText);
}

callingFunction("myText", myCallBack);
callingFunction("myText", "this is not a function");

Our first function definition, myCallBack, now strongly types the text parameter to be of type string. Our callingFunction function has two parameters: initialText, which is of type string, and callback, which now has the new function signature syntax. Let's look at this function signature more closely:

callback: (text: string) => void

What this function definition is saying, is that the callback argument is typed (by the : syntax) to be a function, using the fat arrow syntax () =>. Additionally, this function takes a parameter named text that is of type string. To the right of the fat arrow syntax, we can see a new TypeScript basic type, called void. Void is a keyword to denote that a function does not return a value. So, the callingFunction function will only accept, as its second argument, a function that takes a single string parameter and returns nothing.
Compiling the preceding code will correctly highlight an error in the last line of the code snippet, where we are passing a string as the second parameter, instead of a callback function: error TS2082: Build: Supplied parameters do not match any signature of call target: Type '(text: string) => void' requires a call signature, but type 'String' lacks one Given the preceding function signature for the callback function, the following code would also generate compile time errors: function myCallBackNumber(arg1: number) { console.log("arg1 = " + arg1); } callingFunction("myText", myCallBackNumber); Here, we are defining a function named myCallBackNumber, which takes a number as its only parameter. When we attempt to compile this code, we will get an error message indicating that the callback parameter, which is our myCallBackNumber function, also does not have the correct function signature: Call signatures of types 'typeof myCallBackNumber' and '(text: string) => void' are incompatible. The function signature of myCallBackNumber would actually be (arg1: number) => void, instead of the required (text: string) => void, hence the error. In function signatures, the parameter name (arg1 or text) does not need to be the same. Only the number of parameters, their types, and the return type of the function need to be the same. This is a very powerful feature of TypeScript: defining in code what the signatures of functions should be, and warning users when they do not call a function with the correct parameters. As we saw in our introduction to TypeScript, this is most significant when we are working with third-party libraries. Before we are able to use third-party functions, classes, or objects in TypeScript, we need to define what their function signatures are. These function definitions are put into a special type of TypeScript file, called a declaration file, and saved with a .d.ts extension. Function callbacks and scope JavaScript uses lexical scoping rules to define the valid scope of a variable. This means that the value of a variable is defined by its location within the source code. Nested functions have access to variables that are defined in their parent scope. As an example of this, consider the following TypeScript code: function testScope() { var testVariable = "myTestVariable"; function print() { console.log(testVariable); } } console.log(testVariable); This code snippet defines a function named testScope. The variable testVariable is defined within this function. The print function is a child function of testScope, so it has access to the testVariable variable. The last line of the code, however, will generate a compile error, because it is attempting to use the variable testVariable, which is lexically scoped to be valid only inside the body of the testScope function: error TS2095: Build: Could not find symbol 'testVariable'. Simple, right? A nested function has access to variables depending on its location within the source code. This is all well and good, but in large JavaScript projects, there are many different files and many areas of the code are designed to be re-usable. Let's take a look at how these scoping rules can become a problem. For this sample, we will use a typical callback scenario: using jQuery to execute an asynchronous call to fetch some data.
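Since that sample will call jQuery from TypeScript, it is worth a quick look at what such a declaration might contain. What follows is a minimal, hand-written sketch that covers only the $.ajax call we are about to use; the real jQuery declaration file (available from the DefinitelyTyped project) is far more complete, so treat this as an illustration rather than the actual jQuery API surface:
declare var $: {
    // only the one shape of the $.ajax call that our sample needs
    ajax(settings: {
        url: string;
        success?: (data: any, status: string, jqXhr: any) => void;
        error?: (message: any, status: string, stack: any) => void;
    }): void;
};
With a declaration like this in place, the compiler can type-check our $.ajax calls instead of treating $ as an unknown global.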
Consider the following TypeScript code: var testVariable = "testValue"; function getData() { var testVariable_2 = "testValue_2"; $.ajax( { url: "/sample_json.json", success: (data, status, jqXhr) => { console.log("success : testVariable is :" + testVariable); console.log("success : testVariable_2 is :" + testVariable_2); }, error: (message, status, stack) => { alert("error " + message); } } ); } getData(); In this code snippet, we are defining a variable named testVariable and setting its value. We then define a function called getData. The getData function sets another variable called testVariable_2, and then calls the jQuery $.ajax function. The $.ajax function is configured with three properties: url, success, and error. The url property is a simple string that points to a sample_json.json file in our project directory. The success property is an anonymous function callback that simply logs the values of testVariable and testVariable_2 to the console. Finally, the error property is also an anonymous function callback that simply pops up an alert. This code runs as expected, and the success function will log the following results to the console: success : testVariable is :testValue success : testVariable_2 is :testValue_2 So far so good. Now, let's assume that we are trying to refactor the preceding code, as we are doing quite a few similar $.ajax calls, and want to reuse the success callback function elsewhere. We can easily switch out this anonymous function, and create a named function for our success callback, as follows: var testVariable = "testValue"; function getData() { var testVariable_2 = "testValue_2"; $.ajax( { url: "/sample_json.json", success: successCallback, error: (message, status, stack) => { alert("error " + message); } } ); } function successCallback(data, status, jqXhr) { console.log("success : testVariable is :" + testVariable); console.log("success : testVariable_2 is :" + testVariable_2); } getData(); In this sample, we have created a new function named successCallback with the same parameters as our previous anonymous function. We have also modified the $.ajax call to simply pass this function in, as a callback function for the success property: success: successCallback. If we were to compile this code now, TypeScript would generate an error, as follows: error TS2095: Build: Could not find symbol 'testVariable_2'. Since we have changed the lexical scope of our code, by creating a named function, the new successCallback function no longer has access to the variable testVariable_2. It is fairly easy to spot this sort of error in a trivial example, but in larger projects, and when using third-party libraries, these sorts of errors become more difficult to track down. It is, therefore, worth mentioning that when using callback functions, we need to understand this lexical scope. If your code expects a property to have a value, and it does not have one after a callback, then remember to have a look at the context of the calling code. Function overloads As JavaScript is a dynamic language, we can often call the same function with different argument types. Consider the following JavaScript code: function add(x, y) { return x + y; } console.log("add(1,1)=" + add(1,1)); console.log("add('1','1')=" + add("1", "1")); console.log("add(true,false)=" + add(true, false)); Here, we are defining a simple add function that returns the sum of its two parameters, x and y.
The last three lines of this code snippet simply log the result of the add function with different types: two numbers, two strings, and two boolean values. If we run this code, we will see the following output: add(1,1)=2 add('1','1')=11 add(true,false)=1 TypeScript introduces a specific syntax to indicate multiple function signatures for the same function. If we were to replicate the preceding code in TypeScript, we would need to use the function overload syntax: function add(arg1: string, arg2: string): string; function add(arg1: number, arg2: number): number; function add(arg1: boolean, arg2: boolean): boolean; function add(arg1: any, arg2: any): any { return arg1 + arg2; } console.log("add(1,1)=" + add(1, 1)); console.log("add('1','1')=" + add("1", "1")); console.log("add(true,false)=" + add(true, false)); The first line of this code snippet specifies a function overload signature for the add function that accepts two strings and returns a string. The second line specifies another function overload that uses numbers, and the third line uses booleans. The fourth line contains the actual body of the function and uses the type specifier of any. The last three lines of this snippet show how we would use these function signatures, and are similar to the JavaScript code that we have been using previously. There are three points of interest in the preceding code snippet. Firstly, none of the function signatures on the first three lines of the snippet actually have a function body. Secondly, the final function definition uses the type specifier of any and eventually includes the function body. The function overload syntax must follow this structure, and the final function signature, which includes the body of the function, must use the any type specifier, as anything else will generate compile-time errors. The third point to note is that we are limiting the add function, by using these function overload signatures, to only accept two parameters that are of the same type. If we were to try and mix our types; for example, if we call the function with a boolean and a string, as follows: console.log("add(true,'1')=" + add(true, "1")); TypeScript would generate compile errors: error TS2082: Build: Supplied parameters do not match any signature of call target: error TS2087: Build: Could not select overload for 'call' expression. This seems to contradict our final function definition though. In the original TypeScript sample, we had a function signature that accepted (arg1: any, arg2: any); so, in theory, this should be called when we try to add a boolean and a string. The TypeScript syntax for function overloads, however, does not allow this. Remember that the function overload syntax must include the use of the any type for the function body, as all overloads eventually call this function body. However, the inclusion of the function overloads above the function body indicates to the compiler that these are the only signatures that should be available to the calling code. Summary To learn more about TypeScript, the following books published by Packt Publishing (https://www.packtpub.com/) are recommended: Learning TypeScript (https://www.packtpub.com/web-development/learning-typescript) TypeScript Essentials (https://www.packtpub.com/web-development/typescript-essentials) Resources for Article: Further resources on this subject: Introduction to TypeScript[article] Writing SOLID JavaScript code with TypeScript[article] JavaScript Execution with Selenium[article]

Writing a Blog Application with Node.js and AngularJS

Packt
16 Feb 2016
35 min read
In this article, we are going to build a blog application using Node.js and AngularJS. Our system will support adding, editing, and removing articles, so there will be a control panel. The MongoDB or MySQL database will handle the storage of the information and the Express framework will be used as the site base. It will deliver the JavaScript, CSS, and HTML to the end user, and will provide an API to access the database. We will use AngularJS to build the user interface and control the client-side logic in the administration page. (For more resources related to this topic, see here.) This article will cover the following topics: AngularJS fundamentals Choosing and initializing a database Implementing the client-side part of an application with AngularJS Exploring AngularJS AngularJS is an open source, client-side JavaScript framework developed by Google. It's full of features and is really well documented. It has almost become a standard framework in the development of single-page applications. The official site of AngularJS, http://angularjs.org, provides well-structured documentation. As the framework is widely used, there is a lot of material in the form of articles and video tutorials. As a JavaScript library, it works very well with Node.js. In this article, we will build a simple blog with a control panel. Before we start developing our application, let's first take a look at the framework. AngularJS gives us very good control over the data on our page. We don't have to think about selecting elements from the DOM and filling them with values. Thanks to the available data binding, we can update the data in the JavaScript part and see the change in the HTML part. The reverse is also true: once we change something in the HTML part, we get the new values in the JavaScript part. The framework has a powerful dependency injector. There are predefined classes for performing AJAX requests and managing routes. You could also read Mastering Web Development with AngularJS by Peter Bacon Darwin and Pawel Kozlowski, published by Packt Publishing. Bootstrapping AngularJS applications To bootstrap an AngularJS application, we need to add the ng-app attribute to some of our HTML tags. It is important that we pick the right one. Having ng-app somewhere means that all the child nodes will be processed by the framework. It's common practice to put that attribute on the <html> tag. In the following code, we have a simple HTML page containing ng-app: <html ng-app> <head> <script src="angular.min.js"></script> </head> <body> ... </body> </html>   Very often, we will apply a value to the attribute. This will be a module name. We will do this while developing the control panel of our blog application. Having the freedom to place ng-app wherever we want means that we can decide which part of our markup will be controlled by AngularJS. That's good, because if we have a giant HTML file, we really don't want to spend resources parsing the whole document. Of course, we may bootstrap our logic manually, which is needed when we have more than one AngularJS application on the page. Using directives and controllers In AngularJS, we can implement the Model-View-Controller pattern. The controller acts as glue between the data (model) and the user interface (view). In the context of the framework, the controller is just a simple function.
For example, the following HTML code illustrates that a controller is just a simple function: <html ng-app> <head> <script src="angular.min.js"></script> <script src="HeaderController.js"></script> </head> <body> <header ng-controller="HeaderController"> <h1>{{title}}</h1> </header> </body> </html>   In <head> of the page, we are adding the minified version of the library and HeaderController.js; a file that will host the code of our controller. We also set an ng-controller attribute in the HTML markup. The definition of the controller is as follows: function HeaderController($scope) { $scope.title = "Hello world"; } Every controller has its own area of influence. That area is called the scope. In our case, HeaderController defines the {{title}} variable. AngularJS has a wonderful dependency-injection system. Thanks to this mechanism, the $scope argument is automatically initialized and passed to our function. The ng-controller attribute is called a directive; that is, an attribute that has meaning to AngularJS. There are a lot of directives that we can use. That's perhaps one of the strongest points of the framework. We can implement complex logic directly inside our templates, for example, data binding, filtering, or modularity. Data binding Data binding is a process of automatically updating the view once the model is changed. As we mentioned earlier, we can change a variable in the JavaScript part of the application and the HTML part will be automatically updated. We don't have to create a reference to a DOM element or attach event listeners. Everything is handled by the framework. Let's continue and elaborate on the previous example, as follows: <header ng-controller="HeaderController"> <h1>{{title}}</h1> <a href="#" ng-click="updateTitle()">change title</a> </header>   A link is added and it contains the ng-click directive. The updateTitle function is a function defined in the controller, as seen in the following code snippet: function HeaderController($scope) { $scope.title = "Hello world"; $scope.updateTitle = function() { $scope.title = "That's a new title."; } }   We don't care about the DOM element and where the {{title}} variable is. We just change a property of $scope and everything works. There are, of course, situations where we have <input> fields and want to bind their values. If that's the case, then the ng-model directive can be used. We can see this as follows: <header ng-controller="HeaderController"> <h1>{{title}}</h1> <a href="#" ng-click="updateTitle()">change title</a> <input type="text" ng-model="title" /> </header>   The data in the input field is bound to the same title variable. This time, we don't have to edit the controller. AngularJS automatically changes the content of the h1 tag. Encapsulating logic with modules It's great that we have controllers. However, it's not a good practice to place everything into globally defined functions. That's why it is good to use the module system. The following code shows how a module is defined: angular.module('HeaderModule', []); The first parameter is the name of the module and the second one is an array with the module's dependencies. By dependencies, we mean other modules, services, or something custom that we can use inside the module. The module's name should also be set as the value of the ng-app directive.
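In the markup, that looks like the following line, reusing the module name from this example: <html ng-app="HeaderModule">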
The code so far could be translated to the following code snippet: angular.module('HeaderModule', []) .controller('HeaderController', function($scope) { $scope.title = "Hello world"; $scope.updateTitle = function() { $scope.title = "That's a new title."; } });   So, the first line defines a module. We can chain the different methods of the module and one of them is the controller method. Following this approach, that is, putting our code inside a module, we will be encapsulating logic. This is a sign of good architecture. And of course, with a module, we have access to different features such as filters, custom directives, and custom services. Preparing data with filters The filters are very handy when we want to prepare our data before it is displayed to the user. Let's say, for example, that we need to show our title in uppercase once it reaches a length of more than 20 characters: angular.module('HeaderModule', []) .filter('customuppercase', function() { return function(input) { if(input.length > 20) { return input.toUpperCase(); } else { return input; } }; }) .controller('HeaderController', function($scope) { $scope.title = "Hello world"; $scope.updateTitle = function() { $scope.title = "That's a new title."; } });   That's the definition of the custom filter called customuppercase. It receives the input and performs a simple check. What it returns is what the user sees at the end. Here is how this filter could be used in HTML: <h1>{{title | customuppercase}}</h1> Of course, we may add more than one filter per variable. There are also predefined filters, for example, to limit the length of a value, to convert a JavaScript object to JSON, or to format dates. Dependency injection Dependency management can be very tough sometimes. We may split everything into different modules/components. They have nicely written APIs and they are very well documented. However, very soon, we may realize that we need to create a lot of objects. Dependency injection solves this problem by providing what we need, on the fly. We already saw this in action. The $scope parameter passed to our controller is actually created by the injector of AngularJS. To get something as a dependency, we need to define it somewhere and let the framework know about it. We do this as follows: angular.module('HeaderModule', []) .factory("Data", function() { return { getTitle: function() { return "A better title."; } } }) .controller('HeaderController', function($scope, Data) { $scope.title = Data.getTitle(); $scope.updateTitle = function() { $scope.title = "That's a new title."; } });   The Module class has a method called factory. It registers a new service that could later be used as a dependency. The function returns an object with only one method, getTitle. Of course, the name of the service should match the name of the controller's parameter. Otherwise, AngularJS will not be able to find the dependency's source. The model in the context of AngularJS In the well-known Model-View-Controller pattern, the model is the part that stores the data in the application. AngularJS doesn't have a specific workflow to define models. The $scope variable could be considered a model. We keep the data in properties attached to the current scope. Later, we can use the ng-model directive and bind a property to the DOM element. We already saw how this works in the previous sections. The framework may not provide the usual form of a model, but it's made like that so that we can write our own implementation.
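As a brief illustration of what such a hand-rolled model might look like, the following sketch registers a plain object as a service; the TitleModel name and its two methods are purely illustrative, not part of AngularJS itself:
angular.module('HeaderModule', [])
    .factory('TitleModel', function() {
        // a plain JavaScript object acting as the model: it only stores and exposes data
        var data = { title: "Hello world" };
        return {
            get: function() { return data.title; },
            set: function(value) { data.title = value; }
        };
    })
    .controller('HeaderController', function($scope, TitleModel) {
        $scope.title = TitleModel.get();
    });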
The fact that AngularJS works with plain JavaScript objects makes this task easy. Final words on AngularJS AngularJS is one of the leading frameworks, not only because it is made by Google, but also because it's really flexible. We could use just a small piece of it or build a solid architecture using the giant collection of features. Selecting and initializing the database To build a blog application, we need a database that will store the published articles. In most cases, the choice of the database depends on the current project. There are factors such as performance and scalability that we should keep in mind. In order to have a better look at the possible solutions, we will look at two of the most popular databases: MongoDB and MySQL. The first one is a NoSQL type of database. According to the Wikipedia entry (http://en.wikipedia.org/wiki/NoSQL) on NoSQL databases: "A NoSQL or Not Only SQL database provides a mechanism for storage and retrieval of data that is modeled in means other than the tabular relations used in relational databases." In other words, it's simpler than a SQL database, and very often stores information as key-value pairs. Usually, such solutions are used when handling and storing large amounts of data. It is also a very popular approach when we need a flexible schema or when we want to use JSON. It really depends on what kind of system we are building. In some cases, MySQL could be a better choice, while in some other cases, MongoDB. In our example blog, we're going to use both. In order to do this, we will need a layer that connects to the database server and accepts queries. To make things a bit more interesting, we will create a module that has only one API, but can switch between the two database models. Using NoSQL with MongoDB Let's start with MongoDB. Before we start storing information, we need a MongoDB server running. It can be downloaded from the official page of the database, https://www.mongodb.org/downloads. We are not going to handle the communication with the database manually. There is a driver specifically developed for Node.js. It's called mongodb and we should include it in our package.json file. After successful installation via npm install, the driver will be available in our scripts. We can check this as follows: "dependencies": { "mongodb": "1.3.20" }   We will stick to the Model-View-Controller architecture and keep the database-related operations in a model called Articles. We can see this as follows: var crypto = require("crypto"), type = "mongodb", client = require('mongodb').MongoClient, mongodb_host = "127.0.0.1", mongodb_port = "27017", collection; module.exports = function() { if(type == "mongodb") { return { add: function(data, callback) { ... }, update: function(data, callback) { ... }, get: function(callback) { ... }, remove: function(id, callback) { ... } } } else { return { add: function(data, callback) { ... }, update: function(data, callback) { ... }, get: function(callback) { ... }, remove: function(id, callback) { ... } } } }   It starts with defining a few dependencies and settings for the MongoDB connection. Line number one requires the crypto module. We will use it to generate unique IDs for every article. The type variable defines which database is currently accessed. The third line initializes the MongoDB driver. We will use it to communicate with the database server.
After that, we set the host and port for the connection and, at the end, a global collection variable, which will keep a reference to the collection with the articles. In MongoDB, the collections are similar to the tables in MySQL. The next logical step is to establish a database connection and perform the needed operations, as follows: connection = 'mongodb://'; connection += mongodb_host + ':' + mongodb_port; connection += '/blog-application'; client.connect(connection, function(err, database) { if(err) { throw new Error("Can't connect"); } else { console.log("Connection to MongoDB server successful."); collection = database.collection('articles'); } });   We pass the host and the port, and the driver does everything else. Of course, it is a good practice to handle the error (if any) and throw an exception. In our case, this is especially needed because without the information in the database, the frontend has nothing to show. The rest of the module contains methods to add, edit, retrieve, and delete records: return { add: function(data, callback) { var date = new Date(); data.id = crypto.randomBytes(20).toString('hex'); data.date = date.getFullYear() + "-" + date.getMonth() + "-" + date.getDate(); collection.insert(data, {}, callback || function() {}); }, update: function(data, callback) { collection.update( {id: data.id}, data, {}, callback || function(){ } ); }, get: function(callback) { collection.find({}).toArray(callback); }, remove: function(id, callback) { collection.findAndModify( {id: id}, [], {}, {remove: true}, callback ); } }   Note that the update and remove methods query by the same id key that the add method generates; the casing of that key must stay consistent, or the queries will not match any documents. The add and update methods accept the data parameter. That's a simple JavaScript object. For example, see the following code: { title: "Blog post title", text: "Article's text here ..." }   The records are identified by an automatically generated unique id. The update method needs it in order to find out which record to edit. All the methods also have a callback. That's important, because the module is meant to be used as a black box, that is, we should be able to create an instance of it, operate with the data, and at the end continue with the rest of the application's logic. Using MySQL We're going to use an SQL type of database with MySQL. We will add a few more lines of code to the already working Articles.js model. The idea is to have a class that supports the two databases as two different options. At the end, we should be able to switch from one to the other by simply changing the value of a variable. Similar to MongoDB, we need to first install the database to be able to use it. The official download page is http://www.mysql.com/downloads. MySQL requires another Node.js module. It should be added again to the package.json file. We can see the module as follows: "dependencies": { "mongodb": "1.3.20", "mysql": "2.0.0" }   Similar to the MongoDB solution, we first need to connect to the server. To do so, we need to know the values of the host, username, and password fields and, because the data is organized in databases, the name of the database. In MySQL, we put our data into different databases. So, the following code defines the needed variables: var mysql = require('mysql'), mysql_host = "127.0.0.1", mysql_user = "root", mysql_password = "", mysql_database = "blog_application", connection;   The previous example leaves the password field empty but we should set the proper value for our system. The MySQL database requires us to define a table and its fields before we start saving data.
So, the following code is a short dump of the table used in this article: CREATE TABLE IF NOT EXISTS `articles` ( `id` int(11) NOT NULL AUTO_INCREMENT, `title` longtext NOT NULL, `text` longtext NOT NULL, `date` varchar(100) NOT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1;   Once we have a database and its table set, we can continue with the database connection, as follows: connection = mysql.createConnection({ host: mysql_host, user: mysql_user, password: mysql_password }); connection.connect(function(err) { if(err) { throw new Error("Can't connect to MySQL."); } else { connection.query("USE " + mysql_database, function(err, rows, fields) { if(err) { throw new Error("Missing database."); } else { console.log("Successfully selected database."); } }) } });   The driver provides a method to connect to the server and execute queries. The first executed query selects the database. If everything is ok, you should see Successfully selected database as an output in your console. Half of the job is done. What we should do now is replicate the methods returned in the first MongoDB implementation. We need to do this because, when we switch to MySQL, the code using the class would otherwise not work. And by replicating them we mean that they should have the same names and should accept the same arguments. If we do everything correctly, at the end our application will support two types of databases. And all we have to do is change the value of the type variable: return { add: function(data, callback) { var date = new Date(); var query = ""; query += "INSERT INTO articles (title, text, date) VALUES ("; query += connection.escape(data.title) + ", "; query += connection.escape(data.text) + ", "; query += "'" + date.getFullYear() + "-" + date.getMonth() + "-" + date.getDate() + "'"; query += ")"; connection.query(query, callback); }, update: function(data, callback) { var query = "UPDATE articles SET "; query += "title=" + connection.escape(data.title) + ", "; query += "text=" + connection.escape(data.text) + " "; query += "WHERE id=" + connection.escape(data.id); connection.query(query, callback); }, get: function(callback) { var query = "SELECT * FROM articles ORDER BY id DESC"; connection.query(query, function(err, rows, fields) { if (err) { throw new Error("Error getting."); } else { callback(rows); } }); }, remove: function(id, callback) { var query = "DELETE FROM articles WHERE id=" + connection.escape(id); connection.query(query, callback); } }   The code is a little longer than the one generated in the first MongoDB variant. That's because we needed to construct MySQL queries from the passed data. Keep in mind that we have to escape the information that comes to the module. That's why we use connection.escape(), including on the id values in the update and remove methods. With these lines of code, our model is completed. Now we can add, edit, remove, or get data. Let's continue with the part that shows the articles to our users. Developing the client side with AngularJS Let's assume that there is some data in the database and we are ready to present it to the users. So far, we have only developed the model, which is the class that takes care of the access to the information. To simplify the process, we will use Express here. We first need to update the package.json file and include the framework, as follows: "dependencies": { "express": "3.4.6", "jade": "0.35.0", "mongodb": "1.3.20", "mysql": "2.0.0" }   We are also adding Jade, because we are going to use it as a template language.
Writing markup in plain HTML is not very efficient nowadays. By using a template engine, we can split the data and the HTML markup, which makes our application much better structured. Jade's syntax is kind of similar to HTML. We can write tags without the need to close them: body p(class="paragraph", data-id="12"). Sample text here footer a(href="#"). my site   The preceding code snippet is transformed to the following code snippet: <body> <p data-id="12" class="paragraph">Sample text here</p> <footer><a href="#">my site</a></footer> </body>   Jade relies on the indentation in the content to distinguish the tags. Let's look at the project structure next. We placed our already written class, Articles.js, inside the models directory. The public directory will contain CSS styles, and all the necessary client-side JavaScript: the AngularJS library, the AngularJS router module, and our custom code. We will skip some of the explanations about the following code. Our index.js file looks as follows: var express = require('express'); var app = express(); var articles = require("./models/Articles")(); app.set('views', __dirname + '/views'); app.set('view engine', 'jade'); app.use(express.static(__dirname + '/public')); app.use(function(req, res, next) { req.articles = articles; next(); }); app.get('/api/get', require("./controllers/api/get")); app.get('/', require("./controllers/index")); app.listen(3000); console.log('Listening on port 3000');   At the beginning, we require the Express framework and our model. Maybe it's better to initialize the model inside the controller, but in our case this is not necessary. Just after that, we set up some basic options for Express and define our own middleware. It has only one job to do and that is to attach the model to the request object. We are doing this because the request object is passed to all the route handlers. In our case, these handlers are actually the controllers. So, Articles.js becomes accessible everywhere via the req.articles property. At the end of the script, we placed two routes. The second one catches the usual requests that come from the users. The first one, /api/get, is a bit more interesting. We want to build our frontend on top of AngularJS. So, the data that is stored in the database should not be rendered in the Node.js part, but on the client side, where we use Google's framework. To make this possible, we will create routes/controllers to get, add, edit, and delete records. Everything will be controlled by HTTP requests performed by AngularJS. In other words, we need an API. Before we start using AngularJS, let's take a look at the /controllers/api/get.js controller: module.exports = function(req, res, next) { req.articles.get(function(rows) { res.send(rows); }); }   The main job is done by our model and the response is handled by Express. It's nice because if we pass a JavaScript object, as we did (rows is actually an array of objects), the framework sets the response headers automatically. To test the result, we could run the application with node index.js and open http://localhost:3000/api/get. If we don't have any records in the database, we will get an empty array. Otherwise, the stored articles will be returned. So, that's the URL we should hit from within the AngularJS controller in order to get the information. The code of the /controllers/index.js controller is also just a few lines.
We can see the code as follows: module.exports = function(req, res, next) { res.render("list", { app: "" }); }   It simply renders the list view, which is stored in the list.jade file. That file should be saved in the /views directory. But before we see its code, we will check another file, which acts as a base for all the pages. Jade has a nice feature called blocks. We may define different partials and combine them into one template. The following is our layout.jade file: doctype html html(ng-app="#{app}") head title Blog link(rel='stylesheet', href='/style.css') script(src='/angular.min.js') script(src='/angular-route.min.js') body block content   There is only one variable passed to this template, which is #{app}. We will need it later to initialize the administration's module. The angular.min.js and angular-route.min.js files should be downloaded from the official AngularJS site, and placed in the /public directory. The body of the page contains a block placeholder called content, which we will later fill with the list of the articles. The following is the list.jade file: extends layout block content .container(ng-controller="BlogCtrl") section.articles article(ng-repeat="article in articles") h2 {{article.title}} br small published on {{article.date}} p {{article.text}} script(src='/blog.js')   The two lines in the beginning combine both the templates into one page. The Express framework transforms the Jade template into HTML and serves it to the browser of the user. From there, the client-side JavaScript takes control. We are using the ng-controller directive to say that the div element will be controlled by an AngularJS controller called BlogCtrl. The same controller should have a variable, articles, filled with the information from the database. ng-repeat goes through the array and displays the content to the users. The blog.js file holds the code of the controller: function BlogCtrl($scope, $http) { $scope.articles = [ { title: "", text: "Loading ..."} ]; $http({method: 'GET', url: '/api/get'}) .success(function(data, status, headers, config) { $scope.articles = data; }) .error(function(data, status, headers, config) { console.error("Error getting articles."); }); }   The controller has two dependencies. The first one, $scope, points to the current view. Whatever we assign as a property there is available as a variable in our HTML markup. Initially, we add only one element, which doesn't have a title, but has text. It is shown to indicate that we are still loading the articles from the database. The second dependency, $http, provides an API in order to make HTTP requests. So, all we have to do is query /api/get, fetch the data, and pass it to the $scope dependency. The rest is done by AngularJS and its magical two-way data binding. To make the application a little more interesting, we will add a search field, as follows: // views/list.jade header .search input(type="text", placeholder="type a filter here", ng-model="filterText") h1 Blog hr   The ng-model directive binds the value of the input field to a variable inside our $scope dependency. However, this time, we don't have to edit our controller and can simply apply the same variable as a filter to the ng-repeat: article(ng-repeat="article in articles | filter:filterText") As a result, the articles shown will be filtered based on the user's input. Two simple additions, but something really valuable is on the page. The filters of AngularJS can be very powerful.
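For instance, filters can be chained. The following illustrative variation of our ng-repeat line is a sketch using the built-in orderBy and limitTo filters (and it assumes our date strings sort sensibly); it would show only the ten most recent matching articles:
article(ng-repeat="article in articles | filter:filterText | orderBy:'-date' | limitTo:10")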
Implementing a control panel The control panel is the place where we will manage the articles of the blog. Several things should be done in the backend before continuing with the user interface. They are as follows: app.set("username", "admin"); app.set("password", "pass"); app.use(express.cookieParser('blog-application')); app.use(express.session());   The previous lines of code should be added to /index.js. Our administration should be protected, so the first two lines define our credentials. We are using Express as data storage, simply creating key-value pairs. Later, if we need the username we can get it with app.get("username"). The next two lines enable session support. We need that because of the login process. We added a middleware, which attaches the articles to the request object. We will do the same with the current user's status, as follows: app.use(function(req, res, next) { if (( req.session && req.session.admin === true ) || ( req.body && req.body.username === app.get("username") && req.body.password === app.get("password") )) { req.logged = true; req.session.admin = true; } next(); });   Our if statement is a little long, but it tells us whether the user is logged in or not. The first part checks whether there is a session created and the second one checks whether the user submitted a form with the correct username and password. If these expressions are true, then we attach a variable, logged, to the request object and create a session that will be valid during the following requests. There is only one thing that we need in the main application's file: a few routes that will handle the control panel operations. In the following code, we are defining them along with the needed route handler: var protect = function(req, res, next) { if (req.logged) { next(); } else { res.send(401, 'No Access.'); } } app.post('/api/add', protect, require("./controllers/api/add")); app.post('/api/edit', protect, require("./controllers/api/edit")); app.post('/api/delete', protect, require("./controllers/api/delete")); app.all('/admin', require("./controllers/admin"));   The three routes, which start with /api, will use the model Articles.js to add, edit, and remove articles from the database. These operations should be protected. We will add a middleware function that takes care of this. If the req.logged variable is not available, it simply responds with a 401 - Unauthorized status code. The last route, /admin, is a little different because it shows a login form instead. The following is the controller to create new articles: module.exports = function(req, res, next) { req.articles.add(req.body, function() { res.send({success: true}); }); }   We transfer most of the logic to the frontend, so again, there are just a few lines. What is interesting here is that we pass req.body directly to the model. It actually contains the data submitted by the user.
The following code is how the req.articles.add method looks for the MongoDB implementation: add: function(data, callback) { data.id = crypto.randomBytes(20).toString('hex'); collection.insert(data, {}, callback || function() {}); } And the MySQL implementation is as follows: add: function(data, callback) { var date = new Date(); var query = ""; query += "INSERT INTO articles (title, text, date) VALUES ("; query += connection.escape(data.title) + ", "; query += connection.escape(data.text) + ", "; query += "'" + date.getFullYear() + "-" + date.getMonth() + "-" + date.getDate() + "'"; query += ")"; connection.query(query, callback); } In both cases, we need title and text in the passed data object. Thanks to Express' bodyParser middleware, this is what we have in the req.body object. We can directly forward it to the model. The other route handlers are almost the same: // api/edit.js module.exports = function(req, res, next) { req.articles.update(req.body, function() { res.send({success: true}); }); } What we changed is the method of the Articles.js class. It is not add but update. The same technique is applied in the route to delete an article. We can see it as follows: // api/delete.js module.exports = function(req, res, next) { req.articles.remove(req.body.id, function() { res.send({success: true}); }); }   What we need for deletion is not the whole body of the request but only the unique ID of the record. Every API method sends {success: true} as a response. While we are dealing with API requests, we should always return a response, even if something goes wrong. The last thing in the Node.js part, which we have to cover, is the controller responsible for the user interface of the administration panel, that is, the ./controllers/admin.js file: module.exports = function(req, res, next) { if(req.logged) { res.render("admin", { app: "admin" }); } else { res.render("login", { app: "" }); } }   There are two templates that are rendered: /views/admin.jade and /views/login.jade. Based on the req.logged variable, which we set in our middleware in /index.js, the script decides which one to show. If the user is not logged in, then a login form is sent to the browser, as follows: extends layout block content .container header h1 Administration hr section.articles article form(method="post", action="/admin") span Username: br input(type="text", name="username") br span Password: br input(type="password", name="password") br br input(type="submit", value="login")   There is no AngularJS code here. All we have is the good old HTML form, which submits its data via POST to the same URL, /admin. If the username and password are correct, the req.logged variable is set to true and the controller renders the other template: extends layout block content .container header h1 Administration hr a(href="/") Public span | a(href="#/") List span | a(href="#/add") Add section(ng-view) script(src='/admin.js')   The control panel needs several views to handle all the operations. AngularJS has a great router module, which works with hashtag-type URLs, that is, URLs such as /admin#/add. The same module requires a placeholder for the different partials. In our case, this is a section tag. The ng-view attribute tells the framework that this is the element prepared for that logic. At the end of the template, we are adding an external file, which keeps the whole client-side JavaScript code that is needed by the control panel.
While the client-side part of the application only needs to load the articles, the control panel requires a lot more functionality. It is good to use the modular system of AngularJS. We need the routes and views to change, so the ngRoute module is needed as a dependency. This module is not added in the main angular.min.js build. It is placed in the angular-route.min.js file. The following code shows how our module starts: var admin = angular.module('admin', ['ngRoute']); admin.config(['$routeProvider', function($routeProvider) { $routeProvider .when('/', {}) .when('/add', {}) .when('/edit/:id', {}) .when('/delete/:id', {}) .otherwise({ redirectTo: '/' }); } ]);   We configured the router by mapping URLs to specific routes. At the moment, the routes are just empty objects, but we will fix that shortly. Every controller will need to make HTTP requests to the Node.js part of the application. It will be nice to have one such service and use it all over our code. We can see an example as follows: admin.factory('API', function($http) { var request = function(method, url) { return function(callback, data) { $http({ method: method, url: url, data: data }) .success(callback) .error(function(data, status, headers, config) { console.error("Error requesting '" + url + "'."); }); } } return { get: request('GET', '/api/get'), add: request('POST', '/api/add'), edit: request('POST', '/api/edit'), remove: request('POST', '/api/delete') } });   One of the best things about AngularJS is that it works with plain JavaScript objects. There are no unnecessary abstractions and no extending or inheriting special classes. We are using the .factory method to create a simple JavaScript object. It has four methods that can be called: get, add, edit, and remove. Each one of them calls a function that is created by the helper function request. The service has only one dependency, $http. We already know this module; it handles HTTP requests nicely. The URLs that we are going to query are the same ones that we defined in the Node.js part. Now, let's create a controller that will show the articles currently stored in the database. First, we should replace the empty route object .when('/', {}) with the following object: .when('/', { controller: 'ListCtrl', template: ' <article ng-repeat="article in articles"> <hr /> <strong>{{article.title}}</strong><br /> (<a href="#/edit/{{article.id}}">edit</a>) (<a href="#/delete/{{article.id}}">remove</a>) </article> ' })   The object has to contain a controller and a template. The template is nothing more than a few lines of HTML markup. It looks a bit like the template used to show the articles on the client side. The difference is the links used to edit and delete. JavaScript doesn't allow new lines in string definitions; in the original source, each line of the template ends with a backward slash to prevent the syntax errors that would otherwise be thrown by the browser. The following is the code for the controller. It is defined, again, in the module: admin.controller('ListCtrl', function($scope, API) { API.get(function(articles) { $scope.articles = articles; }); });   And here is the beauty of the AngularJS dependency injection. Our custom-defined service API is automatically initialized and passed to the controller. The .get method fetches the articles from the database. Later, we send the information to the current $scope dependency and the two-way data binding does the rest. The articles are shown on the page.
Working with AngularJS is so easy that we could combine the add and edit controllers in one place. Let's store the route object in an external variable, as follows: var AddEditRoute = { controller: 'AddEditCtrl', template: ' <hr /> <article> <form> <span>Title</span><br /> <input type="text" ng-model="article.title"/><br /> <span>Text</span><br /> <textarea rows="7" ng-model="article.text"></textarea> <br /><br /> <button ng-click="save()">save</button> </form> </article> ' };   And later, assign it to both routes, as follows: .when('/add', AddEditRoute) .when('/edit/:id', AddEditRoute)   The template is just a form with the necessary fields and a button, which calls the save method in the controller. Notice that we bound the input field and the text area to variables inside the $scope dependency. This comes in handy because we don't need to access the DOM to get the values. We can see this as follows: admin.controller( 'AddEditCtrl', function($scope, API, $location, $routeParams) { var editMode = $routeParams.id ? true : false; if (editMode) { API.get(function(articles) { articles.forEach(function(article) { if (article.id == $routeParams.id) { $scope.article = article; } }); }); } $scope.save = function() { API[editMode ? 'edit' : 'add'](function() { $location.path('/'); }, $scope.article); } })   The controller receives four dependencies. We already know about $scope and API. The $location dependency is used when we want to change the current route, or, in other words, to forward the user to another view. The $routeParams dependency is needed to fetch parameters from the URL. In our case, /edit/:id is a route with a variable inside. Inside the code, the id is available in $routeParams.id. The adding and editing of articles uses the same form. So, with a simple check, we know what the user is currently doing. If the user is in the edit mode, then we fetch the article based on the provided id and fill the form. Otherwise, the fields are empty and new records will be created. The deletion of an article can be done by using a similar approach, which is adding a route object and defining a new controller. We can see the deletion as follows: .when('/delete/:id', { controller: 'RemoveCtrl', template: ' ' })   We don't need a template in this case. Once the article is deleted from the database, we will forward the user to the list page. We have to call the remove method of the API. Here is how the RemoveCtrl controller looks: admin.controller( 'RemoveCtrl', function($scope, $location, $routeParams, API) { API.remove(function() { $location.path('/'); }, $routeParams); } );   The preceding code uses the same dependencies as the previous controller. This time, we simply forward the $routeParams dependency to the API. And because it is a plain JavaScript object, everything works as expected. Summary In this article, we built a simple blog by writing the backend of the application in Node.js. The module for database communication, which we wrote, can work with the MongoDB or MySQL database and store articles. The client-side part and the control panel of the blog were developed with AngularJS. We then defined a custom service using the built-in HTTP and routing mechanisms. Node.js works well with AngularJS, mainly because both are written in JavaScript. We found out that AngularJS is built to support the developer. It removes all those boring tasks such as DOM element referencing, attaching event listeners, and so on. It's a great choice for the modern client-side coding stack.
You can refer to the following books to learn more about Node.js: Node.js Essentials Learning Node.js for Mobile Application Development Node.js Design Patterns Resources for Article: Further resources on this subject: Node.js Fundamentals [Article] AngularJS Project [Article] Working with Live Data and AngularJS [Article]
Changing Views

Packt
15 Feb 2016
25 min read
In this article by Christopher Pitt, author of the book React Components, we will see how to change sections without reloading the page. We'll use this knowledge to create the public pages of the website our CMS is meant to control. (For more resources related to this topic, see here.) Location, location, and location Before we can learn about alternatives to reloading pages, let's take a look at how the browser manages reloads. You've probably encountered the window object. It's a global catch-all object for browser-based functionality and state. It's also the default this scope in any HTML page. We've even accessed window before. When we rendered to document.body or used document.querySelector, the window object was assumed. It's the same as if we were to call window.document.querySelector. Most of the time document is the only property we need. That doesn't mean it's the only property useful to us. Try the following in the console: console.log(window.location); You should see something similar to: Location {     hash: ""     host: "127.0.0.1:3000"     hostname: "127.0.0.1"     href: "http://127.0.0.1:3000/examples/login.html"     origin: "http://127.0.0.1:3000"     pathname: "/examples/login.html"     port: "3000"     ... } If we were trying to work out which components to show based on the browser URL, this would be an excellent place to start. Not only can we read from this object, but we can also write to it: <script>     window.location.href = "http://material-ui.com"; </script> Putting this in an HTML page or entering that line of JavaScript in the console will redirect the browser to material-ui.com. It's the same if you click on the link. And if it's to a different page (than the one the browser is pointing at), then it will cause a full page reload. A bit of history So how does this help us? We're trying to avoid full page reloads, after all. Let's experiment with this object. Let's see what happens when we add something like #page-admin to the URL. Adding #page-admin to the URL leads to the window.location.hash property being populated with that value. What's more, changing the hash value won't reload the page! It's the same as if we clicked on a link that had that hash in the href attribute. We can modify it without causing full page reloads, and each modification will store a new entry in the browser history. Using this trick, we can step through a number of different "states" without reloading the page, and be able to backtrack each with the browser's back button. Using browser history Let's put this trick to use in our CMS.
First, let's add a couple of functions to our Nav component: export default (props) => {     // ...define class names       var redirect = (event, section) => {         window.location.hash = `#${section}`;         event.preventDefault();     }       return <div className={drawerClassNames}>         <header className="demo-drawer-header">             <img src="images/user.jpg"                  className="demo-avatar" />         </header>         <nav className={navClassNames}>             <a className="mdl-navigation__link"                href="/examples/login.html"                onClick={(e) => redirect(e, "login")}>                 <i className={buttonIconClassNames}                    role="presentation">                     lock                 </i>                 Login             </a>             <a className="mdl-navigation__link"                href="/examples/page-admin.html"                onClick={(e) => redirect(e, "page-admin")}>                 <i className={buttonIconClassNames}                    role="presentation">                     pages                 </i>                 Pages             </a>         </nav>     </div>; }; We add an onClick attribute to our navigation links. We've created a special function that will change window.location.hash and prevent the default full page reload behavior the links would otherwise have caused. This is a neat use of arrow functions, but we're ultimately creating three new functions in each render call. Remember that this can be expensive, so it's best to move the function creation out of render. We'll replace this shortly. It's also interesting to see template strings in action. Instead of "#" + section, we can use `#${section}` to interpolate the section name. It's not as useful in small strings, but becomes increasingly useful in large ones. Clicking on the navigation links will now change the URL hash.
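One case the redirect function does not yet cover is a hash that is already present when the page first loads, for example when a user bookmarks #page-admin. A small sketch of the idea follows; note that it passes null for the event, so the handler would need to guard its event.preventDefault() call, as the version we build later in this article does:
// run once at startup: honour a hash that is already in the URL
var initialHash = window.location.hash;
if (initialHash.length > 1) {
    redirect(null, initialHash.slice(1));
}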
We can add to this behavior by rendering different components when the navigation links are clicked: import React from "react"; import ReactDOM from "react-dom"; import Component from "src/component"; import Login from "src/login"; import Backend from "src/backend"; import PageAdmin from "src/page-admin";   class Nav extends Component {     render() {         // ...define class names           return <div className={drawerClassNames}>             <header className="demo-drawer-header">                 <img src="images/user.jpg"                      className="demo-avatar" />             </header>             <nav className={navClassNames}>                 <a className="mdl-navigation__link"                    href="/examples/login.html"                    onClick={(e) => this.redirect(e, "login")}>                     <i className={buttonIconClassNames}                        role="presentation">                         lock                     </i>                     Login                 </a>                 <a className="mdl-navigation__link"                    href="/examples/page-admin.html"                    onClick={(e) => this.redirect(e, "page-admin")}>                     <i className={buttonIconClassNames}                        role="presentation">                         pages                     </i>                     Pages                 </a>             </nav>         </div>;     }       redirect(event, section) {         window.location.hash = `#${section}`;           var component = null;           switch (section) {             case "login":                 component = <Login />;                 break;             case "page-admin":                 var backend = new Backend();                 component = <PageAdmin backend={backend} />;                 break;         }           var layoutClassNames = [             "demo-layout",             "mdl-layout",             "mdl-js-layout",             "mdl-layout--fixed-drawer"         ].join(" ");           ReactDOM.render(             <div className={layoutClassNames}>                 <Nav />                 {component}             </div>,             document.querySelector(".react")         );           event.preventDefault();     } };   export default Nav; We've had to convert the Nav function to a Nav class. We want to create the redirect method outside of render (as that is more efficient) and also isolate the choice of which component to render. Using a class also gives us a way to name and reference Nav, so we can create a new instance to overwrite it from within the redirect method. It's not ideal packaging this kind of code within a component, so we'll clean that up in a bit. We can now switch between different sections without full page reloads. There is one problem still to solve. When we use the browser back button, the components don't change to reflect the component that should be shown for each hash. We can solve this in a couple of ways. 
The first approach we can try is checking the hash frequently:

componentDidMount() {
    var hash = window.location.hash;

    setInterval(() => {
        if (hash !== window.location.hash) {
            hash = window.location.hash;
            this.redirect(null, hash.slice(1), true);
        }
    }, 100);
}

redirect(event, section, respondingToHashChange = false) {
    if (!respondingToHashChange) {
        window.location.hash = `#${section}`;
    }

    var component = null;

    switch (section) {
        case "login":
            component = <Login />;
            break;
        case "page-admin":
            var backend = new Backend();
            component = <PageAdmin backend={backend} />;
            break;
    }

    var layoutClassNames = [
        "demo-layout",
        "mdl-layout",
        "mdl-js-layout",
        "mdl-layout--fixed-drawer"
    ].join(" ");

    ReactDOM.render(
        <div className={layoutClassNames}>
            <Nav />
            {component}
        </div>,
        document.querySelector(".react")
    );

    if (event) {
        event.preventDefault();
    }
}

Our redirect method has an extra parameter, so it only assigns a new hash when we're not already responding to a hash change. We've also guarded the call to event.preventDefault, in case we don't have a click event to work with. Other than those changes, the redirect method is the same.

We've also added a componentDidMount method containing a call to setInterval. We store the initial window.location.hash and check ten times a second to see whether it has changed. The hash value is #login or #page-admin, so we slice the first character off and pass the rest to the redirect method. Try clicking the different navigation links, then use the browser back button.

The second option is to use the newish pushState method on the window.history object, together with the popstate event on window. These aren't supported everywhere yet, so you need to handle older browsers or be sure you don't need to support them. You can learn more about the History API at https://developer.mozilla.org/en-US/docs/Web/API/History_API.
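A third option sits between the two: most browsers fire a hashchange event on window whenever the hash changes, which lets us drop the polling entirely. A minimal sketch, reusing the redirect method above:

componentDidMount() {
    // React to hash changes instead of polling every 100ms.
    // Passing true tells redirect not to re-assign the hash.
    this.onHashChange = () => {
        this.redirect(null, window.location.hash.slice(1), true);
    };
    window.addEventListener("hashchange", this.onHashChange);
}

componentWillUnmount() {
    // Remove the listener so an unmounted component isn't called.
    window.removeEventListener("hashchange", this.onHashChange);
}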
Using a router

Our hash code is functional but invasive. We shouldn't be calling ReactDOM.render from inside a component (at least not one we own). So, instead, we're going to use a popular router to manage this for us. Download it with the following command:

$ npm install react-router --save

Then we need to join login.html and page-admin.html back into the same file:

<!DOCTYPE html>
<html>
    <head>
        <script src="/node_modules/babel-core/browser.js"></script>
        <script src="/node_modules/systemjs/dist/system.js"></script>
        <script src="https://storage.googleapis.com/code.getmdl.io/1.0.6/material.min.js"></script>
        <link rel="stylesheet" href="https://storage.googleapis.com/code.getmdl.io/1.0.6/material.indigo-pink.min.css" />
        <link rel="stylesheet" href="https://fonts.googleapis.com/icon?family=Material+Icons" />
        <link rel="stylesheet" href="admin.css" />
    </head>
    <body class="
        mdl-demo
        mdl-color--grey-100
        mdl-color-text--grey-700
        mdl-base">
        <div class="react"></div>
        <script>
            System.config({
                "transpiler": "babel",
                "map": {
                    "react": "/examples/react/react",
                    "react-dom": "/examples/react/react-dom",
                    "router": "/node_modules/react-router/umd/ReactRouter"
                },
                "baseURL": "../",
                "defaultJSExtensions": true
            });

            System.import("examples/admin");
        </script>
    </body>
</html>

Notice how we've added the ReactRouter file to the import map? We'll use that in admin.js. First, let's define our layout component:

var App = function(props) {
    var layoutClassNames = [
        "demo-layout",
        "mdl-layout",
        "mdl-js-layout",
        "mdl-layout--fixed-drawer"
    ].join(" ");

    return (
        <div className={layoutClassNames}>
            <Nav />
            {props.children}
        </div>
    );
};

This creates the page layout we've been using and allows a dynamic content component to be nested inside it. Every React component has a this.props.children property (or props.children in the case of a stateless component), which is an array of nested components. For example, consider this component:

<App>
    <Login />
</App>

Inside the App component, this.props.children will be an array with a single item: an instance of Login.

Next, we'll define handler components for the two sections we want to route:

var LoginHandler = function() {
    return <Login />;
};

var PageAdminHandler = function() {
    var backend = new Backend();
    return <PageAdmin backend={backend} />;
};

We don't strictly need to wrap Login in LoginHandler, but doing so keeps it consistent with PageAdminHandler. PageAdmin expects an instance of Backend, so we have to wrap it, as we see in this example.

Now we can define routes for our CMS, importing what we need from the router module we mapped earlier:

import { Router, Route, IndexRoute, browserHistory } from "router";

ReactDOM.render(
    <Router history={browserHistory}>
        <Route path="/" component={App}>
            <IndexRoute component={LoginHandler} />
            <Route path="login" component={LoginHandler} />
            <Route path="page-admin" component={PageAdminHandler} />
        </Route>
    </Router>,
    document.querySelector(".react")
);

There's a single root route for the path /. It creates an instance of App, so we always get the same layout. Inside it we nest a login route and a page-admin route, which create instances of their respective components. We also define an IndexRoute so that the login page is displayed as the landing page.
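As an aside, browserHistory produces clean URLs but expects the server to serve the same HTML file for every route. If that isn't an option, react-router also ships a hashHistory that keeps the location in the URL hash, and swapping it in is a one-line change. A sketch, assuming the same routes and import map as above:

import { Router, Route, IndexRoute, hashHistory } from "router";

// Same routes as before, but the location lives in the hash
// (for example, admin.html#/page-admin), so no server rewrite
// rules are needed.
ReactDOM.render(
    <Router history={hashHistory}>
        <Route path="/" component={App}>
            <IndexRoute component={LoginHandler} />
            <Route path="login" component={LoginHandler} />
            <Route path="page-admin" component={PageAdminHandler} />
        </Route>
    </Router>,
    document.querySelector(".react")
);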
We need to remove our custom history code from Nav:

import React from "react";
import { Link } from "router";

export default (props) => {
    // ...define class names

    return <div className={drawerClassNames}>
        <header className="demo-drawer-header">
            <img src="images/user.jpg"
                 className="demo-avatar" />
        </header>
        <nav className={navClassNames}>
            <Link className="mdl-navigation__link" to="login">
                <i className={buttonIconClassNames}
                   role="presentation">
                    lock
                </i>
                Login
            </Link>
            <Link className="mdl-navigation__link" to="page-admin">
                <i className={buttonIconClassNames}
                   role="presentation">
                    pages
                </i>
                Pages
            </Link>
        </nav>
    </div>;
};

Since we no longer need a separate redirect method, we can convert the class back into a stateless component (a plain function), and the now-unused ReactDOM import can go too. Notice that we've swapped the anchor elements for the router's Link component. It interacts with the router to show the correct section when we click the navigation links, and it means we can change the route paths without needing to update this component (unless we also change the route names).
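Link has another convenience worth knowing about: an activeClassName prop, which the router applies whenever the link's route matches the current location, so we can highlight the active section without tracking any state ourselves. A sketch, where the is-active class name is an arbitrary choice:

<Link className="mdl-navigation__link"
      activeClassName="is-active"
      to="login">
    Login
</Link>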
Creating public pages

Now that we can easily switch between CMS sections, we can use the same trick to show the public pages of our website. Let's create a new HTML page just for these:

<!DOCTYPE html>
<html>
    <head>
        <script src="/node_modules/babel-core/browser.js"></script>
        <script src="/node_modules/systemjs/dist/system.js"></script>
    </head>
    <body>
        <div class="react"></div>
        <script>
            System.config({
                "transpiler": "babel",
                "map": {
                    "react": "/examples/react/react",
                    "react-dom": "/examples/react/react-dom",
                    "router": "/node_modules/react-router/umd/ReactRouter"
                },
                "baseURL": "../",
                "defaultJSExtensions": true
            });

            System.import("examples/index");
        </script>
    </body>
</html>

This is a reduced form of admin.html, without the Material Design resources. I think we can ignore the appearance of these pages for the moment, while we focus on the navigation. The public pages are almost 100% static, so we can use stateless components for them. Let's begin with the layout component:

var App = function(props) {
    return (
        <div className="layout">
            <Nav pages={props.route.backend.all()} />
            {props.children}
        </div>
    );
};

This is similar to the admin App component, but it also has a reference to a Backend. We provide that when we render the components:

var backend = new Backend();

ReactDOM.render(
    <Router history={browserHistory}>
        <Route path="/" component={App} backend={backend}>
            <IndexRoute component={StaticPage} backend={backend} />
            <Route path="pages/:page" component={StaticPage} backend={backend} />
        </Route>
    </Router>,
    document.querySelector(".react")
);

For this to work, we also need to define StaticPage:

var StaticPage = function(props) {
    var id = props.params.page || 1;
    var backend = props.route.backend;

    var pages = backend.all().filter(
        (page) => {
            // == on purpose: the URL parameter is a string,
            // while page.id is a number.
            return page.id == id;
        }
    );

    if (pages.length < 1) {
        return <div>not found</div>;
    }

    return (
        <div className="page">
            <h1>{pages[0].title}</h1>
            {pages[0].content}
        </div>
    );
};

This component is more interesting. We access the params property, which is a map of all the URL path parameters defined for this route. We have :page in the path (pages/:page), so when we go to pages/1, the params object is {"page": 1}. We also pass a Backend to StaticPage, so we can fetch all the pages and filter them by page.id. If no page parameter is provided, we default to 1. After filtering, we check whether any pages matched. If not, we return a simple "not found" message. Otherwise, we render the title and content of the first match (we expect exactly one, since ids are unique). We now have a working page for the public section of the website; a sketch of the Backend it relies on follows below.
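For this filter to work, Backend.all() only needs to return an array of page objects with id, title, and content fields. A minimal in-memory stub, with hypothetical page data, might look like this:

// A hypothetical in-memory Backend, just enough for StaticPage.
class Backend {
    constructor() {
        this.pages = [
            { id: 1, title: "Home", content: "Welcome to the site." },
            { id: 2, title: "About", content: "A little about us." }
        ];
    }

    // Returns every page; StaticPage filters the result by id.
    all() {
        return this.pages;
    }
}

export default Backend;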
Summary

In this article, we learned how the browser stores URL history and how we can manipulate it to load different sections without full page reloads.

Resources for Article:

Further resources on this subject:
Introduction to Akka [article]
An Introduction to ReactJs [article]
ECMAScript 6 Standard [article]