How-To Tutorials - Web Development

1797 Articles

Forms and Views

Packt
13 Jan 2016
12 min read
In this article by Aidas Bendoraitis, author of the book Web Development with Django Cookbook - Second Edition, we will cover the following topics:

- Passing HttpRequest to the form
- Utilizing the save method of the form

(For more resources related to this topic, see here.)

Introduction

When the database structure is defined in the models, we need some views to let the users enter data or show the data to the people. In this chapter, we will focus on the views managing forms, the list views, and views generating output other than HTML. For the simplest examples, we will leave the creation of URL rules and templates up to you (a rough sketch of these appears at the end of this article).

Passing HttpRequest to the form

The first argument of every Django view is the HttpRequest object, usually named request. It contains metadata about the request, for example, the current language code, current user, current cookies, and current session. By default, the forms that are used in views accept GET or POST parameters, files, initial data, and other parameters; however, not the HttpRequest object. In some cases, it is useful to additionally pass HttpRequest to the form, especially when you want to filter out the choices of form fields using the request data or handle saving something such as the current user or IP in the form.

In this recipe, we will see an example of a form where a person can choose a user and write a message for them. We will pass the HttpRequest object to the form in order to exclude the current user from the recipient choices; we don't want anybody to write a message to themselves.

Getting ready

Let's create a new app called email_messages and put it in INSTALLED_APPS in the settings. This app will have no models, just forms and views.

How to do it...

To complete this recipe, execute the following steps:

1. Add a new forms.py file with the message form containing two fields: the recipient selection and the message text. This form will also have an initialization method, which accepts the request object and then modifies the QuerySet for the recipient selection field:

```python
# email_messages/forms.py
# -*- coding: UTF-8 -*-
from __future__ import unicode_literals
from django import forms
from django.utils.translation import ugettext_lazy as _
from django.contrib.auth.models import User


class MessageForm(forms.Form):
    recipient = forms.ModelChoiceField(
        label=_("Recipient"),
        queryset=User.objects.all(),
        required=True,
    )
    message = forms.CharField(
        label=_("Message"),
        widget=forms.Textarea,
        required=True,
    )

    def __init__(self, request, *args, **kwargs):
        super(MessageForm, self).__init__(*args, **kwargs)
        self.request = request
        self.fields["recipient"].queryset = \
            self.fields["recipient"].queryset.exclude(pk=request.user.pk)
```

2. Then, create views.py with the message_to_user() view in order to handle the form. As you can see, the request object is passed as the first parameter to the form, as follows:

```python
# email_messages/views.py
# -*- coding: UTF-8 -*-
from __future__ import unicode_literals
from django.contrib.auth.decorators import login_required
from django.shortcuts import render, redirect

from .forms import MessageForm


@login_required
def message_to_user(request):
    if request.method == "POST":
        form = MessageForm(request, data=request.POST)
        if form.is_valid():
            # do something with the form
            return redirect("message_to_user_done")
    else:
        form = MessageForm(request)
    return render(request,
        "email_messages/message_to_user.html",
        {"form": form},
    )
```

How it works...

In the initialization method, we have the self variable that represents the instance of the form itself, we also have the newly added request variable, and then we have the rest of the positional arguments (*args) and named arguments (**kwargs). We call the super() initialization method, passing all the positional and named arguments to it so that the form is properly initialized. We then assign the request variable to a new request attribute of the form for later access in other methods of the form. Finally, we modify the queryset attribute of the recipient selection field, excluding the current user from the request.

In the view, we pass the HttpRequest object as the first argument in both situations: when the form is posted as well as when it is loaded for the first time.

See also

- The Utilizing the save method of the form recipe

Utilizing the save method of the form

To keep your views clean and simple, it is good practice to move the handling of the form data to the form itself whenever it is possible and makes sense. The common practice is to have a save() method that will save the data, perform a search, or do some other smart actions. We will extend the form that was defined in the previous recipe with a save() method, which will send an e-mail to the selected recipient.

Getting ready

We will build upon the example that is defined in the Passing HttpRequest to the form recipe.

How to do it...

To complete this recipe, execute the following two steps:

1. From Django, import the function in order to send an e-mail. Then, add the save() method to MessageForm. It will try to send an e-mail to the selected recipient and will fail quietly if any errors occur:

```python
# email_messages/forms.py
# -*- coding: UTF-8 -*-
from __future__ import unicode_literals
from django import forms
from django.utils.translation import ugettext, ugettext_lazy as _
from django.core.mail import send_mail
from django.contrib.auth.models import User


class MessageForm(forms.Form):
    recipient = forms.ModelChoiceField(
        label=_("Recipient"),
        queryset=User.objects.all(),
        required=True,
    )
    message = forms.CharField(
        label=_("Message"),
        widget=forms.Textarea,
        required=True,
    )

    def __init__(self, request, *args, **kwargs):
        super(MessageForm, self).__init__(*args, **kwargs)
        self.request = request
        self.fields["recipient"].queryset = \
            self.fields["recipient"].queryset.exclude(pk=request.user.pk)

    def save(self):
        cleaned_data = self.cleaned_data
        send_mail(
            subject=ugettext("A message from %s") % self.request.user,
            message=cleaned_data["message"],
            from_email=self.request.user.email,
            recipient_list=[cleaned_data["recipient"].email],
            fail_silently=True,
        )
```

2. Then, call the save() method from the form in the view if the posted data is valid:

```python
# email_messages/views.py
# -*- coding: UTF-8 -*-
from __future__ import unicode_literals
from django.contrib.auth.decorators import login_required
from django.shortcuts import render, redirect

from .forms import MessageForm


@login_required
def message_to_user(request):
    if request.method == "POST":
        form = MessageForm(request, data=request.POST)
        if form.is_valid():
            form.save()
            return redirect("message_to_user_done")
    else:
        form = MessageForm(request)
    return render(request,
        "email_messages/message_to_user.html",
        {"form": form},
    )
```

How it works...

Let's take a look at the form. The save() method uses the cleaned data from the form to read the recipient's e-mail address and the message. The sender of the e-mail is the current user from the request. If the e-mail cannot be sent due to an incorrect mail server configuration or another reason, it will fail silently; that is, no error will be raised.

Now, let's look at the view. When the posted form is valid, the save() method of the form will be called and the user will be redirected to the success page.

See also

- The Passing HttpRequest to the form recipe

Uploading images

In this recipe, we will take a look at the easiest way to handle image uploads. You will see an example of an app where visitors can upload images with inspirational quotes.

Getting ready

Make sure to have Pillow or PIL installed in your virtual environment or globally.

Then, let's create a quotes app and put it in INSTALLED_APPS in the settings. Then, we will add an InspirationalQuote model with three fields: the author, the quote text, and the picture, as follows:

```python
# quotes/models.py
# -*- coding: UTF-8 -*-
from __future__ import unicode_literals
import os

from django.db import models
from django.utils.timezone import now as timezone_now
from django.utils.translation import ugettext_lazy as _
from django.utils.encoding import python_2_unicode_compatible


def upload_to(instance, filename):
    now = timezone_now()
    filename_base, filename_ext = os.path.splitext(filename)
    return "quotes/%s%s" % (
        now.strftime("%Y/%m/%Y%m%d%H%M%S"),
        filename_ext.lower(),
    )


@python_2_unicode_compatible
class InspirationalQuote(models.Model):
    author = models.CharField(_("Author"), max_length=200)
    quote = models.TextField(_("Quote"))
    picture = models.ImageField(_("Picture"),
        upload_to=upload_to,
        blank=True,
        null=True,
    )

    class Meta:
        verbose_name = _("Inspirational Quote")
        verbose_name_plural = _("Inspirational Quotes")

    def __str__(self):
        return self.quote
```

In addition, we created an upload_to() function, which sets the path of the uploaded picture to be something similar to quotes/2015/04/20150424140000.png. As you can see, we use the date timestamp as the filename to ensure its uniqueness. We pass this function to the picture image field.

How to do it...

Execute these steps to complete the recipe:

1. Create the forms.py file and put a simple model form there:

```python
# quotes/forms.py
# -*- coding: UTF-8 -*-
from __future__ import unicode_literals
from django import forms

from .models import InspirationalQuote


class InspirationalQuoteForm(forms.ModelForm):
    class Meta:
        model = InspirationalQuote
        fields = ["author", "quote", "picture"]
```

2. In the views.py file, put a view that handles the form. Don't forget to pass the FILES dictionary-like object to the form. When the form is valid, trigger the save() method as follows:

```python
# quotes/views.py
# -*- coding: UTF-8 -*-
from __future__ import unicode_literals
from django.shortcuts import redirect, render

from .forms import InspirationalQuoteForm


def add_quote(request):
    if request.method == "POST":
        form = InspirationalQuoteForm(
            data=request.POST,
            files=request.FILES,
        )
        if form.is_valid():
            quote = form.save()
            return redirect("add_quote_done")
    else:
        form = InspirationalQuoteForm()
    return render(request,
        "quotes/change_quote.html",
        {"form": form},
    )
```

3. Lastly, create a template for the view in templates/quotes/change_quote.html. It is very important to set the enctype attribute of the HTML form to multipart/form-data, otherwise the file upload won't work:

```html
{# templates/quotes/change_quote.html #}
{% extends "base.html" %}
{% load i18n %}

{% block content %}
    <form method="post" action="" enctype="multipart/form-data">
        {% csrf_token %}
        {{ form.as_p }}
        <button type="submit">{% trans "Save" %}</button>
    </form>
{% endblock %}
```

How it works...

Django model forms are forms created from models. They provide all the fields from the model, so you don't need to define them again. In the preceding example, we created a model form for the InspirationalQuote model. When we save the form, the form knows how to save each field in the database as well as upload the files and save them in the media directory.

There's more...

As a bonus, we will see an example of how to generate a thumbnail out of the uploaded image. Using this technique, you could also generate several other specific versions of the image, such as a list version, a mobile version, and a desktop version.

We will add three methods to the InspirationalQuote model (quotes/models.py): save(), create_thumbnail(), and get_thumbnail_picture_url(). When the model is being saved, we will trigger the creation of the thumbnail. When we need to show the thumbnail in a template, we can get its URL using {{ quote.get_thumbnail_picture_url }}. The method definitions are as follows:

```python
# quotes/models.py
# ...
from PIL import Image
from django.conf import settings
from django.core.files.storage import default_storage as storage

THUMBNAIL_SIZE = getattr(
    settings,
    "QUOTES_THUMBNAIL_SIZE",
    (50, 50),
)


class InspirationalQuote(models.Model):
    # ...
    def save(self, *args, **kwargs):
        super(InspirationalQuote, self).save(*args, **kwargs)
        # generate the thumbnail version of the picture
        self.create_thumbnail()

    def create_thumbnail(self):
        if not self.picture:
            return ""
        file_path = self.picture.name
        filename_base, filename_ext = os.path.splitext(file_path)
        thumbnail_file_path = "%s_thumbnail.jpg" % filename_base
        if storage.exists(thumbnail_file_path):
            # the thumbnail version already exists
            return "exists"
        try:
            # resize the original image and
            # save the thumbnail version in the storage
            f = storage.open(file_path, "r")
            image = Image.open(f)
            width, height = image.size
            if width > height:
                delta = width - height
                left = int(delta / 2)
                upper = 0
                right = height + left
                lower = height
            else:
                delta = height - width
                left = 0
                upper = int(delta / 2)
                right = width
                lower = width + upper
            image = image.crop((left, upper, right, lower))
            image = image.resize(THUMBNAIL_SIZE, Image.ANTIALIAS)
            f_mob = storage.open(thumbnail_file_path, "w")
            image.save(f_mob, "JPEG")
            f_mob.close()
            return "success"
        except:
            return "error"

    def get_thumbnail_picture_url(self):
        if not self.picture:
            return ""
        file_path = self.picture.name
        filename_base, filename_ext = os.path.splitext(file_path)
        thumbnail_file_path = "%s_thumbnail.jpg" % filename_base
        if storage.exists(thumbnail_file_path):
            # if the thumbnail version exists, return its URL path
            return storage.url(thumbnail_file_path)
        # return the original as a fallback
        return self.picture.url
```

In the preceding methods, we are using the file storage API instead of juggling the filesystem directly, as we could then exchange the default storage with Amazon S3 buckets or other storage services and the methods would still work.

How does creating the thumbnail work? If the original file was saved as quotes/2014/04/20140424140000.png, we check whether the quotes/2014/04/20140424140000_thumbnail.jpg file exists; if it doesn't, we open the original image, crop it from the center, resize it to 50 x 50 pixels, and save it to the storage. The get_thumbnail_picture_url() method checks whether the thumbnail version exists in the storage and returns its URL. If the thumbnail version does not exist, the URL of the original image is returned as a fallback.

Summary

In this article, we learned about passing an HttpRequest to the form and utilizing the save method of the form.

You can find various books on Django on our website:

- Learning Website Development with Django (https://www.packtpub.com/web-development/learning-website-development-django)
- Instant Django 1.5 Application Development Starter (https://www.packtpub.com/web-development/instant-django-15-application-development-starter)
- Django Essentials (https://www.packtpub.com/web-development/django-essentials)

Resources for Article:

Further resources on this subject:

- So, what is Django? [article]
- Code Style in Django [article]
- Django JavaScript Integration: jQuery In-place Editing Using Ajax [article]
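A note on the URL rules and templates that the recipes leave to you: the following is a hedged sketch of what a Django 1.8-era URL configuration for the first two recipes might look like. The URL paths and the "done" page shown here are illustrative assumptions, not code from the book; only the view and the "message_to_user_done" name come from the recipes above.

```python
# urls.py -- an illustrative sketch only; the paths and the
# "done" page below are assumptions, not code from the book
from django.conf.urls import url
from django.views.generic import TemplateView

from email_messages.views import message_to_user

urlpatterns = [
    url(r"^messages/send/$", message_to_user,
        name="message_to_user"),
    # the redirect target used in the view; here it simply renders
    # a hypothetical "message sent" confirmation template
    url(r"^messages/sent/$",
        TemplateView.as_view(
            template_name="email_messages/message_to_user_done.html"
        ),
        name="message_to_user_done"),
]
```

The email_messages/message_to_user.html template itself can mirror the change_quote.html template shown above, minus the enctype attribute, as no files are uploaded.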


Interactive Crime Map Using Flask

Packt
12 Jan 2016
18 min read
In this article by Gareth Dwyer, author of the book Flask By Example, we will cover how to set up a MySQL database on our VPS and create a database for the crime data. We'll follow on from this by setting up a basic page containing a map and a textbox. We'll see how to link Flask to MySQL by storing data entered into the textbox in our database.

We won't be using an ORM for our database queries or a JavaScript framework for user input and interaction. This means that there will be some laborious writing of SQL and vanilla JavaScript, but it's important to fully understand why tools and frameworks exist, and what problems they solve, before diving in and using them blindly.

(For more resources related to this topic, see here.)

We'll cover the following topics:

- Introduction to SQL databases
- Installing MySQL on our VPS
- Connecting to MySQL from Python and creating the database
- Connecting to MySQL from Flask and inserting data

Setting up

We'll create a new git repository for our new code base, since although some of the setup will be similar, our new project should be completely unrelated to our first one. If you need more help with this step, head back to the setup of the first project and follow the detailed instructions there. If you're feeling confident, see if you can do it just with the following summary:

Head over to the website for Bitbucket, GitHub, or whichever hosting platform you used for the first project. Log in and use their "Create a new repository" functionality. Name your repo crimemap, and take note of the URL you're given. On your local machine, fire up a terminal and run the following commands:

```
mkdir crimemap
cd crimemap
git init
git remote add origin <git repository URL>
```

We'll leave this repository empty for now as we need to set up a database on our VPS. Once we have the database installed, we'll come back here to set up our Flask project.

Understanding relational databases

In its simplest form, a relational database management system, such as MySQL, is a glorified spreadsheet program, such as Microsoft Excel: we store data in rows and columns. Every row is a "thing" and every column is a specific piece of information about the thing in the relevant row. I put "thing" in inverted commas because we're not limited to storing objects. In fact, the most common example, both in the real world and in explaining databases, is data about people. A basic database storing information about customers of an e-commerce website could look something like the following:

```
ID | First Name | Surname | Email Address     | Telephone
---+------------+---------+-------------------+-----------------
1  | Frodo      | Baggins | [email protected] | +1 111 111 1111
2  | Bilbo      | Baggins | [email protected] | +1 111 111 1010
3  | Samwise    | Gamgee  | [email protected] | +1 111 111 1001
```

If we look from left to right in a single row, we get all the information about one person. If we look at a single column from top to bottom, we get one piece of information (for example, an e-mail address) for everyone. Both can be useful—if we want to add a new person or contact a specific person, we're probably interested in a specific row. If we want to send a newsletter to all our customers, we're just interested in the e-mail column.

So why can't we just use spreadsheets instead of databases then? Well, if we take the example of an e-commerce store further, we quickly see the limitations. If we want to store a list of all the items we have on offer, we can create another table similar to the preceding one, with columns such as "Item name", "Description", "Price", and "Quantity in stock". Our model continues to be useful.
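As an aside, here is a rough sketch of how the preceding customer table might be declared in MySQL. The table and column names are illustrative assumptions rather than code from the book:

```sql
-- illustrative only: one guess at how the preceding
-- spreadsheet might look as a MySQL table
CREATE TABLE customers (
    id INT NOT NULL AUTO_INCREMENT,
    first_name VARCHAR(100),
    surname VARCHAR(100),
    email VARCHAR(200),
    telephone VARCHAR(20),
    PRIMARY KEY (id)
);
```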
But now, if we want to store a list of all the items Frodo has ever purchased, there's no good place to put the data. We could add 1,000 columns to our customer table ("Purchase 1", "Purchase 2", and so on up to "Purchase 1000") and hope that Frodo never buys more than 1,000 items. This isn't scalable or easy to work with: how do we get the description for the item Frodo purchased last Tuesday? Do we just store the item's name in our new column? What happens with items that don't have unique names?

Soon, we realize that we need to think about it backwards. Instead of storing the items purchased by a person in the "Customers" table, we create a new table called "Orders" and store a reference to the customer in every order. Thus, an order knows which customer it belongs to, but a customer has no inherent knowledge of what orders belong to them.

While our model still fits into a spreadsheet at the push of a button, as we grow our data model and data size, our spreadsheet becomes cumbersome. We need to perform complicated queries such as "I want to see all the items that are in stock and have been ordered at least once in the last 6 months and cost more than $10."

Enter relational database management systems (RDBMS). They've been around for decades and are a tried and tested way of solving a common problem—storing data with complicated relations in an organized and accessible manner. We won't be touching on their full capabilities in our crime map (in fact, we could probably store our data in a .txt file if we needed to), but if you're interested in building web applications, you will need a database at some point. So, let's start small and add the powerful MySQL tool to our growing toolbox.

I highly recommend learning more about databases. If the taster you experience while building our current project takes your fancy, go read and learn about databases. The history of RDBMS is interesting, and the complexities and subtleties of normalization and database varieties (including NoSQL databases, which we'll see something of in our next project) deserve more study time than we can devote to them in a book that focuses on Python web development.

Installing and configuring MySQL

Installing and configuring MySQL is an extremely common task. You can therefore find it in prebuilt images or in scripts that build entire stacks for you. A common stack is called the LAMP (Linux, Apache, MySQL, and PHP) stack, and many VPS providers offer a one-click LAMP stack image. As we are already using Linux and have already installed Apache manually, after installing MySQL, we'll be very close to the traditional LAMP stack, just using the P for Python instead of PHP. In keeping with our goal of "education first", we'll install MySQL manually and configure it through the command line instead of installing a GUI control panel. If you've used MySQL before, feel free to set it up as you see fit.

Installing MySQL on our VPS

Installing MySQL on our server is quite straightforward. SSH into your VPS and run the following commands:

```
sudo apt-get update
sudo apt-get install mysql-server
```

You should see an interface prompting you for a root password for MySQL. Enter a password of your choice and repeat it when prompted. Once the installation has completed, you can get a live SQL shell by typing the following command and entering the password you chose earlier:

```
mysql -p
```

We could create a database and schema using this shell, but we'll be doing that through Python instead, so hit Ctrl + C to terminate the MySQL shell if you opened it.

Installing Python drivers for MySQL

Because we want to use Python to talk to our database, we need to install another package. There are two main MySQL connectors for Python: PyMySQL and MySQLdb. The first is preferable from a simplicity and ease-of-use point of view. It is a pure Python library, meaning that it has no dependencies. MySQLdb is a C extension, and therefore has some dependencies, but is, in theory, a bit faster. They work very similarly once installed. To install PyMySQL, run the following (still on your VPS):

```
sudo pip install pymysql
```

Creating our crimemap database in MySQL

Some knowledge of SQL's syntax will be useful for the rest of this article, but you should be able to follow either way. The first thing we need to do is create a database for our web application. If you're comfortable using a command-line editor, you can create the following scripts directly on the VPS, as we won't be running them locally, and this can make them easier to debug. However, developing over an SSH session is far from ideal, so I recommend that you write them locally and use git to transfer them to the server before running.

This can make debugging a bit frustrating, so be extra careful in writing these scripts. If you want, you can get them directly from the code bundle that comes with this book. In this case, you simply need to populate the password field correctly and everything should work.

Creating a database setup script

In the crimemap directory where we initialized our git repo in the beginning, create a Python file called db_setup.py, containing the following code:

```python
import pymysql
import dbconfig

connection = pymysql.connect(host='localhost',
                             user=dbconfig.db_user,
                             passwd=dbconfig.db_password)
try:
    with connection.cursor() as cursor:
        sql = "CREATE DATABASE IF NOT EXISTS crimemap"
        cursor.execute(sql)
        sql = """CREATE TABLE IF NOT EXISTS crimemap.crimes (
            id int NOT NULL AUTO_INCREMENT,
            latitude FLOAT(10,6),
            longitude FLOAT(10,6),
            date DATETIME,
            category VARCHAR(50),
            description VARCHAR(1000),
            updated_at TIMESTAMP,
            PRIMARY KEY (id)
        )"""
        cursor.execute(sql)
    connection.commit()
finally:
    connection.close()
```

Let's take a look at what this code does. First, we import the pymysql library we just installed. We also import dbconfig, which we'll create locally in a bit and populate with the database credentials (we don't want to store these in our repository). Then, we create a connection to our database using localhost (because our database is installed on the same machine as our code) and the credentials that don't exist yet.

Now that we have a connection to our database, we can get a cursor. You can think of a cursor as being a bit like the blinking object in your word processor that indicates where text will appear when you start typing. A database cursor is an object that points to a place in the database where we want to create, read, update, or delete data. Once we start dealing with database operations, there are various exceptions that could occur. We'll always want to close our connection to the database, so we create a cursor (and do all subsequent operations) inside a try block with a connection.close() in a finally block (the finally block will get executed whether or not the try block succeeds). The cursor is also a resource, so we'll grab one and use it in a with block so that it'll automatically be closed when we're done with it. With the setup done, we can start executing SQL code.

Creating the database

SQL reads similarly to English, so it's normally quite straightforward to work out what existing SQL does, even if it's a bit more tricky to write new code. Our first SQL statement creates a database (crimemap) if it doesn't already exist (this means that if we come back to this script, we can leave this line in without deleting the entire database every time). We create our first SQL statement as a string and use the variable sql to store it. Then we execute the statement using the cursor we created.

Using the database setup script

We save our script locally and push it to the repository using the following commands:

```
git add db_setup.py
git commit -m "database setup script"
git push origin master
```

We then SSH to our VPS and clone the new repository to our /var/www directory using the following commands:

```
ssh [email protected]
cd /var/www
git clone <your-git-url>
cd crimemap
```

Adding credentials to our setup script

Now, we still don't have the credentials that our script relies on. We'll do the following things before using our setup script:

1. Create the dbconfig.py file with the database username and password.
2. Add this file to .gitignore to prevent it from being added to our repository.

The following are the steps to do so:

Create and edit dbconfig.py using the nano command:

```
nano dbconfig.py
```

Then, type the following (using the password you chose when you installed MySQL):

```python
db_user = "root"
db_password = "<your-mysql-password>"
```

Save it by hitting Ctrl + X and entering Y when prompted. Now, use similar nano commands to create, edit, and save .gitignore, which should contain this single line:

```
dbconfig.py
```

Running our database setup script

With that done, you can run the following command:

```
python db_setup.py
```

Assuming everything goes smoothly, you should now have a database with a table to store crimes. Python will output any SQL errors, allowing you to debug if necessary. If you make changes to the script from the server, run the same git add, git commit, and git push commands that you did from your local machine.

That concludes our preliminary database setup! Now we can create a basic Flask project that uses our database.

Creating an outline for our Flask app

We're going to start by building a skeleton of our crime map application. It'll be a basic Flask application with a single page that:

- Displays all data in the crimes table of our database
- Allows users to input data and stores this data in the database
- Has a "clear" button that deletes all the previously input data

Although what we're going to be storing and displaying can't really be described as "crime data" yet, we'll be storing it in the crimes table that we created earlier. We'll just be using the description field for now, ignoring all the other ones.

The process to set up the Flask application is very similar to what we used before. We're going to separate out the database logic into a separate file, leaving our main crimemap.py file for the Flask setup and routing.

Setting up our directory structure

On your local machine, change to the crimemap directory. If you created the database setup script on the server or made any changes to it there, then make sure you sync the changes locally. Then, create the templates directory and touch the files we're going to be using, as follows:

```
cd crimemap
git pull origin master
mkdir templates
touch templates/home.html
touch crimemap.py
touch dbhelper.py
```

Looking at our application code

The crimemap.py file contains nothing unexpected and should be entirely familiar from our headlines project. The only thing to point out is the DBHelper class, whose code we'll see next. We simply create a global DBHelper instance right after initializing our app and then use it in the relevant methods to grab data from, insert data into, or delete all data from the database.

```python
from dbhelper import DBHelper
from flask import Flask
from flask import render_template
from flask import request

app = Flask(__name__)
DB = DBHelper()


@app.route("/")
def home():
    try:
        data = DB.get_all_inputs()
    except Exception as e:
        print e
        data = None
    return render_template("home.html", data=data)


@app.route("/add", methods=["POST"])
def add():
    try:
        data = request.form.get("userinput")
        DB.add_input(data)
    except Exception as e:
        print e
    return home()


@app.route("/clear")
def clear():
    try:
        DB.clear_all()
    except Exception as e:
        print e
    return home()


if __name__ == '__main__':
    app.run(debug=True)
```

Looking at our SQL code

There's a little bit more SQL to learn from our database helper code. In dbhelper.py, we need the following:

```python
import pymysql
import dbconfig


class DBHelper:
    def connect(self, database="crimemap"):
        return pymysql.connect(host='localhost',
                               user=dbconfig.db_user,
                               passwd=dbconfig.db_password,
                               db=database)

    def get_all_inputs(self):
        connection = self.connect()
        try:
            query = "SELECT description FROM crimes;"
            with connection.cursor() as cursor:
                cursor.execute(query)
                return cursor.fetchall()
        finally:
            connection.close()

    def add_input(self, data):
        connection = self.connect()
        try:
            query = "INSERT INTO crimes (description) VALUES ('{}');".format(data)
            with connection.cursor() as cursor:
                cursor.execute(query)
            connection.commit()
        finally:
            connection.close()

    def clear_all(self):
        connection = self.connect()
        try:
            query = "DELETE FROM crimes;"
            with connection.cursor() as cursor:
                cursor.execute(query)
            connection.commit()
        finally:
            connection.close()
```

As in our setup script, we need to make a connection to our database and then get a cursor from our connection in order to do anything meaningful. Again, we perform all our operations in try: ... finally: blocks in order to ensure that the connection is closed.

In our helper code, we see three of the four main database operations. CRUD (Create, Read, Update, and Delete) describes the basic database operations. We are either creating and inserting new data, or reading, modifying, or deleting existing data. We have no need to update data in our basic app, but creating, reading, and deleting are certainly useful.

Creating our view code

Python and SQL code is fun to write, and it is indeed the main part of our application. However, at the moment, we have a house without doors or windows—the difficult and impressive bit is done, but it's unusable. Let's add a few lines of HTML to allow the world to interact with the code we've written. In /templates/home.html, add the following:

```html
<html>
<head>
    <title>Crime Map</title>
</head>
<body>
    <h1>Crime Map</h1>
    <form action="/add" method="POST">
        <input type="text" name="userinput">
        <input type="submit" value="Submit">
    </form>
    <a href="/clear">clear</a>
    {% for userinput in data %}
        <p>{{ userinput }}</p>
    {% endfor %}
</body>
</html>
```

There's nothing we haven't seen before. We have a form with a single text input box to add data to our database by calling the /add function of our app, and directly below it, we loop through all the existing data and display each piece within <p> tags.

Running the code on our VPS

Finally, we just need to make our code accessible to the world. This means pushing it to our git repo, pulling it onto the VPS, and configuring Apache to serve it. Run the following commands locally:

```
git add .
git commit -m "Skeleton CrimeMap"
git push origin master
ssh <username>@<vps-ip-address>
```

And on your VPS, use the following commands:

```
cd /var/www/crimemap
git pull origin master
```

Now, we need a .wsgi file to link our Python code to Apache:

```
nano crimemap.wsgi
```

The .wsgi file should contain the following:

```python
import sys
sys.path.insert(0, "/var/www/crimemap")
from crimemap import app as application
```

Hit Ctrl + X and then Y when prompted to save.

We also need to create a new Apache .conf file and set this as the default (instead of the headlines.conf file that is our current default), as follows:

```
cd /etc/apache2/sites-available
nano crimemap.conf
```

This file should contain the following:

```
<VirtualHost *>
    ServerName example.com
    WSGIScriptAlias / /var/www/crimemap/crimemap.wsgi
    WSGIDaemonProcess crimemap
    <Directory /var/www/crimemap>
        WSGIProcessGroup crimemap
        WSGIApplicationGroup %{GLOBAL}
        Order deny,allow
        Allow from all
    </Directory>
</VirtualHost>
```

This is so similar to the headlines.conf file we created for our previous project that you might find it easier to just copy that one and substitute code as necessary.

Finally, we need to deactivate the old site (later on, we'll look at how to run multiple sites simultaneously off the same server) and activate the new one:

```
sudo a2dissite headlines.conf
sudo a2ensite crimemap.conf
sudo service apache2 reload
```

Now, everything should be working. If you copied the code out manually, it's almost certain that there's a bug or two to deal with. Don't be discouraged by this—remember that debugging is expected to be a large part of development! If necessary, do a tail -f on /var/log/apache2/error.log while you load the site in order to see any errors. If this fails, add some print statements to crimemap.py and dbhelper.py to narrow down the places where things are breaking.

Once everything is working, you should see the input form and the stored data in your browser. Notice how each piece of data we get from the database is a tuple, which is why it is surrounded by brackets and has a trailing comma. This is because we selected only a single field (description) from our crimes table when we could, in theory, be dealing with many columns for each crime (and soon will be). A possible cleanup for this is sketched below.

Summary

That's it for the introduction to our crime map project.

Resources for Article:

Further resources on this subject:

- Web Scraping with Python [article]
- Python 3: Building a Wiki Application [article]
- Using memcached with Python [article]
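Following up on the tuple observation above, here is one hedged way to flatten the rows before they reach the template. This is a possible tweak, not code from the book:

```python
# a possible tweak to DBHelper.get_all_inputs(), not from the book:
# unpack each single-column row tuple into a plain string
def get_all_inputs(self):
    connection = self.connect()
    try:
        query = "SELECT description FROM crimes;"
        with connection.cursor() as cursor:
            cursor.execute(query)
            # each row arrives as a 1-tuple such as ('some text',)
            return [row[0] for row in cursor.fetchall()]
    finally:
        connection.close()
```

With this change, the template would print clean strings instead of bracketed tuples.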


Getting started with Ember.js – Part 2

Daniel Ochoa
11 Jan 2016
5 min read
In Part 1 of this blog, we got started with Ember.js by examining how to set up your development environment from beginning to end using ember-cli, Ember's build tool. Ember-cli minifies and concatenates your JavaScript, giving you a strong conventional project structure and a powerful add-on system for extensions. In this Part 2 post, I'll guide you through setting up a very basic todo-like Ember.js application to get your feet wet with actual Ember.js development.

Setting up a more detailed overview for the posts

Feel free to change the title of our app header (see Part 1). Go to app/templates/application.hbs and change the wording inside the h2 tag to something like "Funny posts" or anything you'd like.

Let's change our app so that when a user clicks on the title of a post, it takes them to a different route based on the id of the post, for example, /posts/1bbe3. By doing so, we are telling Ember to display a different route and template. Next, let's run the following on the terminal:

```
ember generate route post
```

This will modify our app/router.js file by creating a route file for our post and a template. Let's go ahead and open the app/router.js file to make sure it looks like the following:

```javascript
import Ember from 'ember';
import config from './config/environment';

var Router = Ember.Router.extend({
  location: config.locationType
});

Router.map(function() {
  this.resource('posts');
  this.route('post', {path: '/posts/:post_id'});
});

export default Router;
```

In the router file, we make sure the new 'post' route has a specific path by passing it a second argument: an object that contains a key called path and a value of '/posts/:post_id'. The colon in that path means the second part of the path after /posts/ is dynamic. In this URL, we will be passing the id of the post so we can determine which specific post to load on our post route. (So far, we have posts and post routes, so don't get confused.)

Now, let's go to app/templates/posts.hbs and make sure we only have the following:

```handlebars
<ul>
  {{#each model as |post|}}
    {{#link-to 'post' post tagName='li'}}
      {{post.title}}
    {{/link-to}}
  {{/each}}
</ul>
```

As you can see, we replaced our <li> element with an Ember helper called link-to. What link-to does is generate the link for our single post route. The first argument is the name of the route, 'post'; the second argument is the actual post itself; and in the last part of the helper, we are telling Handlebars to render the link-to as an <li> element by providing the tagName property. Ember is smart enough to know that if you link to a route and pass it an object, your intent is to set the model on that route to a single post.

Now open app/templates/post.hbs and replace the contents with just the following:

```handlebars
{{model.title}}
```

Now if you refresh the app from /posts and click on a post title, you'll be taken to a different route and you'll see only the title of the post. What happens if you refresh the page at this URL? You'll see errors on the console and nothing will be displayed. This is because you arrived at this URL from the previous posts route, where you passed a single post as the argument to be the model for the current post route. When you hit refresh, you lose this step, so no model is set for the current route. You can fix that by adding the following to app/routes/post.js:

```javascript
import Ember from 'ember';

export default Ember.Route.extend({
  model(params) {
    return Ember.$.getJSON('https://www.reddit.com/tb/' + params.post_id + '.json?jsonp=?').then(result => {
      return result[0].data.children[0].data;
    });
  }
});
```

Now, whenever you refresh on a single post page, Ember will see that you don't have a model, so the model hook will be triggered on the route. In this case, it will grab the id of the post from the dynamic URL, which is passed as an argument to the model hook, and it will make a request to reddit for the relevant post. Notice that we are also returning the request promise and then filtering the results so that we return only the single post object we need.

Change the app/templates/post.hbs template to the following:

```handlebars
<div class="title">
  <h1>{{model.title}}</h1>
</div>
<div class="image">
  <img src="{{model.preview.images.firstObject.source.url}}" height="400"/>
</div>
<div class="author">
  submitted by: {{model.author}}
</div>
```

Now, if you look at an individual post, you'll get the title, image, and author for the post.

Congratulations, you've built your first Ember.js application with dynamic data and routes. Hopefully, you now have a better grasp and understanding of some basic concepts for building more ambitious web applications using Ember.

About the Author: Daniel Ochoa is a senior software engineer at Frog with a passion for crafting beautiful web and mobile experiences. His current interests are Node.js, Ember.js, Ruby on Rails, iOS development with Swift, and the Haskell language. He can be found on Twitter @DanyOchoaOzz.


Working with Events

Packt
08 Jan 2016
7 min read
In this article by Troy Miles, author of the book jQuery Essentials, we will learn that an event is the occurrence of anything that the system considers significant. It can originate in the browser, the form, the keyboard, or any other subsystem, and it can also be generated by the application via a trigger. An event can be as simple as a key press or as complex as the completion of an Ajax request.

(For more resources related to this topic, see here.)

While there are a myriad of potential events, events only matter when the application listens for them. This is also known as hooking an event. By hooking an event, you tell the browser that this occurrence is important to you and to let you know when it happens. When the event occurs, the browser calls your event handling code, passing the event object to it. The event object holds important event data, including which page element triggered it. Let's take a look at the first learned and possibly most important event, the ready event.

The ready event

The first event that programmers new to jQuery usually learn about is the ready event, sometimes referred to as the document ready event. This event signifies that the DOM is fully loaded and that jQuery is open for business. The ready event is similar to the document load event, except that it doesn't wait for all of the page's images and other assets to load. It only waits for the DOM to be ready. Also, if the ready event fires before it is hooked, the handler code will be called at least once, unlike most events.

The .ready() event can only be attached to the document element. When you think about it, it makes sense, because it fires when the DOM, also known as the Document Object Model, is fully loaded.

The .ready() event has a few different hooking styles. All of the styles do the same thing—hook the event. Which one you use is up to you. In its most basic form, the hooking code looks similar to the following:

```javascript
$(document).ready(handler);
```

As it can only be attached to the document element, the selector can be omitted, in which case the event hook looks as follows:

```javascript
$().ready(handler);
```

However, the jQuery documentation does not recommend using the preceding form. There is a still terser version of this event's hook. This version omits nearly everything, only passing an event handler to the jQuery function. It looks similar to the following:

```javascript
$(handler);
```

While all of the different styles work, I only recommend the first form because it is the clearest. While the other forms work and save a few bytes' worth of characters, they do so at the expense of code clarity. If you are worried about the number of bytes an expression uses, you should use a JavaScript minimizer instead; it will do a much more thorough job of shrinking code than you could ever do by hand.

The ready event can be hooked as many times as you'd like. When the event is triggered, the handlers are called in the order in which they were hooked. Let's take a look at an example in code:

```javascript
// ready event style no# 1
$(document).ready(function () {
  console.log("document ready event handler style no# 1");
  // we're in the event handler so the event has already fired.
  // let's hook it again and see what happens
  $(document).ready(function () {
    console.log("We get this handler even though the ready event has already fired");
  });
});

// ready event style no# 2
$().ready(function () {
  console.log("document ready event handler style no# 2");
});

// ready event style no# 3
$(function () {
  console.log("document ready event handler style no# 3");
});
```

In the preceding code, we hook the ready event three times, each time using a different hooking style. The handlers are called in the same order that they are hooked. In the first event handler, we hook the event again. As the event has been triggered already, we might expect that the handler will never be called, but we would be wrong. jQuery treats the ready event differently than other events. Its handler is always called, even if the event has already been triggered. This makes the ready event a great place for initialization and other code that must be run.

Hooking events

The ready event is different from all of the other events. Its handler will be called once, unlike other events. It is also hooked differently than other events. All of the other events are hooked by chaining the .on() method to the set of elements that you wish to use to trigger the event. The first parameter passed to the hook is the name of the event, followed by the handling function, which can either be an anonymous function or the name of a function. This is the basic pattern for event hooking:

```javascript
$(selector).on('event name', handling function);
```

The .on() method and its companion, the .off() method, were first added in version 1.7 of jQuery. For older versions of jQuery, the method used to hook events is .bind(). Neither the .bind() method nor its companion, the .unbind() method, are deprecated, but .on() and .off() are preferred over them. If you are switching from .bind(), the call to .on() is identical at its simplest levels. The .on() method has capabilities beyond those of the .bind() method, which require different sets of parameters to be passed to it.

If you would like more than one event to share the same handler, simply place the name of the next event after the previous one with a space separating them:

```javascript
$("#clickA").on("mouseenter mouseleave", eventHandler);
```

Unhooking events

The main method used to unhook an event handler is .off(). Calling it is simple; it looks similar to the following:

```javascript
$(elements).off('event name', handling function);
```

The handling function is optional, and the event name is also optional. If the event name is omitted, then all events that are attached to the elements are removed. If the event name is included, then all handlers for the specified event are removed. This can create problems. Think about the following scenario: you write a click event handler for a button. A bit later in the app's life cycle, someone else also needs to know when the button is clicked. Not wanting to interfere with already working code, they add a second handler. When their code is complete, they remove the handler as follows:

```javascript
$('#myButton').off('click');
```

As the handler was removed using only the event name, this removed not only the handler that they added but also all of the handlers for the click event. This is not what was wanted. Don't despair, however; there are two fixes for this problem:

```javascript
function clickBHandler(event) {
  console.log('Button B has been clicked, external');
}

$('#clickB').on('click', clickBHandler);
$('#clickB').on('click', function(event) {
  console.log('Button B has been clicked, anonymous');
  // turn off the 1st handler without turning off the 2nd
  $('#clickB').off('click', clickBHandler);
});
```

The first fix is to pass the event handler to the .off() method. In the preceding code, we placed two click event handlers on the button named clickB. The first event handler is installed using a function declaration, and the second is installed using an anonymous function. When the button is clicked, both of the event handlers are called. The second one turns off the first one by calling the .off() method and passing its event handler as a parameter. By passing the event handler, the .off() method is able to match the signature of the handler that you'd like to turn off. If you are not using anonymous functions, this fix works well. But what if you want to pass an anonymous function as the event handler? Is there a way to turn off one handler without turning off the other? Yes, there is: the second fix is to use event namespacing (a brief sketch of this appears at the end of this article).

Summary

In this article, we learned a lot about one of the most important constructs in modern web programming—events. They are the things that make a site interactive.

Resources for Article:

Further resources on this subject:

- Preparing Your First jQuery Mobile Project [article]
- Building a Custom Version of jQuery [article]
- Learning jQuery [article]
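The article stops short of showing event namespacing, so here is a rough illustration of how it might look. jQuery lets you append a namespace to the event name when hooking, and later remove only the handlers in that namespace; the namespace name used here is an arbitrary assumption:

```javascript
// hook the click event under a namespace of our own
$('#clickB').on('click.myModule', function (event) {
  console.log('Button B has been clicked, namespaced');
});

// later: remove only our namespaced handler, leaving any
// other click handlers on the button untouched
$('#clickB').off('click.myModule');
```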


Getting started with Ember.js – Part 1

Daniel Ochoa
08 Jan 2016
9 min read
Ember.js is a fantastic framework for developers and designers alike for building ambitious web applications. As touted by its website, Ember.js is built for productivity. Designed with the developer in mind, its friendly APIs help you get your job done fast. It also makes all the trivial choices for you. By taking care of certain architectural choices, it lets you concentrate on your application instead of reinventing the wheel or focusing on already solved problems. With Ember.js you will be empowered to rapidly prototype applications with more features for less code.

Although Ember.js follows the MVC (Model-View-Controller) design pattern, it has been slowly moving to a more component-centric approach of building web applications.

In this part 1 of 2 blog posts, I'll be talking about how to quickly get started with Ember.js. I'll go into detail on how to set up your development environment from beginning to end so you can immediately start building an Ember.js app with ember-cli, Ember's build tool. Ember-cli provides an asset pipeline to handle your assets. It minifies and concatenates your JavaScript; it gives you a strong conventional project structure and a powerful addon system for extensions. In part two, I'll guide you through setting up a very basic todo-like Ember.js application to get your feet wet with actual Ember.js development.

Setup

The first thing you need is Node.js. Follow this guide on how to install node and npm from the npmjs.com website. Npm stands for Node Package Manager and it's the most popular package manager out there for Node. Once you set up Node and npm, you can install ember-cli with the following command on your terminal:

```
npm install -g ember-cli
```

You can verify whether you have correctly installed ember-cli by running the following command:

```
ember -v
```

If you see the output of the different versions you have for ember-cli, node, npm, and your OS, it means everything is correctly set up and you are ready to start building an Ember.js application.

In order to get you more acquainted with ember-cli, you can run ember -h to see a list of useful ember-cli commands. Ember-cli gives you an easy way to install add-ons (packages created by the community to quickly set up functionality, so instead of creating a specific feature you need, someone may have already made a package for it; see http://emberobserver.com/). You can also generate a new project with ember init <app_name>, run the tests of your app with ember test, and scaffold project files with ember generate. These are just a few of the many useful commands ember-cli gives you by default. You can learn more specific subcommands for any given command by running ember <command_name> -help.

Now that you know what ember-cli is useful for, it's time to move on to building a fun example application.

Building an Ember.js app

The application we will be building is an Ember.js application that will fetch a few posts from www.reddit.com/r/funny. It will display a list of these posts with some information about them, such as title, author, and date. The purpose of this example application is to show you how easy it is to build an Ember.js application that fetches data from a remote API and displays it. It will also show you how to leverage one of the most powerful features of Ember, its router.

Now that you are more acquainted with ember-cli, let's create the skeleton of our application. There's no need to worry and think about what packages and features from Ember we will need, and we don't even have to think about what local server to run in order to display our work in progress. First things first, run the following command on your terminal:

```
ember new ember-r-funny
```

We are running the ember new command with the argument ember-r-funny, which is what we are naming our app. Feel free to change the name to anything you'd like. From here, you'll see a list of files being created. After it finishes, you'll have a directory with the app name created inside the directory where you are currently working.

If you go into this directory and inspect the files, you'll see quite a few directories and files. For now, don't pay too much attention to these files except for the directory called app. This is where you'll mostly be working.

On your terminal, if you go to the base path of your project (just inside ember-r-funny/) and run ember server, ember-cli will run a local server for you to see your app. If you now go on your browser to http://localhost:4200, you will see your newly created application, which is just a blank page with the words "Welcome to Ember". If you go into app/templates/application.hbs, change the h2 tag, and save the file, you'll notice that your browser automatically refreshes the page.

One thing to note before we continue is that ember-cli projects allow you to use the ES6 JavaScript syntax. ES6 is a significant update to the language, and although current browsers do not use it, ember-cli will compile your project to browser-readable ES5. For a more in-depth explanation of ES6, visit https://hacks.mozilla.org/2015/04/es6-in-depth-an-introduction/

Creating your first resource

One of the strong points of Ember is the router. The router is responsible for displaying templates, loading data, and otherwise setting up application state. The next thing we need to do is to set up a route to display the /r/funny posts from reddit. Run the following command to create our base route:

```
ember generate route index
```

This will generate an index route. In Ember-speak, the index route is the base or lowest-level route of the app. Now go to app/routes/index.js and make sure the route looks like the following:

```javascript
import Ember from 'ember';

export default Ember.Route.extend({
  beforeModel() {
    this.transitionTo('posts');
  }
});
```

This is telling the app that whenever a user lands on our base URL '/', it should transition to 'posts'. Next, run the following command to generate our posts resource:

```
ember generate resource posts
```

If you open the app/router.js file, you'll see that this.route('posts') was added. Change this to this.resource('posts') instead (since we want to deal with a resource and not a route). It should look like the following:

```javascript
import Ember from 'ember';
import config from './config/environment';

var Router = Ember.Router.extend({
  location: config.locationType
});

Router.map(function() {
  this.resource('posts');
});

export default Router;
```

In this router.js file, we've created a 'posts' resource. So far, we've told the app to take the user to '/posts' whenever the user goes to our app. Next, we'll set up what data and templates to display when the user lands on '/posts'.

Thanks to the generator, we now have a route file for posts under app/routes/posts.js. Open this file and make sure that it looks like the following:

```javascript
import Ember from 'ember';

export default Ember.Route.extend({
  model() {
    return Ember.$.getJSON('https://www.reddit.com/r/funny.json?jsonp=?&limit=10').then(result => {
      return Ember.A(result.data.children).mapBy('data');
    });
  }
});
```

Ember does the asynchronous fetching of data in the model hook of our posts route. Routes are where we fetch data so we can consume it, and in this case, we are hitting reddit/r/funny and fetching the latest 10 posts. Once the data is returned, we filter out the unnecessary properties from the response so we can return an array with our 10 reddit post entries, through the use of a handy function provided by Ember called mapBy. One important thing to note is that you always need to return something in the model hook, be it an array, an object, or a Promise (which is what we are doing in this case; for a more in-depth explanation of promises, you can read more here).

Now that we have our route wired up to fetch the information, let's open the app/templates/posts.hbs file, remove the current contents, and add the following:

```handlebars
<ul>
  {{#each model as |post|}}
    <li>{{post.title}}</li>
  {{/each}}
</ul>
```

This is HTML mixed with the Handlebars syntax. Handlebars is the templating engine Ember.js uses to display your dynamic content. What this .hbs template is doing here is looping through our model and displaying the title property for each object inside the model array. If you haven't noticed yet, Ember.js is smart enough to know when the data has returned from the server and then displays it, so we don't need to handle any callback functions as far as the model hook is concerned.

At this point, it may be normal to see some deprecation warnings on the console, but if you see an error with the words 'refused to load the script https://www.reddit.com/r/funny.json..', you need to add the following key and value to the ENV object in the config/environment.js file:

```javascript
contentSecurityPolicy: {
  'script-src': "'self' https://www.reddit.com"
},
```

By default, ember-cli will prevent you from doing external requests and fetching external resources from different domain names. This is a security feature, so we need to whitelist the reddit domain name so we can make requests against it. At this point, if you go to localhost:4200, you should be redirected to the /posts route and see a list of reddit post titles.

Congratulations, you've just created a simple Ember.js app that displays the titles of some reddit posts inside an HTML list element. So far we've added a few lines of code here and there, and we already have most of what we need. In Part 2 of this blog, we will set up a more detailed view for each of our reddit posts.

About the Author: Daniel Ochoa is a senior software engineer at Frog with a passion for crafting beautiful web and mobile experiences. His current interests are Node.js, Ember.js, Ruby on Rails, iOS development with Swift, and the Haskell language. He can be found on Twitter @DanyOchoaOzz.

article-image-courses-users-and-roles

Courses, Users, and Roles

Packt
30 Dec 2015
9 min read
In this article, Alex Büchner, the author of Moodle 3 Administration, Third Edition, gives an overview of Moodle courses, users, and roles. The three concepts are inherently intertwined and none of them can be used without the other two. We will deal with the basics of the three core elements and show how they work together. Let's see what they are:

Moodle courses: Courses are central to Moodle, as this is where learning takes place. Teachers upload their learning resources, create activities, assist in learning and grade work, monitor progress, and so on. Students, on the other hand, read, listen to, or watch learning resources, participate in activities, submit work, collaborate with others, and so on.

Moodle users: These are the individuals accessing our Moodle system. Typical users are students and teachers/trainers, but there are also others, such as teaching assistants, managers, parents, assessors, examiners, or guests. Oh, and the administrator, of course!

Moodle roles: Roles are effectively permissions that specify which features users are allowed to access and, also, where and when (in Moodle) they can access them.

Bear in mind that this article only covers the basic concepts of these three core elements.

(For more resources related to this topic, see here.)

A high-level overview

To give you an overview of courses, users, and roles, let's have a look at the following diagram. It shows nicely how central the three concepts are and also how other features are related to them. Again, all of their intricacies will be dealt with in due course, so for now, just start getting familiar with some Moodle terminology. Let's start at the bottom-left and cycle through the pyramid clockwise. Users have to go through an Authentication process to get access to Moodle. They then have to go through the Enrolments step to be able to participate in Courses, which themselves are organized into Categories. Groups & Cohorts are different ways to group users at course level or site-wide. Users are granted Roles in particular Contexts. Which role is allowed to do what, and which isn't, depends entirely on the Permissions set within that role.

The diagram also demonstrates a catch-22 situation. If we start with users, we have no courses to enroll them into (except the front page); if we start with courses, we have no users who can participate in them. Not to worry though. Moodle lets us go back and forth between any administrative areas and, often, perform multiple tasks at once.

Moodle courses

Moodle manages activities and stores resources in courses, and this is where learning and collaboration takes place. Courses themselves belong to categories, which are organized hierarchically, similar to folders on our local hard drive. Moodle comes with a default category called Miscellaneous, which is sufficient to show the basics of courses. Moodle is a course-centric system.

To begin with, let's create the first course. To do so, go to Courses | Manage courses and categories. Here, select the Miscellaneous category. Then, select the Create new course link, and you will be directed to the screen where course details have to be entered. For now, let's focus on the two compulsory fields, namely Course full name and Course short name. The former is displayed at various places in Moodle, whereas the latter is, by default, used to identify the course and is also shown in the breadcrumb trail.
For now, we leave all other fields empty or at their default values and save the course by clicking on the Save changes button at the bottom. The screen displayed after clicking on Save changes shows enrolled users, if any. Since we just created the course, there are no users present in the course yet. In fact, except for the administrator account we are currently using, there are no users at all on our Moodle system. So, we leave the course without users for now and add some users to our LMS before we come back to this screen (select the Home link in the breadcrumb).

Moodle users

Moodle users, or rather their user accounts, are dealt with in Users | Accounts. Before we start, it is important to understand the difference between authentication and enrolment. Moodle users have to be authenticated in order to log in to the system. Authentication grants users access to the system through login, where a username and password have to be given (this also applies to guest accounts, where a username is allotted internally). Moodle supports a significant number of authentication mechanisms, which are discussed later in detail.

Enrolment happens at course level. However, a user has to be authenticated to the system before enrolment in a course can take place. So, a typical workflow is as follows (there are exceptions as always, but we will deal with them when we get there):

1. Create your users
2. Create your courses (and categories)
3. Associate users to courses and assign roles

Again, this sequence demonstrates nicely how intertwined courses, users, and roles are in Moodle. Another way of looking at the difference between authentication and enrolment is how a user gets access to a course. Please bear in mind that this is a very simplistic view and it ignores supported features such as external authentication, guest access, and self-enrolment. During the authentication phase, a user enters his credentials (username and password), or they are entered automatically via single sign-on. If the account exists locally, that is, within Moodle, and the password is valid, he/she is granted access. The next phase is enrolment. If the user is enrolled and the enrolment hasn't expired, he/she is granted access to the course. You will come across a more detailed version of these graphics later on but, for now, this hopefully demonstrates the difference between authentication and enrolment.

To add a user account manually, go to Users | Accounts | Add a new user. As with courses, we will only focus on the mandatory fields, which should be self-explanatory:

Username (has to be unique)
New password (if a password policy has been set, certain rules might apply)
First name
Surname
Email address

Make sure you save the account information by selecting Create user at the bottom of the page. If any entered information is invalid, Moodle will display error messages right above the relevant field. I have created a few more accounts; to see who has access to your Moodle system, go to Users | Accounts | Browse list of users, where you will see all users. Actually, I did this via batch upload.

Now that we have a few users on our system, let's go back to the course we created a minute ago and manually enroll new participants in it. To achieve this, go back to Courses | Manage courses and categories, select the Miscellaneous category again, and select the created demo course. Underneath the listed demo course, course details will be displayed alongside a number of options (on large screens, details are shown to the right). Here, select Enrolled users.
As expected, the list of enrolled users is still empty. Click on the Enrol users button to change this. To grant users access to the course, select the Enrol button beside them and close the window. In the following screenshot, three users, participant01 to participant03, have already been enrolled in the course. Two more users, participant04 and participant05, have been selected for enrolment. You have probably spotted the Assign roles dropdown at the top of the pop-up window. This is where you select which role the selected user has once he/she is enrolled in the course. For example, to give Tommy Teacher appropriate access to the course, we have to select the Teacher role first, before enrolling him in the course. This leads nicely to the third part of the pyramid, namely, roles.

Moodle roles

Roles define what users can and cannot see and do in your Moodle system. Moodle comes with a number of predefined roles—we already saw Student and Teacher—but it also allows us to create our own roles, for instance, for parents or external assessors. Each role has a certain scope (called context), which is defined by a set of permissions (expressed as capabilities). For example, a teacher is allowed to grade an assignment, whereas a student isn't. Or, a student is allowed to submit an assignment, whereas a teacher isn't.

A role is assigned to a user in a context. Okay, so what is a context? A context is a ring-fenced area in Moodle where roles can be assigned to users. A user can be assigned different roles in different contexts, where the context can be a course, a category, an activity module, a user, a block, the front page, or Moodle itself. For instance, you are assigned the Administrator role for the entire system but, additionally, you might be assigned the Teacher role in any courses you are responsible for; or, a learner will be given the Student role in a course but might have been granted the Teacher role in a forum to act as a moderator.

To give you a feel for how a role is defined, let's go to Users | Permissions, where roles are managed, and select Define roles. Click on the Teacher role and, after some general settings, you will see a (very) long list of capabilities.

For now, we only want to stick with the example we used throughout the article. Now that we know what roles are, we can slightly rephrase what we have done. Instead of saying, "We have enrolled the user participant01 in the demo course as a student", we would say, "We have assigned the Student role to the user participant01 in the context of the demo course." In fact, the term enrolment is a little bit of a legacy and goes back to the times when Moodle didn't have the customizable, finely-grained architecture of roles and permissions that it does now. One can speculate whether there are linguistic connotations between the terms role and enrolment.
Resources for Article:

Further resources on this subject:

Moodle for Online Communities [article]
Gamification with Moodle LMS [article]
Moodle Plugins [article]
article-image-programming-littlebits-circuits-javascript-part-2

Programming littleBits circuits with JavaScript Part 2

Anna Gerber
14 Dec 2015
5 min read
In this two-part series, we're programming littleBits circuits using the Johnny-Five JavaScript Robotics Framework. Be sure to read over Part 1 before continuing here.

Let's create a circuit to play with, using all of the modules from the littleBits Arduino Coding Kit. Attach a button to the Arduino connector labelled d0. Attach a dimmer to the connector marked a0 and a second dimmer to a1. Turn the dimmers all the way to the right (max) to start with. Attach a power module to the single connector on the left-hand side of the fork module, and the three output connectors of the fork module to all of the input modules. The bargraph should be connected to d5, and the servo to d9, and both set to PWM output mode using the switches on board the Arduino. The servo module has two modes: turn and swing. Swing mode makes the servo sweep between maximum and minimum. Set it to swing mode using the onboard switch.

Reading input values

We'll create an instance of the Johnny-Five Button class to respond to button press events. Our button is connected to the connector labelled d0 (i.e. digital "pin" 0) on our Arduino, so we'll need to specify the pin as an argument when we create the button.

var five = require("johnny-five");
var board = new five.Board();

board.on("ready", function() {
  var button = new five.Button(0);
});

Our dimmers are connected to analog pins (A0 and A1), so we'll specify these as strings when we create Sensor objects to read their values. We can also provide options for reading the values; for example, we'll set the frequency to 250 milliseconds, so we'll be receiving 4 readings per second for both dimmers.

var dimmer1 = new five.Sensor({
  pin: "A0",
  freq: 250
});

var dimmer2 = new five.Sensor({
  pin: "A1",
  freq: 250
});

We can attach a function that will be run any time the value changes (on "change") or any time we get a reading (on "data"):

dimmer1.on("change", function() {
  // raw value (between 0 and 1023)
  console.log("dimmer 1 is " + this.raw);
});

Run the code and try turning dimmer 1. You should see the value printed to the console whenever the dimmer value changes.

Triggering behavior

Now we can use code to hook our input components up to our output components. To use, for example, the dimmer to control the brightness of the bargraph, change the code in the event handler:

var led = new five.Led(5);

dimmer1.on("change", function() {
  // set bargraph brightness to one quarter
  // of the raw value from dimmer
  led.brightness(Math.floor(this.raw / 4));
});

You'll see the bargraph brightness fade as you turn the dimmer. We can use the JavaScript Math library and operators to manipulate the brightness value before we send it to the bargraph. Writing code gives us more control over the mapping between input values and output behaviors than if we'd snapped our littleBits modules directly together without going via the Arduino. We set our d5 output to PWM mode, so all of the LEDs should fade in and out at the same time. If we set the output to analog mode instead, we'd see the behavior change to light up more or fewer LEDs depending on the brightness value.

Let's use the button to trigger the servo stop and start functions. Add a button press handler to your code, and a variable to keep track of whether the servo is running or not. We'll toggle this variable between true and false using JavaScript's boolean not operator (!). We can determine whether to stop or start the servo each time the button is pressed via a conditional statement based on the value of this variable.
var servo = new five.Motor(9);
servo.start();

var button = new five.Button(0);
var toggle = false;
var speed = 255;

button.on("press", function(value) {
  toggle = !toggle;
  if (toggle) {
    servo.start(speed);
  } else {
    servo.stop();
  }
});

The other dimmer can be used to change the servo speed:

dimmer2.on("change", function() {
  speed = Math.floor(this.raw / 4);
  if (toggle) {
    servo.start(speed);
  }
});

There are many input and output modules available within the littleBits system for you to experiment with. You can use the Sensor class with input modules, and check out the Johnny-Five API docs to see examples of the types of outputs supported by the API. You can always fall back to using the Pin class to program any littleBits module.

Using the REPL

Johnny-Five includes a Read-Eval-Print-Loop (REPL) so you can interactively write code to control components instantly - no waiting for code to compile and upload! Any of the JavaScript objects from your program that you want to access from the REPL need to be "injected". The following code, for example, injects our servo and led objects:

this.repl.inject({
  led: led,
  servo: servo
});

After running the program using Node.js, you'll see a >> prompt in your terminal. Try some of the following functions (hit Enter after each to see it take effect):

servo.stop(): stop the servo
servo.start(50): start servo moving at slow speed
servo.start(255): start servo moving at max speed
led.on(): turn LED on
led.off(): turn LED off
led.pulse(): slowly fade LED in and out
led.stop(): stop pulsing LED
led.brightness(100): set brightness of LED - the parameter should be between 0 and 255

littleBits are fantastic for prototyping, and pairing the littleBits Arduino with JavaScript makes prototyping interactive electronic projects even faster and easier.

About the author

Anna Gerber is a full-stack developer with 15 years of experience in the university sector. Specializing in Digital Humanities, she was a Technical Project Manager at the University of Queensland's eResearch centre, and she has worked at Brisbane's Distributed System Technology Centre as a Research Scientist. Anna is a JavaScript robotics enthusiast who enjoys tinkering with soft circuits and 3D printers.

article-image-react-dashboard-and-visualizing-data

React Dashboard and Visualizing Data

Xavier Bruhiere
26 Nov 2015
8 min read
I spent the last six months working on data analytics and machine learning to feed my curiosity and prepare for my new job. It is a challenging mission and I chose to give up for a while on my current web projects to stay focused. Back then, I was coding a dashboard for an automated trading system, powered by an exciting new framework from Facebook: React. In my opinion, Web Components were the way to go, and React seemed gentler on my brain than, say, Polymer. One just needed to carefully design component boundaries, properties and states and bam, you got a reusable piece of web to plug anywhere. Beautiful. This is quite a naive way to put it, of course, but for an MVP it actually kind of worked.

Fast forward to last week: I needed a new dashboard to monitor various metrics from my shiny new infrastructure. Specialized requirements kept me away from a full-fledged solution like the InfluxDB and Grafana combo, so I naturally stared at my old code. Well, it turned out I did not reuse a single line of code. Since the last time I spent in web development, new tools, frameworks and methodologies had taken over the world: es6 (and transpilers), isomorphic applications, one-way data flow, hot reloading, module bundlers, ... Even starter kits are remarkably complex (at least for me) and I got overwhelmed. But those new toys are also truly empowering and I persevered. In this post, we will learn to leverage them, build the simplest dashboard possible, and pave the way toward modern, real-time metrics monitoring.

Tooling & Motivations

I think the point of so much tooling is productivity and complexity management. New single page applications usually involve a significant number of moving parts: front and backend development, data management, scaling, appealing UX, ... Isomorphic webapps with nodejs and es6 try to harmonize this workflow by sharing one readable language across the stack. Node already sells the "javascript everywhere" argument but here, it goes even further, with code that can be executed both on the server and in the browser, indifferently. Team work and reusability are improved, as well as SEO (Search Engine Optimization) when rendering HTML on the server side.

Yet, an application's codebase can turn into a massive mess, and that's where Web Components come in handy. Providing clear contracts between modules, a developer is able to focus on a subpart of the UI with an explicit definition of its parameters and states. This level of abstraction makes the application much easier to navigate, maintain and reuse. Working with React gives a sense of clarity with components as JavaScript objects. Lifecycle and behavior are explicitly detailed by pre-defined hooks, while properties and states are distinct attributes.

We still need to glue all of those components and their dependencies together. That's where npm, Webpack and Gulp join the party. npm is the de facto package manager for nodejs, and more and more for frontend development. What's more, it can run scripts for you and spare you from using a task runner like Gulp. Webpack, meanwhile, bundles pretty much anything thanks to its loaders. Feed it an entrypoint which requires your js, jsx, css, whatever ... and it will transform and package them for the browser.

Given the steep learning curve of modern full-stack development, I hope you can see the point of those tools. The last pieces I would like to introduce for our little project are metrics-graphics and react-sparklines (which I won't actually describe, but they are worth noting for our purpose).
Both are neat frameworks to visualize data and play nicely with React, as we are going to see now.

Graph Component

When building component-based interfaces, the first things to define are what subparts of the UI those components are. Since we are starting with a spartan implementation, we are only going to define a Graph.

// Graph.jsx

// new es6 import syntax
import React from 'react';
// graph renderer
import MG from 'metrics-graphics';

export default class Graph extends React.Component {
  // called after the `render` method below
  componentDidMount () {
    // use d3 to load data from metrics-graphics samples
    // (arrow function keeps `this` bound to the component)
    d3.json('node_modules/metrics-graphics/examples/data/confidence_band.json', (data) => {
      data = MG.convert.date(data, 'date');
      MG.data_graphic({
        title: this.props.title,
        data: data,
        format: 'percentage',
        width: 600,
        height: 200,
        right: 40,
        target: '#confidence',
        show_secondary_x_label: false,
        show_confidence_band: ['l', 'u'],
        x_extended_ticks: true
      });
    });
  }

  render () {
    // render the element targeted by the graph
    return <div id="confidence"></div>;
  }
}

This code, a trendy combination of es6 and jsx, defines in the DOM a standalone graph from the json data in confidence_band.json that I borrowed from the official metrics-graphics examples. Now let's actually mount and render the DOM in the main entrypoint of the application (which I mentioned above with Webpack).

// main.jsx

// tell webpack to bundle style along with the javascript
import 'metrics-graphics/dist/metricsgraphics.css';
import 'metrics-graphics/examples/css/metricsgraphics-demo.css';
import 'metrics-graphics/examples/css/highlightjs-default.css';

import React from 'react';
import Graph from './components/Graph';

function main() {
  // it is recommended to not directly render on body
  var app = document.createElement('div');
  document.body.appendChild(app);
  // key/value pairs are available under `this.props` hash within the component
  React.render(<Graph title="Keep calm and build a dashboard"/>, app);
}

main();

Now that we have defined the web page in plain javascript, it's time for our tools to take over and actually build it.

Build workflow

This is mostly a matter of configuration. First, create the following structure:

$ tree
.
├── app
│   ├── components
│   │   └── Graph.jsx
│   └── main.jsx
├── build
└── package.json

Where package.json is defined like below:

{
  "name": "react-dashboard",
  "scripts": {
    "build": "TARGET=build webpack",
    "dev": "TARGET=dev webpack-dev-server --host 0.0.0.0 --devtool eval-source --progress --colors --hot --inline --history-api-fallback"
  },
  "devDependencies": {
    "babel-core": "^5.6.18",
    "babel-loader": "^5.3.2",
    "css-loader": "^0.15.1",
    "html-webpack-plugin": "^1.5.2",
    "node-libs-browser": "^0.5.2",
    "react-hot-loader": "^1.2.7",
    "style-loader": "^0.12.3",
    "webpack": "^1.10.1",
    "webpack-dev-server": "^1.10.1",
    "webpack-merge": "^0.1.2"
  },
  "dependencies": {
    "metrics-graphics": "^2.6.0",
    "react": "^0.13.3"
  }
}

A quick npm install will download every package we need for development and production. Two scripts are even defined to build a static version of the site, or serve a dynamic one that will be updated on file-change detection. This formidable feature becomes essential once tasted. But we have yet to configure Webpack to enjoy it.
// webpack.config.js

var path = require('path');
var HtmlWebpackPlugin = require('html-webpack-plugin');
var webpack = require('webpack');
var merge = require('webpack-merge');

// discern development server from static build
var TARGET = process.env.TARGET;
// webpack prefers absolute paths
var ROOT_PATH = path.resolve(__dirname);

// common environments configuration
var common = {
  // input main.jsx we wrote earlier
  entry: [path.resolve(ROOT_PATH, 'app/main')],
  // import requirements with following extensions
  resolve: {
    extensions: ['', '.js', '.jsx']
  },
  // define the single bundle file output by the build
  output: {
    path: path.resolve(ROOT_PATH, 'build'),
    filename: 'bundle.js'
  },
  module: {
    // also support css loading from main.jsx
    loaders: [
      {
        test: /\.css$/,
        loaders: ['style', 'css']
      }
    ]
  },
  plugins: [
    // automatically generate a standard index.html to attach to the React app
    new HtmlWebpackPlugin({
      title: 'React Dashboard'
    })
  ]
};

// production specific configuration
if (TARGET === 'build') {
  module.exports = merge(common, {
    module: {
      // compile es6 jsx to standard es5
      loaders: [
        {
          test: /\.jsx?$/,
          loader: 'babel?stage=1',
          include: path.resolve(ROOT_PATH, 'app')
        }
      ]
    },
    // optimize output size
    plugins: [
      new webpack.DefinePlugin({
        'process.env': {
          // This has effect on the react lib size
          'NODE_ENV': JSON.stringify('production')
        }
      }),
      new webpack.optimize.UglifyJsPlugin({
        compress: {
          warnings: false
        }
      })
    ]
  });
}

// development specific configuration
if (TARGET === 'dev') {
  module.exports = merge(common, {
    module: {
      // also transpile javascript, but use react-hot-loader too,
      // to automagically update the web page on changes
      loaders: [
        {
          test: /\.jsx?$/,
          loaders: ['react-hot', 'babel?stage=1'],
          include: path.resolve(ROOT_PATH, 'app')
        }
      ]
    }
  });
}

Webpack configuration can be hard to swallow at first but, given the huge amount of transformations to operate, this style scales very well. Plus, once set up, the development environment becomes remarkably productive. To convince yourself, run webpack-dev-server and reach localhost:8080/assets/bundle.js in your browser. Tweak the title argument in main.jsx, save the file, and watch the browser update itself. We are ready to build new components and extend our modular dashboard.

Conclusion

We have condensed into a few paragraphs a lot of what makes the current web ecosystem effervescent. I strongly encourage the reader to deepen their knowledge on these matters and consider this post for what it is: an introduction. Web components, like micro-services, are fun, powerful and bleeding edge. But they are also complex, fast-moving and unstable. The tooling, especially, is impressive. Spend the time to master them and craft something cool!

About the Author

Xavier Bruhiere is a Lead Developer at AppTurbo in Paris, where he develops innovative prototypes to support company growth. He is addicted to learning, hacking on intriguing hot techs (both soft and hard), and practicing high intensity sports.

article-image-internet-peas-gardening-javascript-part-1

The Internet of Peas? Gardening with JavaScript Part 1

Anna Gerber
23 Nov 2015
6 min read
Who wouldn't want an army of robots to help out around the home and garden? It's not science fiction: robots are devices that sense and respond to the world around us, so with some off-the-shelf hardware, and the power of the Johnny-Five JavaScript Robotics framework, we can build and program simple "robots" to automate everyday tasks. In this two-part article series, we'll build an internet-connected device for monitoring plants.

Bill of materials

You'll need these parts to build this project:

Part - Source
Particle Core (or Photon) - Particle
3xAA Battery holder - e.g. with micro USB connector from DF Robot
Jumper wires - Any electronics supplier, e.g. Sparkfun
Solderless breadboard - Any electronics supplier, e.g. Sparkfun
Photo resistor - Any electronics supplier, e.g. Sparkfun
1K resistor - Any electronics supplier, e.g. Sparkfun
Soil moisture sensor - e.g. Sparkfun
Plants

Particle (formerly known as Spark) is a platform for developing devices for the Internet of Things. The Particle Core was their first-generation Wifi development board, and has since been superseded by the Photon. Johnny-Five supports both of these boards, as well as Arduino, BeagleBone Black, Raspberry Pi, Edison, Galileo, Electric Imp, Tessel and many other device platforms, so you can use the framework with your device of choice. The Platform Support page lists the features currently supported on each device. Any device with Analog Read support is suitable for this project.

Setting up the Particle board

Make sure you have a recent version of Node.js installed. We're using npm (Node Package Manager) to install the tools and libraries required for this project. Install the Particle command line tools with npm (via the Terminal on Mac, or Command Prompt on Windows):

npm install -g particle-cli

Particle boards need to be registered with the Particle Cloud service, and you must also configure your device to connect to your wireless network. So the first thing you'll need to do is connect it to your computer via USB and run the setup program. See the Particle Setup docs. The LED on the Particle Core should be blinking blue when you plug it in for the first time (if not, press and hold the mode button). Sign up for a Particle Account and then follow the prompts to set up your device via the Particle website, or if you prefer, you can run the setup program from the command line. You'll be prompted to sign in and then to enter your Wifi SSID and password:

particle setup

After setup is complete, the Particle Core can be disconnected from your computer and powered by batteries or a separate USB power supply - we will connect to the board wirelessly from now on.

Flashing the board

We also need to flash the board with the Voodoospark firmware. Use the CLI tool to sign in to the Particle Cloud and list your devices to find out the ID of your board:

particle cloud login
particle list

Download the firmware.cpp file and use the flash command to write the Voodoospark firmware to your device:

particle cloud flash <Your Device ID> voodoospark.cpp

See the Voodoospark Getting Started page for more details. You should see the following message:

Flash device OK: Update started

The LED on the board will flash magenta. This will take about a minute, and will change back to green when the board is ready to use.

Creating a Johnny-Five project

We'll be installing a few dependencies from npm, so to help manage these, we'll set up our project as an npm package. Run the init command, filling in the project details at the prompts.
npm init

After init has completed, you'll have a package.json file with the metadata that you entered about your project. Dependencies for the project can also be saved to this file. We'll use the --save command line argument to npm when installing packages to persist dependencies to our package.json file. We'll need the Johnny-Five npm module as well as the Particle-IO IO Plugin for Johnny-Five:

npm install johnny-five --save
npm install particle-io --save

Johnny-Five uses the Firmata protocol to communicate with Arduino-based devices. IO Plugins provide Firmata-compatible interfaces to allow Johnny-Five to communicate with non-Arduino-based devices. The Particle-IO Plugin allows you to run Node.js applications on your computer that communicate with the Particle board over Wifi, so that you can read from sensors or control components that are connected to the board.

When you connect to your board, you'll need to specify your Device ID and your Particle API Access Token. You can look up your access token under Settings in the Particle IDE. It's a good idea to copy these to environment variables rather than hardcoding them into your programs. If you are on Mac or Linux, you can create a file called .particlerc then run source .particlerc:

export PARTICLE_TOKEN=<Your Token Here>
export PARTICLE_DEVICE_ID=<Your Device ID Here>

Reading from a sensor

Now we're ready to get our hands dirty! Let's confirm that we can communicate with our Particle board using Johnny-Five, by taking a reading from our soil moisture sensor. Using jumper wires, connect one pin on the soil sensor to pin A0 (analog pin 0) and the other to GND (ground). The probes go into the soil in your plant pot.

Create a JavaScript file named sensor.js using your preferred text editor or IDE. We use require statements to include the Johnny-Five module and the Particle-IO plugin. We're creating an instance of the Particle-IO plugin (with our token and deviceId read from our environment variables) and providing this as the io config option when creating our Board object.

var five = require("johnny-five");
var Particle = require("particle-io");

var board = new five.Board({
  io: new Particle({
    token: process.env.PARTICLE_TOKEN,
    deviceId: process.env.PARTICLE_DEVICE_ID
  })
});

board.on("ready", function() {
  console.log("CONNECTED");
  var soilSensor = new five.Sensor("A0");
  soilSensor.on("change", function() {
    console.log(this.value);
  });
});

After the board is ready, we create a Sensor object to monitor changes on pin A0, and then print the value from the sensor to the Node.js console whenever it changes. Run the program using Node.js:

node sensor.js

Try pulling the sensor out of the soil or watering your plant to make the sensor reading change. See the Sensor API for more methods that you can use with Sensors. You can hit control-C to end the program. In the next installment we'll connect our light sensor and extend our Node.js application to monitor our plant's environment. Continue reading now!

About the author

Anna Gerber is a full-stack developer with 15 years' experience in the university sector, formerly a Technical Project Manager at The University of Queensland ITEE eResearch, specializing in Digital Humanities, and a Research Scientist at the Distributed System Technology Centre (DSTC). Anna is a JavaScript robotics enthusiast and maker who enjoys tinkering with soft circuits and 3D printers.

article-image-internet-peas-gardening-javascript-part-2

The Internet of Peas? Gardening with JavaScript Part 2

Anna Gerber
23 Nov 2015
6 min read
In this two-part article series, we're building an internet-connected garden bot using JavaScript. In part one, we set up a Particle Core board, created a Johnny-Five project, and ran a Node.js program to read raw values from a soil moisture sensor.

Adding a light sensor

Let's connect another sensor. We'll extend our circuit to add a photo resistor to measure the ambient light levels around our plants. Connect one lead of the photo resistor to ground, and the other to analog pin 4, with a 1K pull-down resistor from A4 to the 3.3V pin. The value of the pull-down resistor determines the raw readings from the sensor. We're using a 1K resistor so that the sensor values don't saturate under tropical sun conditions. For plants kept inside a dark room, or in a less sunny climate, a 10K resistor might be a better choice. Read more about how pull-down resistors work with photo resistors at AdaFruit.

Now, in our board's ready callback function, we add another sensor instance, this time on pin A4:

var lightSensor = new five.Sensor({
  pin: "A4",
  freq: 1000
});

lightSensor.on("data", function() {
  console.log("Light reading " + this.value);
});

For this sensor we are logging the sensor value every second, not just when it changes. We can control how often sensor events are emitted by specifying the number of milliseconds in the freq option when creating the sensor. The threshold config option can be used to control when the change callback occurs.

Calibrating the soil sensor

The soil sensor uses the electrical resistance between two probes to provide a measure of the moisture content of the soil. We're using a commercial sensor, but you could make your own simply using two pieces of wire spaced about an inch apart (using galvanized wire to avoid rust). Water is a good conductor of electricity, so a low reading means that the soil is moist, while a high amount of resistance indicates that the soil is dry.

Because these aren't very sophisticated sensors, the readings will vary from sensor to sensor. In order to do anything meaningful with the readings within our application, we'll need to calibrate our sensor. Calibrate by making a note of the sensor values for very dry soil, wet soil, and in between, to get a sense of what the optimal range of values should be. For an imprecise sensor like this, it also helps to map the raw readings onto ranges that can be used to display different messages (e.g. very dry, dry, damp, wet) or trigger different actions. The scale method on the Sensor class can come in handy for this. For example, we could convert the raw readings from 0 - 1023 to a 0 - 5 scale:

soilSensor.scale(0, 5).on("change", function() {
  console.log(this.value);
});

However, the raw readings for this sensor range between about 50 (wet) and 500 (fairly dry soil). If we're only interested in when the soil is dry, i.e. when readings are above 300, we could use a conditional statement within our callback function, or use the within method so that the function is only triggered when the values are inside a range of values we care about:

soilSensor.within([ 300, 500 ], function() {
  console.log("Water me!");
});

Our raw soil sensor values will vary depending on the temperature of the soil, so this type of sensor is best for indoor plants that aren't exposed to weather extremes. If you are installing a soil moisture sensor outdoors, consider adding a temperature sensor and then calibrate for values at different temperature ranges.
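To make the message mapping above concrete, here's a minimal sketch that turns raw readings into the four messages mentioned. The thresholds are hypothetical placeholders; replace them with the values you observed while calibrating your own sensor. This goes inside the board's ready callback, alongside the soilSensor from part one:

// Hypothetical thresholds from calibration: ~50 = wet, ~500 = very dry.
// Adjust these to match the readings from your own sensor.
function soilMessage(raw) {
  if (raw < 150) { return "wet"; }
  if (raw < 250) { return "damp"; }
  if (raw < 350) { return "dry"; }
  return "very dry";
}

soilSensor.on("change", function() {
  console.log("Soil is " + soilMessage(this.raw));
});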
Connecting more sensors

We have seven analog and seven digital IO pins on the Particle Core, so we could attach more sensors, perhaps more of the same type to monitor additional planters, or different types of sensors to monitor additional conditions. There are many kinds of environmental sensors available through online marketplaces like AliExpress and ebay. These include sensors for temperature, humidity, dust, gas, water depth, particulate detection etc. Some of these sensors are straightforward analog or digital devices that can be used directly with the Johnny-Five Sensor class, as we have with our soil and light sensors. The Johnny-Five API also includes subclasses like Temperature, with controllers for some widely used sensor components.

However, some sensors use protocols like SPI, I2C or OneWire, which are not as well supported by Johnny-Five across all platforms. This is always improving; for example, I2C was added to the Particle-IO plugin in October 2015. Keep an eye on I2C component backpacks, which are providing support for additional sensors via secondary microcontrollers.

Automation

If you are gardening at scale, or going away on an extended vacation, you might want more than just monitoring. You might want to automate some basic garden maintenance tasks, like turning on grow lights on overcast days, or controlling a pump to water the plants when the soil moisture level gets low. This can be achieved with relays. For example, we can connect a relay with a daylight bulb to a digital pin, and use it to turn lights on in response to the light readings, but only between certain hours:

var five = require("johnny-five");
var Particle = require("particle-io");
var moment = require("moment");

var board = new five.Board({
  io: new Particle({
    token: process.env.PARTICLE_TOKEN,
    deviceId: process.env.PARTICLE_DEVICE_ID
  })
});

board.on("ready", function() {
  var lightSensor = new five.Sensor("A4");
  var lampRelay = new five.Relay(2);

  lightSensor.scale(0, 5).on("change", function() {
    console.log("light reading is " + this.value);
    var now = moment();
    // clone `now` before deriving the curfews, because moment
    // methods like endOf and startOf mutate the instance in place
    var nightCurfew = now.clone().endOf('day').subtract(4, 'h');
    var morningCurfew = now.clone().startOf('day').add(6, 'h');
    if (this.value > 4) {
      if (!lampRelay.isOn && now.isAfter(morningCurfew) && now.isBefore(nightCurfew)) {
        lampRelay.on();
      }
    } else {
      lampRelay.off();
    }
  });
});

And beyond...

One of the great things about using Node.js with hardware is that we can extend our apps with modules from npm. We could publish an Atom feed of sensor readings over time, push the data to a web UI using socket.io, build an alert system, or create a data visualization layer; or we might build an API to control lights or pumps attached via relays to our board. It's never been easier to program your own internet-connected robot helpers and smart devices using JavaScript.

Build more exciting robotics projects with servos and motors – click here to find out how.

About the author

Anna Gerber is a full-stack developer with 15 years' experience in the university sector, formerly a Technical Project Manager at The University of Queensland ITEE eResearch, specializing in Digital Humanities, and a Research Scientist at the Distributed System Technology Centre (DSTC). Anna is a JavaScript robotics enthusiast and maker who enjoys tinkering with soft circuits and 3D printers.
article-image-using-nodejs-dependencies-nwjs

Using Node.js dependencies in NW.js

Max Gfeller
19 Nov 2015
6 min read
NW.js (formerly known as node-webkit) is a framework that makes it possible to write multi-platform desktop applications using the technologies you already know well: HTML, CSS and JavaScript. It bundles a Chromium and a Node (or io.js) runtime and provides additional APIs to implement native-like features like real menu bars or desktop notifications.

A big advantage of having a Node/io.js runtime is being able to make use of all the modules that are available for node developers. We can categorize three different types of modules that we can use.

Internal modules

Node comes with a solid set of internal modules like fs or http. It is built on the UNIX philosophy of doing only one thing and doing it very well. Therefore you won't find too much functionality in node core. The following modules are shipped with node:

assert: used for writing unit tests
buffer: raw memory allocation used for dealing with binary data
child_process: spawn and use child processes
cluster: take advantage of multi-core systems
crypto: cryptographic functions
dgram: use datagram sockets
dns: perform DNS lookups
domain: handle multiple different IO operations as a single group
events: provides the EventEmitter
fs: operations on the file system
http: perform http queries and create http servers
https: perform https queries and create https servers
net: asynchronous network wrapper
os: basic operating-system related utility functions
path: handle and transform file paths
punycode: deal with punycode domain names
querystring: deal with query strings
stream: abstract interface implemented by various objects in Node
timers: setTimeout, setInterval etc.
tls: encrypted stream communication
url: URL resolution and parsing
util: various utility functions
vm: sandbox to run Node code in
zlib: bindings to Gzip/Gunzip, Deflate/Inflate, and DeflateRaw/InflateRaw

Those are documented on the official Node API documentation and can all be used within NW.js. Please take care that Chromium already defines a crypto global, so when using the crypto module in the webkit context you should assign it to a variable like crypt rather than crypto:

var crypt = require('crypto');

The following example shows how we would read a file and use its contents using Node's modules:

var fs = require('fs');

fs.readFile(__dirname + '/file.txt', function (error, contents) {
  if (error) return console.error(error);
  console.log(contents);
});

3rd party JavaScript modules

Soon after Node itself was started, Isaac Schlueter, who was a friend of creator Ryan Dahl, started working on a package manager for Node itself. While Node's popularity reached new highs, a lot of packages got added to the npm registry and it soon became the fastest growing package registry. At the time of this writing, there are over 169,000 packages on the registry and nearly two billion downloads each month. The npm registry is now also slowly evolving from being "only" a package manager for Node into a package manager for all things JavaScript. Most of these packages can also be used inside NW.js applications.

Your application's dependencies are defined in your package.json file in the dependencies (or devDependencies) section:

{
  "name": "my-cool-application",
  "version": "1.0.0",
  "dependencies": {
    "lodash": "^3.1.2"
  },
  "devDependencies": {
    "uglify-js": "^2.4.3"
  }
}

In the dependencies field you find all the modules that are required to run your application, while in the devDependencies field only the modules required while developing the application are found.
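Once a third-party dependency such as lodash is in place (installing it is covered next), requiring and using it in NW.js works just as it would in a plain Node script. A minimal sketch, assuming lodash is listed in your dependencies and installed:

var _ = require('lodash');

// pick the highest reading from a list of values
var readings = [3, 1, 4, 1, 5];
console.log(_.max(readings)); // -> 5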
Installing a module is fairly easy and the best way to do this is with the npm install command:

npm install lodash --save

The install command directly downloads the latest version into your node_modules/ folder. The --save flag means that this dependency should also directly be written into your package.json file. You can also define a specific version to download by using the following notation:

npm install lodash@1.*

or even pin an exact version, for example:

npm install lodash@1.2.0

How does node's require() work?

You need to deal with two different contexts in NW.js and it is really important to always know which context you are currently in, as it changes the way the require() function works. When you load a module using Node's require() function, this module runs in the Node context. That means you have the same globals as you would have in a pure Node script, but you can't access the globals from the browser, e.g. document or window. If you write JavaScript code inside of a <script> tag in your HTML, or when you include a script inside your HTML using <script src="">, then this code runs in the webkit context. There you have access to all the browser globals.

In the webkit context

The require() function is a module loading system defined by the CommonJS Modules 1.0 standard and directly implemented in node core. To offer the same smooth experience you get a modified require() method that works in webkit, too. Whenever you want to include a certain module from the webkit context, e.g. directly from an inline script in your index.html file, you need to specify the path directly from the root of your project. Let's assume the following folder structure:

- app/
  - app.js
  - foo.js
  - bar.js
  - index.html

If you want to include the app/app.js file directly in your index.html, you need to include it like this:

<script type="text/javascript">
  var app = require('./app/app.js');
</script>

If you need to use a module from npm, then you can simply require() it and NW.js will figure out where the corresponding node_modules/ folder is located.

In the node context

In node, when you use relative paths, it will always try to locate the module relative to the file you are requiring it from. If we take the example from above, we could require the foo.js module from app.js like this:

var foo = require('./foo');

About the Author

Max Gfeller is a passionate web developer and JavaScript enthusiast. He is making awesome things at Cylon and can be found on Twitter @mgefeller.

article-image-overview-tdd

Overview of TDD

Packt
06 Nov 2015
11 min read
In this article by Ravi Gupta, Harmeet Singh, and Hetal Prajapati, authors of the book Test-Driven JavaScript Development, we look at how testing is one of the most important phases in the development of any project; in the traditional software development model, testing is usually executed after the code for the functionality is written. Test-driven development (TDD) makes a big difference by writing tests before the actual code. You are going to learn TDD for JavaScript and see how this approach can be utilized in your projects. In this article, you are going to learn the following:

Complexity of web pages
Understanding TDD
Benefits of TDD and common myths

(For more resources related to this topic, see here.)

Complexity of web pages

When Tim Berners-Lee wrote the first ever web browser around 1990, it was supposed to run HTML, neither CSS nor JavaScript. Who knew that the WWW would become the most powerful communication medium? Since then, a number of technologies and tools have emerged which help us write code and run it for our needs. We do a lot these days with the help of the Internet. We shop, read, learn, share, and collaborate... well, a few words are not going to suffice to explain what we do on the Internet, are they? Over time, our needs have grown to a very complex level, and so has the complexity of the code written for websites. It's not plain HTML anymore, not some CSS style, not some basic JavaScript tweaks. That time has passed.

Pick any site you visit daily, view the source by opening the developer tools of the browser, and look at the source code of the site. What do you see? Too much code? Too many styles? Too many scripts? The JavaScript code and CSS code are too huge to keep inline, and we need to keep them in different files, sometimes even different folders, to keep them organized. Now, what happens before you publish all the code live? You test it. You test each line and see if it works fine. Well, that's a programmer's job. Zero defects, that's what every organization tries to achieve. When that is in focus, testing comes into the picture; more importantly, a development style which is essentially test driven. As the title of this article says, we're going to keep our focus on test-driven JavaScript development.

Understanding Test-driven development

TDD, short for Test-driven development, is a process for software development. Kent Beck, who is known for the development of TDD, refers to this as "Rediscovery." Kent's answer to a question on Quora can be found at https://www.quora.com/Why-does-Kent-Beck-refer-to-the-rediscovery-of-test-driven-development.

"The original description of TDD was in an ancient book about programming. It said you take the input tape, manually type in the output tape you expect, then program until the actual output tape matches the expected output. After I'd written the first xUnit framework in Smalltalk I remembered reading this and tried it out. That was the origin of TDD for me. When describing TDD to older programmers, I often hear, "Of course. How else could you program?" Therefore I refer to my role as "rediscovering" TDD."

If you go and try to find references to TDD, you will even find references from 1968. It's not a new technique, though it did not get much attention for a long time. Recently, interest in TDD has been growing, and as a result, there are a number of tools on the Web. For example, Jasmine, Mocha, DalekJS, JsUnit, QUnit, and Karma are among these popular tools and frameworks.
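To give a flavor of what the test-first cycle looks like with one of these tools, here is a minimal, hypothetical Jasmine-style example (the add function and its spec are invented purely for illustration):

// add.spec.js - written first; this spec fails until add() exists
describe("add", function () {
  it("sums two numbers", function () {
    expect(add(2, 3)).toBe(5);
  });
});

// add.js - the minimum production code that makes the spec pass
function add(a, b) {
  return a + b;
}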
More specifically, test-driven JavaScript development is getting popular these days. Test-driven development is a software development process which requires a developer to write tests before production code. A developer writes a test, expects a behavior, and writes code to make the test pass. It is needless to mention that the test will always fail at the start.

Need of testing

To err is human. As a developer, it's not easy to find defects in our own code, and often we think that our code is perfect. But there is always a chance that a defect is present in the code. Every organization or individual wants to deliver the best software they can. This is one major reason that every piece of software, every piece of code, is well tested before its release. Testing helps to detect and correct defects. There are a number of reasons why testing is needed. They are as follows:

To check if the software is functioning as per the requirements
There will not be just one device or one platform to run your software
The end user will perform an action that, as a programmer, you never expected

There was a study conducted by the National Institute of Standards and Technology (NIST) in 2002, which reported that software bugs cost the U.S. economy around $60 billion annually. With better testing, more than one-third of the cost could be avoided. The earlier a defect is found, the cheaper it is to fix it. A defect found post release would cost 10-100 times more to fix than if it had already been detected and fixed earlier. The report of the study performed by NIST can be found at http://www.nist.gov/director/planning/upload/report02-3.pdf. If we plot the cost of fixing a defect against time, the curve is exponential. The following figure clearly shows that the cost increases as the project matures with time. Sometimes, it's not possible to fix a defect without making changes in the architecture. In those cases, the cost is sometimes so high that developing the software from scratch seems like a better option.

Benefits of TDD and common myths

Every methodology has its own benefits and myths among people. The following sections will analyze the key benefits and most common myths of TDD.

Benefits

TDD has its own advantages over regular development approaches. There are a number of benefits which help in making the decision to use TDD over the traditional approach.

Automated testing: If you have ever seen a website's code, you know that it's not easy to maintain and test all the scripts manually and keep them working. A tester may miss a few checks, but automated tests won't. Manual testing is error prone and slow.

Lower cost of overall development: With TDD, the amount of debugging is significantly decreased. You develop some code and run the tests; if they fail, re-doing the development is significantly faster than debugging and fixing it later. TDD aims at detecting defects and correcting them at an early stage, which costs much less than detecting and correcting them at a later stage or post release. Also, debugging is now much less frequent and a significant amount of time is saved. With the help of tools/test runners like Karma, JSTestDriver, and so on, manually running every JavaScript test in the browser is not needed, which saves significant time in validation and verification while development goes on.

Increased productivity: Apart from time and financial benefits, TDD helps to increase productivity, since the developer becomes more focused and tends to write quality code that passes and fulfills the requirement.
Clean, maintainable, and flexible code: Since tests are written first, production code is often very neat and simple. When a new piece of code is added, all the tests can be run at once to see if anything failed with the change. Since we try to keep our tests atomic, and our methods also address a single goal, the code automatically becomes clean. At the end of the application development, there will be thousands of test cases which guarantee that every piece of logic can be tested. The same test cases also act as documentation for users who are new to the development of the system, since these tests act as examples of how the code works.

Improved quality and reduced bugs: Complex code invites bugs. When developers change anything in neat and simple code, they tend to leave few or no bugs at all. They tend to focus on the purpose and write code to fulfill the requirement.

Keeps technical debt to a minimum: This is one of the major benefits of TDD. Not writing unit tests and documentation is a big contributor to the technical debt of a software project. Since TDD encourages you to write tests first, and if they are well written they act as documentation, you keep the technical debt for these to a minimum. As Wikipedia says, technical debt can be defined as tasks to be performed before a unit can be called complete. If the debt is not repaid, interest also adds up and makes it harder to make changes at a later stage. More about technical debt can be found at https://en.wikipedia.org/wiki/Technical_debt.

Myths

Along with the benefits, TDD has some myths as well. Let's check a few of them:

Complete code coverage: TDD enforces writing tests first, and developers write the minimum amount of code to pass the tests, so almost 100% code coverage is achieved. But that does not guarantee that nothing is missed and the code is bug free. Code coverage tools do not cover all the paths. There can be infinite possibilities in loops. Of course it's not possible or feasible to check all the paths, but a developer is supposed to take care of the major and critical paths. A developer is supposed to take care of business logic, flow, and process code most of the time. There is no need to test integration parts, setter-getter methods for properties, configurations, UI, and so on. Mocking and stubbing are to be used for integrations.

No need to debug the code: Though test-first development makes one think that debugging is not needed, that is not always true. You need to know the state of the system when a test fails. That will help you to correct and write the code further.

No need for QA: TDD cannot always cover everything. QA plays a very important role in testing. UI defects and integration defects are more likely to be caught by a QA. Even though developers are excellent, there are chances of errors. QA will try every kind of input and unexpected behavior that even a programmer did not cover with test cases. They will always try to crash the system with random inputs and discover defects.

I can code faster without tests and can also validate for zero defects: While this may hold true for very small software and websites, where the code is small and writing test cases may increase the overall time of development and delivery of the product, for bigger products it helps a lot to identify defects at a very early stage and gives a chance to correct them at a very low cost. As seen in the previous screenshots of the cost of fixing defects across phases and testing types, the cost of correcting a defect increases with time.
There are many more myths, and covering all of them is not possible here. The point is that TDD offers developers a better opportunity to deliver quality code, and it helps organizations deliver close to zero-defect products.

Summary

In this article, you learned what TDD is, along with its benefits and the myths that surround it.

Resources for Article:

Further resources on this subject:
- Understanding outside-in [article]
- Jenkins Continuous Integration [article]
- Understanding TDD [article]
E-commerce with MEAN

Packt
05 Nov 2015
8 min read
These days, e-commerce platforms are widely available. However, as common as they might be, there are cases where, after investing a significant amount of time learning a specific tool, you realize that it cannot meet your unique e-commerce needs as it promised. Hence, a great advantage of building your own application with an agile framework is that you can quickly meet your immediate and future needs with a system that you fully understand. Adrian Mejia Rosario, the author of the book Building an E-Commerce Application with MEAN, shows us how the MEAN stack (MongoDB, ExpressJS, AngularJS, and NodeJS) is a killer full-stack JavaScript combination. It provides agile development without compromising on performance and scalability, which makes it ideal for building responsive applications with a large user base, such as e-commerce applications. Let's have a look at a project using MEAN.

(For more resources related to this topic, see here.)

Understanding the project structure

Applications built with the angular-fullstack generator have many files and directories. Some code runs on the client, some executes on the backend, and another portion is needed only during development, such as the test suites. It's important to understand the layout to keep the code organized.

The Yeoman generators are time savers! They are created and maintained by the community following the current best practices. The generator creates many directories and a lot of boilerplate code to get you started, and the number of unfamiliar files might be overwhelming at first. Reviewing the directory structure created, we see that there are three main directories: client, e2e, and server:

- The client folder contains the AngularJS files and assets
- The server directory contains the NodeJS files, which handle ExpressJS and MongoDB
- The e2e files contain the AngularJS end-to-end tests

File Structure

This is the overview of the file structure of this project:

meanshop
├── client
│   ├── app          - App specific components
│   ├── assets       - Custom assets: fonts, images, etc…
│   └── components   - Non-app specific/reusable components
│
├── e2e              - Protractor end to end tests
│
└── server
    ├── api          - App's server API
    ├── auth         - Authentication handlers
    ├── components   - App-wide/reusable components
    ├── config       - App configuration
    │   ├── local.env.js - Environment variables
    │   └── environment  - Node environment configuration
    └── views        - Server rendered views

Components

You might already be familiar with a number of the tools used in this project. If that's not the case, you can read the brief descriptions here.

Testing

AngularJS comes with a default test runner called Karma, and we are going to leverage its default choices:

- Karma: The JavaScript unit test runner
- Jasmine: A BDD framework for testing JavaScript code, executed by Karma
- Protractor: An end-to-end test framework for AngularJS; these are the highest-level tests, running in the browser and simulating user interactions with the app
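To give a feel for how these pieces fit together, here is a minimal karma.conf.js sketch. It is an illustration only: the angular-fullstack generator ships its own, fuller configuration, and the file paths below are assumptions rather than the generator's exact output:

// karma.conf.js - minimal sketch; the paths are illustrative assumptions
module.exports = function (config) {
  config.set({
    frameworks: ["jasmine"],      // the BDD framework the unit tests use
    files: [
      "client/bower_components/angular/angular.js",
      "client/bower_components/angular-mocks/angular-mocks.js",
      "client/app/**/*.js"        // application code and its unit tests
    ],
    browsers: ["PhantomJS"],      // a headless browser, handy for CI
    singleRun: true               // run the suite once and exit
  });
};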
Tools

The following are some of the tools and libraries that we are going to use in order to increase our productivity:

- GruntJS: A tool that automates repetitive tasks, such as CSS/JS minification, compilation, unit testing, and JS linting
- Yeoman (yo): A CLI tool to scaffold web projects; it automates directory and file creation through generators and also provides command lines for common tasks
- Travis CI: A continuous integration tool that runs your test suites every time you commit to the repository
- EditorConfig: An IDE plugin that loads its configuration from an .editorconfig file; for example, you can set indent_size = 2 and indent with spaces or tabs; it's a time saver and helps maintain consistency across multiple IDEs and teams
- SocketIO: A library that enables real-time bidirectional communication between the server and the client
- Bootstrap: A frontend framework for web development; we are going to use it to build the theme throughout this project
- AngularJS full-stack: A Yeoman generator that provides useful command lines to quickly generate server/client code and deploy it to Heroku or OpenShift
- BabelJS: A JavaScript-to-JavaScript compiler that allows you to use features from the next generation of JavaScript (ECMAScript 6) today, without waiting for browser support
- Git: A distributed version control system

Package managers

We have package managers for our third-party backend and frontend modules. They are as follows:

- NPM: The default package manager for NodeJS
- Bower: The frontend package manager, used to handle the versions and dependencies of the libraries and assets used in a web project; the bower.json file contains the packages and versions to install, and the .bowerrc file contains the path where those packages are to be installed (the default directory is ./bower_components)

Bower packages

If you have followed the exact steps to scaffold our app, you will have the following frontend components installed:

- angular
- angular-cookies
- angular-mocks
- angular-resource
- angular-sanitize
- angular-scenario
- angular-ui-router
- angular-socket-io
- angular-bootstrap
- bootstrap
- es5-shim
- font-awesome
- json3
- jquery
- lodash
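As a concrete illustration of the two Bower files described above, here is a sketch; the package versions and the client/bower_components path are assumptions for this example, not necessarily what the generator writes.

A bower.json declaring the frontend packages and their versions:

{
  "name": "meanshop",
  "dependencies": {
    "angular": "~1.4.0",
    "bootstrap": "~3.3.0",
    "lodash": "~3.10.0"
  }
}

And a matching .bowerrc overriding the default ./bower_components install path:

{
  "directory": "client/bower_components"
}

Running bower install then downloads the listed packages into the configured directory.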
Previewing the final e-commerce app

Let's take a pause from the terminal. In any project, before starting to code, we need to spend some time planning and visualizing what we are aiming for. That's exactly what we are going to do: draw some wireframes that walk us through the app. Our e-commerce app, MEANshop, will have three main sections:

- Homepage
- Marketplace
- Back-office

Homepage

The home page will contain featured products, navigation, menus, and basic information, as you can see in the following image:

Figure 2 - Wireframe of the homepage

Marketplace

This section will show all the products, categories, and search results.

Figure 3 - Wireframe of the products page

Back-office

You need to be a registered user to access the back-office section, as shown in the following figure:

Figure 4 - Wireframe of the login page

After you log in, it will present you with different options depending on your role. If you are a seller, you can create new products, such as the following:

Figure 5 - Wireframe of the product creation page

If you are an admin, you can do everything that a seller does (create products), plus you can manage all the users and delete or edit products.

Understanding requirements for e-commerce applications

There's no better way to learn new concepts and technologies than to develop something useful with them. This is why we are building a real-time e-commerce application from scratch. However, there are many kinds of e-commerce apps, so in the following sections we will delimit what we are going to do.

Minimum viable product for an e-commerce site

Even the largest applications that we see today started small and grew their way up. The minimum viable product (MVP) is strictly the minimum that an application needs in order to work. In the e-commerce example, that is:

- Adding products with title, price, description, photo, and quantity
- A guest checkout page for products
- One payment integration (for example, PayPal)

This is strictly the minimum requirement to get an e-commerce site working. We are going to start with these, but by no means will we stop there. We will keep adding features as we go and build a framework that will allow us to extend the functionality with high quality.

Defining the requirements

We are going to capture the requirements for the e-commerce application with user stories. A user story is a brief description of a feature told from the perspective of a user, expressing their desire and the benefit they get, in the following format:

As a <role>, I want <desire> [so that <benefit>]

User stories and many other concepts were introduced with the Agile Manifesto. Learn more at https://en.wikipedia.org/wiki/Agile_software_development

Here are the features that we are planning to develop through this book, captured as user stories:

- As a seller, I want to create products.
- As a user, I want to see all published products and their details when I click on them.
- As a user, I want to search for a product so that I can find what I'm looking for quickly.
- As a user, I want to have a category navigation menu so that I can narrow down the search results.
- As a user, I want to have real-time information so that I know immediately if a product just got sold out or became available.
- As a user, I want to check out products as a guest user so that I can quickly purchase an item without registering.
- As a user, I want to create an account so that I can save my shipping addresses, see my purchase history, and sell products.
- As an admin, I want to manage user roles so that I can create new admins and sellers, and remove seller permissions.
- As an admin, I want to manage all the products so that I can ban them if they are not appropriate.
- As an admin, I want to see a summary of the activities and order statuses.

All these stories might seem verbose, but they are useful for capturing requirements in a consistent way. They are also handy for developing test cases against.

Summary

Now that we have the gist of an e-commerce app with MEAN, let's build a full-fledged e-commerce project with Building an E-Commerce Application with MEAN.

Resources for Article:

Further resources on this subject:
- Introduction to Couchbase [article]
- Protecting Your Bitcoins [article]
- DynamoDB Best Practices [article]
Architecture of Backbone

Packt
04 Nov 2015
18 min read
In this article by Abiee Echamea, author of the book Mastering Backbone.js, you will see that one of the best things about Backbone is the freedom to build applications with the libraries of your choice: no batteries included. Backbone is not a framework but a library, and building applications with it can be challenging because no structure is provided. The developer is responsible for code organization and for wiring the pieces of code across the application; it's a big responsibility. Bad decisions about code organization can lead to buggy and unmaintainable applications that nobody wants to see.

In this article, you will learn the following topics:

- Delegating the right responsibilities to Backbone objects
- Splitting the application into small and maintainable scripts

(For more resources related to this topic, see here.)

The big picture

We can split the application into two big logical parts. The first is an infrastructure part, or root application, which is responsible for providing common components and utilities to the whole system. It has handlers to show error messages, activate menu items, manage breadcrumbs, and so on. It also owns common views, such as dialog layouts or a loading progress bar.

The root application is the main entry point to the system. It bootstraps the common objects, sets the global configuration, instantiates routers, attaches general services to a global application, renders the main application layout at the body element, sets up third-party plugins, starts the Backbone history, and instantiates, renders, and initializes components such as a header or breadcrumb. However, the root application itself does nothing; it is just the infrastructure that provides services to the other parts, which we can call subapplications or modules.

Subapplications are small applications that run business-value code; it's where the real work happens. Subapplications focus on a specific domain area, for example invoices, mailboxes, or chats, and should be decoupled from the other applications. Each subapplication has its own router, entities, and views. To decouple the subapplications from the root application, communication happens through a message bus, implemented with Backbone.Events or the Backbone.Radio plugin, so that services are requested from the application by triggering events instead of calling methods on an object.

Figure 1.1 shows a component diagram of the application. As you can see, the root application depends on the routers of the subapplications, because Backbone.history requires all the routers to be instantiated before its start method is called, and the root application is what does this. Once Backbone.history is started, the browser's URL is processed and a route handler in a subapplication is triggered; this is the entry point for the subapplications. Additionally, a default route can be defined in the root application for any route that is not handled by the subapplications.

Figure 1.1: Logical organization of a Backbone application

When you build Backbone applications in this way, you know exactly which object has which responsibility, so debugging and improving the application is easier. Remember, divide and conquer. You also make your code more testable, improving its robustness.
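To make the message-bus idea concrete, here is a minimal sketch of event-driven communication built on Backbone.Events. The event name and handler body are illustrative assumptions, not code from the book; it only shows the trigger-instead-of-call pattern described above:

// Assumes Backbone and Underscore are loaded.
// The root application doubles as a global event bus.
var App = _.extend({}, Backbone.Events);

// The root application provides a service by listening on the bus.
App.on("server:error", function (response) {
  // ...render a friendly error view here, for example a 500 page
  console.error("Server error:", response.status);
});

// A subapplication requests the service without knowing who handles it.
App.trigger("server:error", {status: 500});

The same App.trigger("server:error", response) call appears later in this article's ContactsApp facade, which is exactly this pattern in use.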
Responsibilities of the Backbone objects

One of the biggest issues with the Backbone documentation is that it gives no clues about how its objects are meant to be used. Developers have to figure out the responsibilities of each object across the application, and even with some Backbone experience this is not an easy task. The next sections describe the best uses for each Backbone object. In this way, you will have a clearer idea about the scope of responsibilities in Backbone, and this will be the starting point for designing our application architecture. Keep in mind that Backbone is a library of foundation objects, so you will need to bring your own objects and structure to make an awesome Backbone application.

Models

This is the place where the general business logic lives; specific business logic should be placed elsewhere. General business logic covers the rules that are so general they can be used in multiple use cases, while specific business logic is a use case itself. Let's imagine a shopping cart. A model can be an item in the cart, and the logic behind this model can include calculating the total by multiplying the unit price by the quantity, or setting a new quantity. Now assume that the shop has a business rule that a customer can buy the same product only three times. This is a specific business rule because it is specific to this business; how many stores do you know with this rule? Rules like these belong elsewhere and should be kept out of models.

It's also a good idea to validate the model data before sending requests to the server. Backbone helps us with the validate method here, so it's reasonable to put validation logic in models too. Models often synchronize data with the server, so direct server calls, such as AJAX calls, should be encapsulated at the model level. Models are the most basic pieces of information and logic; keep this in mind.
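Here is the cart-item example from the paragraph above as a short sketch; the CartItem name and its attributes are hypothetical, chosen only to illustrate where general logic and validation live:

// General business logic (totals) and validation live on the model.
var CartItem = Backbone.Model.extend({
  defaults: {
    unitPrice: 0,
    quantity: 1
  },

  // A general rule: reusable in any use case that needs a line total.
  getTotal: function () {
    return this.get("unitPrice") * this.get("quantity");
  },

  // Backbone calls validate on save, and on set with {validate: true};
  // returning anything marks the model as invalid.
  validate: function (attrs) {
    if (attrs.quantity <= 0) {
      return "Quantity must be greater than zero";
    }
  }
});

The three-purchases-per-product rule from the text would not go here; being use-case specific, it belongs in a subapplication object, which we will meet shortly.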
Collections

Consider collections as data repositories, similar to a database. Collections are often used to fetch data from the server and render their contents as lists or tables. It's not usual to see business logic here. Resource servers have different ways of dealing with lists of resources; for instance, while some servers accept a skip parameter for pagination, others have a page parameter for the same purpose. Another case is responses: one server may respond with a plain array, while another prefers to send an object with a data, list, or some other key under which an array of objects is placed. There is no standard way. Collections can deal with these issues, making server requests transparent to the rest of the application.

Views

Views have the responsibility of handling the Document Object Model (DOM). Views work closely with the template engines, rendering the templates and putting the results into the DOM. They listen for low-level events using the jQuery API and transform them into domain events. Views abstract the user interactions, transforming the user's actions into data structures for the application. For example, clicking on a save button in a form view creates a plain object with the information from the inputs and triggers a domain event such as save:contact with this object attached. A domain-specific object can then apply domain logic to the data and show a result. Business logic in views should be avoided, but basic form validations are allowed, such as accepting only numbers; complex validations should still be done on the model.

Routers

Routers have a simple responsibility: listening for URL changes in the browser and transforming them into a call to a handler. A router knows which handler to call for a given URL, and it also decodes the URL parameters and passes them to the handlers. The root application bootstraps the infrastructure, but the routers decide which subapplication will be executed. In this way, routers are a kind of entry point.

Domain objects

It is possible to develop Backbone applications using only the Backbone objects described in the previous sections, but for a medium-to-large application it's not sufficient. We need to introduce a new kind of object with well-delimited responsibilities that uses and coordinates the Backbone foundation objects.

Subapplication facade

This object is the public interface of a subapplication. Any interaction with the subapplication should be done through its methods; direct calls to the internal objects of the subapplication are discouraged. Typically, the methods of this object are called from the router, but they can be called from anywhere. Its main responsibility is to simplify the subapplication internals: it fetches the data from the server through models or collections and, if an error occurs during the process, shows an error message to the user. Once the data is loaded into a model or collection, it creates a subapplication controller that knows which views should be rendered and has the handlers to deal with their events. In short, the subapplication facade transforms a URL request into a Backbone data object, shows the right error messages, creates a subapplication controller, and delegates control to it.

The subapplication controller or mediator

This object acts as an air traffic controller for the views, models, and collections. Given a Backbone data object, it instantiates and renders the appropriate views and then coordinates them. However, the coordination task is not easy in complex layouts. For loose-coupling reasons, a view cannot call the methods or events of other views directly. Instead, a view triggers an event and the controller handles the event and orchestrates the behavior of the views, if necessary. Note how the views are isolated, handling just their own portion of the DOM and triggering events when they need to communicate something. Business logic for simple use cases can be implemented here, but more complex interactions need another strategy. This object implements the mediator pattern, allowing the other basic objects, such as views and models, to stay simple and loosely coupled.

The logic workflow

The application starts by bootstrapping the common components, then initializes all the routers available for the subapplications and starts Backbone.history; see Figure 1.2. After initialization, the URL in the browser triggers a route of a subapplication, and a route handler instantiates a subapplication facade object and calls the method that knows how to handle the request. The facade creates a Backbone data object, such as a collection, and fetches the data from the server by calling its fetch method. If an error occurs while fetching the data, the subapplication facade asks the root application to show the error, for example a 500 Internal Server Error.
Figure 1.2: Abstract architecture for subapplications

Once the data is in a model or collection, the subapplication facade instantiates the subapplication object that knows the business rules for the use case and passes the model or collection to it. Then it renders one or more views with the information from the model or collection and places the results in the DOM. The views listen for DOM events, for example click, and transform them into higher-level events to be consumed by the application object. The subapplication object listens for events on the models and views and coordinates them when an event is triggered. When the business rules are not too complex, they can be implemented in this application object, for example deleting a model. Models and views can be kept in sync with the Backbone events, or with a binding library such as Backbone.Stickit.

In the next section, we will describe this process step by step with code examples for a better understanding of the concepts explained.

Route handling

The entry point for a subapplication is given by its routes, which ideally share the same namespace. For instance, a contacts subapplication can have these routes:

- contacts: Lists all the available contacts
- contacts/page/:page: Paginates the contacts collection
- contacts/new: Shows a form to create a new contact
- contacts/view/:id: Shows a contact given its ID
- contacts/edit/:id: Shows a form to edit a contact

Note how all the routes start with the contacts prefix. It's good practice to use the same prefix for all the routes of a subapplication and to avoid mixing in routes from other subapplications; this way, the user knows where he/she is in the application, and you get a clean separation of responsibilities.

When the user points the browser at one of these routes, a route handler is triggered. The handler function parses the URL request and delegates it to the subapplication object, as follows:

var ContactsRouter = Backbone.Router.extend({
  routes: {
    "contacts": "showContactList",
    "contacts/page/:page": "showContactList",
    "contacts/new": "createContact",
    "contacts/view/:id": "showContact",
    "contacts/edit/:id": "editContact"
  },

  showContactList: function(page) {
    // Validate and normalize the URL parameter before delegating
    page = page || 1;
    page = page > 0 ? page : 1;

    var region = new Region({el: '#main'});
    var app = new ContactsApp({region: region});
    app.showContactList(page);
  },

  createContact: function() {
    var region = new Region({el: '#main'});
    var app = new ContactsApp({region: region});
    app.showNewContactForm();
  },

  showContact: function(contactId) {
    var region = new Region({el: '#main'});
    var app = new ContactsApp({region: region});
    app.showContactById(contactId);
  },

  editContact: function(contactId) {
    var region = new Region({el: '#main'});
    var app = new ContactsApp({region: region});
    app.showContactEditorById(contactId);
  }
});

The validation of the URL parameters should be done in the router, as shown in the showContactList method. Once the validation is done, ContactsRouter instantiates an application object, ContactsApp, which is a facade for the contacts subapplication; finally, ContactsRouter calls an API method to handle the user request. The router doesn't know anything about business logic; it just knows how to decode URL requests and which object to call to handle them. The region object passed to the application points to an existing DOM node and tells the application where it should be rendered.
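One note on the snippet above: Region is not a Backbone core object. Here is a minimal sketch that is consistent with how the router uses it; the book's actual implementation may differ:

// A Region owns a DOM node and swaps views in and out of it.
// This is an illustrative sketch, not the book's implementation.
var Region = function (options) {
  this.el = options.el;
};

Region.prototype.show = function (view) {
  // Remove the previous view, if any, so its event bindings are cleaned up.
  if (this.currentView) {
    this.currentView.remove();
  }
  this.currentView = view;
  view.render();
  Backbone.$(this.el).empty().append(view.el);
};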
The subapplication facade

A subapplication is composed of smaller pieces that handle specific use cases. In the case of the contacts app, a use case can be seeing a contact, creating a new contact, or editing a contact. The implementation of these use cases is split into different objects that handle the views, events, and business logic for one specific use case. The facade basically fetches the data from the server, handles connection errors, and creates the objects needed for the use case, as shown here:

function ContactsApp(options) {
  this.region = options.region;

  this.showContactList = function(page) {
    // Ask the root application to show a loading widget while fetching
    App.trigger("loading:start");
    new ContactCollection().fetch({
      success: _.bind(function(collection, response, options) {
        this._showList(collection);
        App.trigger("loading:stop");
      }, this),
      error: function(collection, response, options) {
        App.trigger("loading:stop");
        App.trigger("server:error", response);
      }
    });
  };

  this._showList = function(contacts) {
    var contactList = new ContactList({region: this.region});
    contactList.showList(contacts);
  };

  this.showNewContactForm = function() {
    // No server call is needed for a blank contact
    this._showEditor(new Contact());
  };

  this.showContactEditorById = function(contactId) {
    App.trigger("loading:start");
    new Contact({id: contactId}).fetch({
      success: _.bind(function(model, response, options) {
        this._showEditor(model);
        App.trigger("loading:stop");
      }, this),
      error: function(model, response, options) {
        App.trigger("loading:stop");
        App.trigger("server:error", response);
      }
    });
  };

  this._showEditor = function(contact) {
    var contactEditor = new ContactEditor({region: this.region});
    contactEditor.showEditor(contact);
  };

  this.showContactById = function(contactId) {
    App.trigger("loading:start");
    new Contact({id: contactId}).fetch({
      success: _.bind(function(model, response, options) {
        this._showViewer(model);
        App.trigger("loading:stop");
      }, this),
      error: function(model, response, options) {
        App.trigger("loading:stop");
        App.trigger("server:error", response);
      }
    });
  };

  this._showViewer = function(contact) {
    var contactViewer = new ContactViewer({region: this.region});
    contactViewer.showContact(contact);
  };
}

The simplest handler is showNewContactForm, which is called when the user wants to create a new contact. It creates a new Contact object and passes it to the _showEditor method, which renders an editor for a blank contact. The handler doesn't need to know how to do this, because the ContactEditor application will do the job.

The other handlers follow the same pattern: trigger an event for the root application to show a loading widget while the data is fetched from the server; once the server responds successfully, call another method to handle the result; if an error occurs during the operation, trigger an event for the root application to show a friendly error to the user.

Handlers receive an object and create an application object that renders a set of views and handles the user interactions. The created object responds to the actions of the user. Imagine the object handling a form to save a contact: when the user clicks on the save button, it handles the save process, perhaps shows a message such as "Are you sure you want to save the changes?", and takes the right action.

The subapplication mediator

The responsibility of the subapplication mediator object is to render the required layout and views to be shown to the user. It knows which views need to be rendered and in which order, so it instantiates the views with their models if needed and puts the results into the DOM.
After rendering the necessary views, it listens for user interactions as Backbone events triggered from the views; methods on the object handle the interactions as described in the use cases. The mediator pattern is applied in this object to coordinate efforts between the views. For example, imagine we have a form with contact data: as the user types into the edit form, another view renders a preview business card for the contact. In this case, the form view notifies the application object about changes, and the application object tells the business-card view to use the new set of data each time. As you can see, the views stay decoupled, and this is the objective of the application object.

The following snippet shows the application that displays a list of contacts. It creates a ContactListView view, which knows how to render a collection of contacts, and passes it the contacts collection to be rendered:

var ContactList = function(options) {
  _.extend(this, Backbone.Events);
  this.region = options.region;

  this.showList = function(contacts) {
    var contactList = new ContactListView({
      collection: contacts
    });
    this.region.show(contactList);
    this.listenTo(contactList, "item:contact:delete", this._deleteContact);
  };

  this._deleteContact = function(contact) {
    if (confirm('Are you sure?')) {
      contact.collection.remove(contact);
    }
  };

  this.close = function() {
    // Stop listening to view events to avoid leaks when closing
    this.stopListening();
  };
};

The ContactListView view is responsible for transforming this into DOM nodes and responding to collection events, such as adding a new contact or removing one. Once the view is initialized, it is rendered on the previously specified region. When the view is finally in the DOM, the application listens for the "item:contact:delete" event, which is triggered when the user clicks on the delete button rendered for each contact.

To see a contact, a ContactViewer application is responsible for managing the use case, as follows:

var ContactViewer = function(options) {
  _.extend(this, Backbone.Events);
  this.region = options.region;

  this.showContact = function(contact) {
    var contactView = new ContactView({model: contact});
    this.region.show(contactView);
    this.listenTo(contactView, "contact:delete", this._deleteContact);
  };

  this._deleteContact = function(contact) {
    if (confirm("Are you sure?")) {
      contact.destroy({
        success: function() {
          // Return to the contact list once the contact is gone
          App.router.navigate("/contacts", true);
        },
        error: function() {
          alert("Something went wrong");
        }
      });
    }
  };
};

It's the same situation: the application creates a view that manages the DOM interactions, renders it on the specified region, and listens for events. From the details view of a contact, users can delete it. Similar to the list, a _deleteContact method handles the event, but with a difference: when the contact is deleted, the application redirects to the list of contacts, which is the expected behavior. You can see how the handler uses the root application infrastructure by calling the navigate method of the global App.router.

The handlers for creating or editing contacts are very similar, so the same ContactEditor can be used for both cases.
This object shows a form to the user and waits for the save action, as shown in the following code:

var ContactEditor = function(options) {
  _.extend(this, Backbone.Events);
  this.region = options.region;

  this.showEditor = function(contact) {
    var contactForm = new ContactForm({model: contact});
    this.region.show(contactForm);
    this.listenTo(contactForm, "contact:save", this._saveContact);
  };

  this._saveContact = function(contact) {
    // save(attrs, options): pass null as attrs so that the
    // callbacks are read from the options argument
    contact.save(null, {
      success: function() {
        alert("Successfully saved");
        App.router.navigate("/contacts");
      },
      error: function() {
        alert("Something went wrong");
      }
    });
  };
};

In this case, the model can have modifications in its data. In simple layouts, the views and the model can work nicely with model-view data bindings, so no extra code is needed; here we assume that the model is updated as the user puts information into the form, for example with Backbone.Stickit.

When the save button is clicked, a "contact:save" event is triggered and the application responds with the _saveContact method. See how the method issues a save call on the standard Backbone model and waits for the result. On a successful request, a message is displayed and the user is redirected to the contact list. On an error, a message tells the user that the application found a problem while saving the contact.

The implementation details of the views are outside the scope of this article, but the snippets in this section give a good idea of the work done by these objects.

Summary

In this article, we started by describing, in a general way, how a Backbone application works. It has two main parts: a root application and subapplications. The root application provides common infrastructure to the other, smaller and more focused applications that we call subapplications. Subapplications are loosely coupled with the other subapplications and own their resources, such as views, controllers, and routers. A subapplication manages a small part of the system and no more. Communication between the subapplications and the root application happens through an event-driven bus, such as Backbone.Events or Backbone.Radio. Users interact with the application through the views that a subapplication renders. A subapplication mediator orchestrates the interactions between the views, models, and collections; it also handles business logic, such as saving or deleting a resource.

Resources for Article:

Further resources on this subject:
- Object-Oriented JavaScript with Backbone Classes [article]
- Building a Simple Blog [article]
- Marionette View Types and Their Use [article]

Upgrading from Magento 1

Packt
03 Nov 2015
4 min read
In Magento 2 Development Cookbook, Bart Delvaux provides you with a wide range of techniques to modify and extend the functionality of your online store. It contains easy-to-understand recipes, starting with the basics and moving on to cover advanced topics. Many recipes work with code examples that can be downloaded from the book's website.

(For more resources related to this topic, see here.)

Why Magento 2

- Solve common problems encountered while extending your Magento 2 store to fit your business needs.
- Enjoy exciting and enhanced features of Magento 2, such as customizable security permissions, intelligent filtered search options, and easy third-party integration, among others.
- Learn to build and maintain a Magento 2 shop via a visual-based page editor and customize the look and feel using Magento 2 offerings on the go.

What this article covers

This article covers preparing an upgrade from Magento 1.

Preparing an upgrade from Magento 1

The differences between Magento 1 and Magento 2 are big. The code has a whole new structure with a lot of improvements, but there is one big question: what do I do if I want to upgrade my Magento 1 shop to Magento 2? Magento created an upgrade tool that migrates the data of a Magento 1 database to the right structure for a Magento 2 database. However, the custom modules in your Magento 1 shop will not work in Magento 2. Some of your modules may have a Magento 2 version and, depending on the module, the module author may provide a migration tool for the data that belongs to the module.

Getting ready

Before we get started, make sure you have an empty (without sample data) Magento 2 installation of the same version as the migration tool, which is available at https://github.com/magento/data-migration-tool-ce.

How to do it

In your Magento 2 installation (with the same version as the migration tool), run the following commands:

composer config repositories.data-migration-tool git https://github.com/magento/data-migration-tool-ce
composer require magento/data-migration-tool:dev-master

Install Magento 2 with an empty database by running the installer. Make sure you configure it with the right time zone and currencies. When these steps are done, you can test the tool by running the following command:

php vendor/magento/data-migration-tool/bin/migrate

This command will print the usage of the command. The next thing is creating the configuration files. Examples of the configuration files are in the vendor/magento/data-migration-tool/etc/<version> folder. We can create a copy of this folder where we can set our custom configuration values. For a Magento 1.9 installation, we have to run the following cp command:

cp -R vendor/magento/data-migration-tool/etc/ce-to-ce/1.9.1.0/ vendor/magento/data-migration-tool/etc/ce-to-ce/packt-migration

Open the vendor/magento/data-migration-tool/etc/ce-to-ce/packt-migration/config.xml.dist file and search for the source/database and destination/database tags.
Change the values of these database settings to match your own databases, as in the following code:

<source>
    <database host="localhost" name="magento1" user="root"/>
</source>
<destination>
    <database host="localhost" name="magento2_migration" user="root"/>
</destination>

Rename that file to config.xml with the following command:

mv vendor/magento/data-migration-tool/etc/ce-to-ce/packt-migration/config.xml.dist vendor/magento/data-migration-tool/etc/ce-to-ce/packt-migration/config.xml

How it works

By adding a composer dependency, we installed the data migration tool for Magento 2 in the codebase. This migration tool is a PHP command-line script that handles the migration steps from a Magento 1 shop. In the etc folder of the migration module, there is an example configuration for an empty Magento 1.9 shop. If you want to migrate an existing Magento 1 shop, you have to customize these configuration files so that they match your preferred state. In the next recipe, we will learn how to use the script to start the migration.

Summary

In this article, we learned how to prepare an upgrade from Magento 1. Read Magento 2 Development Cookbook to gain detailed knowledge of Magento 2 workflows, explore use cases for advanced features, craft well-thought-out orchestrations, troubleshoot unexpected behavior, and extend Magento 2 through customizations. Other related titles are:

- Magento: Beginner's Guide - Second Edition
- Mastering Magento
- Magento: Beginner's Guide
- Mastering Magento Theme Design

Resources for Article:

Further resources on this subject:
- Creating a Responsive Magento Theme with Bootstrap 3 [article]
- Social Media and Magento [article]
- Optimizing Magento Performance - Using HHVM [article]